2. Building the k8s cluster environment - installing and deploying Docker - installing the k8s component (kubelet) - cluster initialization - configuring the flannel network
Kubernetes clusters fall into two categories:
·One master, multiple nodes: a single master node plus multiple worker nodes; simple to set up, but the master is a single point of failure; suited to test environments
·Multiple masters, multiple nodes: several master nodes plus multiple worker nodes; more involved to set up, but highly available; suited to production environments
===================================一主多从搭建===================================
Installation methods:
·minikube: a tool for quickly standing up a single-node Kubernetes
·kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
·Binary packages: download each component's binaries from the official site and install them one by one; this approach is the most instructive for understanding the Kubernetes components
=============================1. Setting up the Kubernetes environment===================================
Setting up the k8s environment:
#Perform all of the following steps on all three hosts:
1. Prepare three servers: one master, multiple nodes
2. Check the CentOS version; it must be 7.5 or later: cat /etc/redhat-release
=============================
CentOS Linux release 7.9.2009 (Core)
=============================
3. Add hostname resolution for the three hosts
4. Synchronize the clock (against network time) - first install the chrony tool: yum install -y chrony
5. Start the time-sync service and enable it at boot: systemctl start chronyd; systemctl enable chronyd
6. Disable iptables (skip this if it is not installed)
7. Disable SELinux
8. Disable the swap partition (if swap is needed and cannot be disabled, this must be declared explicitly via configuration parameters during the k8s cluster installation): edit the file with vim /etc/fstab
============================The change only takes effect after a reboot
#comment out the swap partition line
#/dev/mapper/centos-swap swap swap defaults 0 0
============================
9. Adjust the Linux kernel network parameters - create a new file: vim /etc/sysctl.d/kubernetes.conf
==================================================
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
#Enables iptables processing for IPv6 packets crossing bridge devices. When set to 1, IPv6 packets are handed to iptables, so IPv6 firewall rules apply to bridged traffic
#Enables iptables processing for IPv4 packets crossing bridge devices. When set to 1, IPv4 packets are handed to iptables, so IPv4 firewall rules apply to bridged traffic
#Enables IP forwarding in the Linux kernel. When set to 1, the kernel forwards packets received on one network interface out another, providing routing between networks.
==================================================
10. Reload the configuration: sysctl -p
11. Load the bridge filtering module: modprobe br_netfilter    #loads the br_netfilter kernel module, the Linux kernel's network bridge filter, which provides packet filtering for bridge devices
12. Check that the bridge filtering module loaded successfully: lsmod | grep br_netfilter
==============================================
br_netfilter           22256  0
bridge                151336  1 br_netfilter
==============================================
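A quick read-back (a small sketch, not one of the original steps) to confirm the three kernel parameters actually took effect:
==================================================
# each key should print "= 1"; the key names come from the kubernetes.conf file above
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
==================================================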
Configuring IPVS:
·In Kubernetes, a Service can use one of two proxy models: one based on iptables and one based on IPVS. Of the two, IPVS has clearly better performance, but to use it the IPVS kernel modules must be loaded manually
13. Install the IPVS tools: yum install -y ipset ipvsadm
14. Write the modules to be loaded into a script file - network module setup for IP load balancing - run the following directly:
=====================================================
[root@k8s-master ~]# cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
=======================================================
15. Make the script executable: chmod +x /etc/sysconfig/modules/ipvs.modules
16. Run the script: /bin/bash /etc/sysconfig/modules/ipvs.modules
17. Check that the modules loaded successfully: lsmod | grep -e ip_vs -e nf_conntrack_ipv4
===============================================================
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
============================================================
18. Reboot the Linux system
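Once the cluster is running (after section 5), IPVS mode can be verified with the ipvsadm tool installed in step 13; a hedged sketch, not a step from the original text:
=====================================================
# list the IPVS virtual servers that kube-proxy has programmed;
# a non-empty table confirms kube-proxy is really in IPVS mode
ipvsadm -Ln
# the kernel modules can be re-checked at any time:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
=====================================================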
====================================2. Installing Docker=============================
Installing Docker
1. Download Aliyun's docker-ce repo file: wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo    //downloads the repo file to the specified location
2. Refresh the yum cache: yum makecache fast
3. List the available docker-ce versions: yum list docker-ce --showduplicates
4. Install the pinned Docker version: yum install -y --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7    //--setopt=obsoletes=0 stops yum from installing the newest package, so the specified version is used
5. Create the Docker config file daemon.json (it points Docker at a domestic registry mirror, the Aliyun accelerator): mkdir /etc/docker
======================================
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://gwsg6nw9.mirror.aliyuncs.com"]
}
EOF
=======================================
6. Start Docker: systemctl start docker.service
7. Check the Docker version: docker version
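Since daemon.json sets native.cgroupdriver=systemd, it is worth confirming Docker picked it up after starting; a minimal check plus an optional enable-at-boot step (an addition, not from the original list):
======================================
docker info | grep Cgroup    # should print: Cgroup Driver: systemd
systemctl enable docker.service    # so Docker survives the reboots later in this guide
======================================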
=================================3. Installing the Kubernetes components============================
Installing the Kubernetes components: perform the following identically on all three hosts
1. Create the repo file:
==================================================================================================================
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
===================================================================================================================
2. Install the pinned component versions: yum install -y --setopt=obsoletes=0 kubelet-1.17.4-0 kubeadm-1.17.4-0 kubectl-1.17.4-0
==========================================================================================================================
- kubelet: kubelet is one of the main components that runs on every node in a Kubernetes cluster. It manages the containers on its node, covering the whole container lifecycle: creation, start, stop, and monitoring.
- kubeadm: kubeadm is a Kubernetes command-line tool for initializing and managing a cluster. It helps you quickly set up a single-node or multi-node Kubernetes cluster.
- kubectl: kubectl is the Kubernetes command-line client for interacting with the cluster. With kubectl you can deploy and manage applications, inspect cluster state, run commands, and debug problems.
===========================================================================================================================
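A quick sanity check (a sketch, not an original step) that all three components landed at the pinned version:
==========================================
kubelet --version            # expect: Kubernetes v1.17.4
kubeadm version              # expect: GitVersion:"v1.17.4"
kubectl version --client     # expect: GitVersion:"v1.17.4"
==========================================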
3. Configure the kubelet environment variables: vim /etc/sysconfig/kubelet
=====================================
#delete the existing contents and add the lines below
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
#Note: ------------------
#KUBELET_CGROUP_ARGS="--cgroup-driver=systemd" is an environment variable that configures the kubelet. --cgroup-driver=systemd makes the kubelet use systemd as its cgroup driver. cgroups are a Linux kernel mechanism for limiting, controlling, and monitoring the resource usage of groups of processes.
#KUBE_PROXY_MODE="ipvs" is an environment variable that configures the Kubernetes proxy. ipvs selects IPVS (IP Virtual Server) as the proxy mode. IPVS is a load-balancing technology in the Linux kernel that distributes inbound traffic across multiple backend service instances for load balancing and high availability.
=====================================
4. Enable kubelet at boot: systemctl enable kubelet.service
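The KUBE_PROXY_MODE setting only becomes observable once the cluster is up; after section 5 the effective mode can be read from the kube-proxy ConfigMap that kubeadm creates. A hedged sketch, assuming the standard kubeadm layout:
=====================================
# look for 'mode: "ipvs"' in the kube-proxy configuration
kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
=====================================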
=========================4. Preparing the cluster images===========================
#Perform this step on both the master and the node hosts
1. List the images the Kubernetes cluster needs: kubeadm config images list
=======================================
#These are the 7 images the cluster requires; each must be downloaded
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
======================================
2. Download the images (you could run docker pull <image> one at a time): here we loop over a list instead (these images live in the k8s registry, which is hosted abroad and unreachable)
====================================
·Create the image list by defining an images array:
[root@k8s-node1 ~]# images=(
kube-apiserver:v1.17.4
kube-controller-manager:v1.17.4
kube-scheduler:v1.17.4
kube-proxy:v1.17.4
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)
·Download each image in a for loop (temporarily substitute the Aliyun registry for the k8s one, then remove the Aliyun tag afterwards):
[root@k8s-node1 ~]# for imageName in ${images[@]} ; do    //iterate over the images array, assigning each entry to imageName
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/"$imageName"    //pull the image from Aliyun
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/"$imageName" "k8s.gcr.io/$imageName"    //docker tag gives the image the alias "k8s.gcr.io/$imageName"
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/"$imageName"    //remove the image tagged registry.cn-hangzhou.aliyuncs.com/google_containers/"$imageName"
done
=======================================
3. List the images: docker images
===============================================
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.4 6dec7cfde1e5 3 years ago 116MB
k8s.gcr.io/kube-apiserver v1.17.4 2e1ba57fe95a 3 years ago 171MB
k8s.gcr.io/kube-controller-manager v1.17.4 7f997fcf3e94 3 years ago 161MB
k8s.gcr.io/kube-scheduler v1.17.4 5db16c1c7aff 3 years ago 94.4MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 3 years ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 3 years ago 288MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 5 years ago 742kB
===================================================
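A small verification loop (a sketch reusing the same seven image names as above) to confirm every required image is now present under its k8s.gcr.io alias:
===============================================
for imageName in kube-apiserver:v1.17.4 kube-controller-manager:v1.17.4 kube-scheduler:v1.17.4 kube-proxy:v1.17.4 pause:3.1 etcd:3.4.3-0 coredns:1.6.5 ; do
  # docker image inspect exits non-zero when the image is absent
  docker image inspect "k8s.gcr.io/$imageName" > /dev/null 2>&1 || echo "missing: k8s.gcr.io/$imageName"
done
===============================================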
=====================5. Cluster initialization==========================
#Perform this step on the master host only
#Next we initialize the cluster and join the node hosts to it
1. Initialize the cluster - enter on the command line:
kubeadm init \
--kubernetes-version=v1.17.4 \    #k8s version number
--pod-network-cidr=10.244.0.0/16 \    #default pod network
--service-cidr=10.96.0.0/12 \    #default service network
--apiserver-advertise-address=192.168.177.151    #master host IP
Note:
●apiserver-advertise-address must be the host's IP address.
●apiserver-advertise-address, service-cidr, and pod-network-cidr must not fall within the same network range.
●Do not use the 172.17.0.1/16 range, because that is what Docker uses by default.
-------------------------------------------------------------------------
Note: if initialization fails here or you entered something incorrectly, you must re-initialize:
1. First run: kubeadm reset
2. rm -rf /etc/kubernetes/*
3. rm -rf ~/.kube/*
4. rm -rf /var/lib/etcd/*
After clearing these directories, run the kubeadm init command again (the steps are collected into one snippet below)
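The same four reset steps, collected into one snippet for convenience (identical commands, nothing new; kubeadm reset will ask for confirmation):
===============================================
kubeadm reset
rm -rf /etc/kubernetes/*
rm -rf ~/.kube/*
rm -rf /var/lib/etcd/*
# now re-run the kubeadm init command
===============================================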
-------------------------------------------------------------------------
===========================================================================================================
#Below is the output printed when the run completes
#Seeing this line means the run succeeded:
Your Kubernetes control-plane has initialized successfully!
#This hint means: to use the cluster, run the following commands
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
#This hint gives the command for deploying a pod network
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
#Join node hosts to the master (run this command on each node host)
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.177.151:6443 --token oupjt9.syc4ggcq7tvhkmha \
    --discovery-token-ca-cert-hash sha256:91251f71f34981a57d347e8b6663579c9820a6cc1b2e81a374eeba814b4ca8b4
===============================================================================================================
2. Set up cluster access: run the commands from the hint above (nodes are only listed after this step)
===============================================================
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
===============================================================
3. View the nodes: kubectl get nodes
===============================================
#No node hosts have been added yet
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   24m   v1.17.4
===============================================
4. Add the node hosts - run this command on the node hosts (192.168.177.152-153):
kubeadm join 192.168.177.151:6443 --token oupjt9.syc4ggcq7tvhkmha \
    --discovery-token-ca-cert-hash sha256:91251f71f34981a57d347e8b6663579c9820a6cc1b2e81a374eeba814b4ca8b4
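An aside not covered in the original text: the bootstrap token in the join command expires (after 24 hours by default). If a node is joined later, a fresh join command can be printed on the master with a standard kubeadm subcommand:
===============================================
# prints a complete 'kubeadm join ...' line with a newly created token
kubeadm token create --print-join-command
===============================================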
---------------------------------------------------------------------------------------
If step 4 reports the error: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver.
The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
you need to check whether the cgroup driver is systemd, using: docker info | grep Cgroup
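If it reports cgroupfs, the fix is the "exec-opts": ["native.cgroupdriver=systemd"] entry already used in daemon.json in the Docker section; a hedged sketch of applying it on the affected node:
# after adding the exec-opts entry to /etc/docker/daemon.json:
systemctl restart docker.service
docker info | grep Cgroup    # should now report: Cgroup Driver: systemd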
---------------------------------------------------------------------------------------
5. On the master host (192.168.177.151), check the node list again: kubectl get nodes
==================================================
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   31m    v1.17.4
k8s-node1    NotReady   <none>   3m2s   v1.17.4
k8s-node2    NotReady   <none>   4s     v1.17.4
==================================================
#Note: the STATUS above is NotReady because no network plugin has been configured yet
6. Install a network plugin: Kubernetes supports many network plugins, e.g. flannel, calico, canal; any one of them will do. Here we pick flannel
#Perform this step on the master node only
1. Download the flannel network configuration file:
wget https://github.com/demeter-ink/fannel_yml/blob/master/kube-flannel.yml    (it is better to download the file locally first and then copy it to the server) --- or copy the full kube-flannel.yml listed at the end of this section and upload it to the server
2. Apply the downloaded kube-flannel.yml configuration file: kubectl apply -f kube-flannel.yml
============================================================================
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Note: if this errors out, change the image addresses in the file to a domestic mirror:
you can replace quay.io with quay-mirror.qiniu.com (see the one-liner below)
============================================================================
3. Check whether the nodes' STATUS is now Ready: kubectl get nodes
==================================================
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   96m   v1.17.4
k8s-node1    Ready    <none>   67m   v1.17.4
k8s-node2    Ready    <none>   64m   v1.17.4
===================================================
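For the quay.io substitution mentioned in the note under step 2, a one-line sketch (the mirror hostname is the one the note suggests; whether it still serves images is not guaranteed):
============================================================================
sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yml
============================================================================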
If nodes still show NotReady after applying the flannel manifest, the flannel CNI plugin file is missing:
1. Check the logs: journalctl -f -u kubelet.service (you can also check the Docker logs)
   #It shows [failed to find plugin "flannel" in path [/opt/cni/bin]]. This error means Kubernetes could not find a plugin named "flannel" in the path /opt/cni/bin.
2. Download the CNI plugins package from GitHub: https://github.com/containernetworking/plugins/releases/tag/v0.8.6
3. Copy the flannel plugin file into /opt/cni/bin/ (a download sketch follows below)
Note: the node hosts must also have this flannel plugin file
Only once the flannel network is healthy can applications be deployed; otherwise they fail to start (nginx, for example)
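A hedged sketch of steps 2-3 above (the release asset name follows the upstream naming convention for v0.8.6; verify it on the release page before relying on it):
=====================================================
# download and unpack the CNI plugins (flannel included) into /opt/cni/bin on every host
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
mkdir -p /opt/cni/bin
tar -xzf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
=====================================================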
The flannel configuration file, kube-flannel.yml (rather than creating it with vim, create a text file on your local machine, paste in the content below, and upload the file to the server):
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg