kubeadm Deployment and Experiments

K8S deployment workflow:
1. Prepare the base environment: disable the firewall, SELinux and swap, update the package repositories, synchronize time, install common tools, and enable IP forwarding.
2. Deploy Harbor, and haproxy + keepalived as a highly available reverse proxy.
3. Install the specified versions of kubeadm, kubelet, kubectl and docker on all masters.
4. Install the specified versions of kubeadm, kubelet and docker on all node hosts; kubectl is optional on nodes, depending on whether cluster and pod management commands need to be run there.
5. Run the kubeadm init initialization command on a master node.
6. Verify the master node status.
7. On each node, use the kubeadm command to join it to the k8s masters (requires the token generated on a master).
8. Verify the node status.
9. Create pods and test network communication.
10. Deploy the Dashboard web service.
11. Upgrade the k8s cluster.



192.168.80.100   localhost7A.localdomain      harbor        CentOS 7.7
192.168.80.110   localhost7B.localdomain     keepalived haproxy     192.168.80.222    CentOS 7.7
192.168.80.120   localhost7C.localdomain     master      192.168.80.222    CentOS 7.7
192.168.80.130   localhost7D.localdomain     master      192.168.80.222    CentOS 7.7
192.168.80.140   localhost7E.localdomain     master      192.168.80.222    CentOS 7.7
192.168.80.150   localhost7F.localdomain     node1        CentOS 7.7
192.168.80.160   localhost7G.localdomain     node2        CentOS 7.7
192.168.80.170   localhost7H.localdomain     node3        CentOS 7.7

一、Deploy Harbor

1. Synchronize time, disable the firewall and SELinux, and enable IP forwarding
[root@localhost7A ~]# ntpdate   time1.aliyun.com && hwclock  -w
[root@localhost7A ~]# echo 1 > /proc/sys/net/ipv4/ip_forward  
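Only time sync and IP forwarding are shown above; a minimal sketch for the firewall/SELinux part of this step (assuming firewalld is the active firewall on these CentOS 7.7 hosts):
systemctl stop firewalld && systemctl disable firewalld          # stop the firewall and keep it off after reboot
setenforce 0                                                     # switch SELinux to permissive for the current boot
sed -ri 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist the change across reboots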


2. Download the YUM repository files
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo



3. Install docker and docker-compose
yum list docker-ce --showduplicates
yum install docker-ce-19.03.15-3.el7  docker-ce-cli-19.03.15-3.el7  -y
yum install docker-compose  -y
systemctl enable docker && systemctl start docker

4. Download and extract Harbor
https://github.com/goharbor/harbor/releases/download/v1.7.6/harbor-offline-installer-v1.7.6.tgz
tar xvf harbor-offline-installer-v1.7.6.tgz 
ln -sv /usr/local/src/harbor   /usr/local/
cd /usr/local/harbor/

5. Edit the Harbor configuration file
[root@localhost7A ~]# grep "^[a-Z]" /usr/local/harbor/harbor.cfg 
hostname = harbor.zzhz.com
ui_url_protocol = http
harbor_admin_password = Harbor12345

6. Run the installer
./install.sh 
 

7. Test logging in with the docker CLI
Configure every host that needs to log in to the Harbor server; the master and node hosts all have to pull images from it.
By default Harbor expects HTTPS logins; to allow plain-HTTP (insecure) logins, add the flag: --insecure-registry harbor.zzhz.com

[root@localhost7C ~] cat /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry harbor.zzhz.com
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@localhost7C ~]# systemctl daemon-reload &&  systemctl restart docker
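An equivalent way to allow plain-HTTP access to the registry, instead of editing the systemd unit, is to declare it in /etc/docker/daemon.json (a sketch; merge it with any existing keys such as registry-mirrors):
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["harbor.zzhz.com"]
}
EOF
systemctl restart docker      # daemon.json changes only require a docker restart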
[root@localhost7C ~]# docker login harbor.zzhz.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

# pull an image
[root@localhost7C ~]# docker pull nginx
[root@localhost7C ~]# docker images
REPOSITORY                                  TAG             IMAGE ID       CREATED       SIZE
nginx                                       latest          12766a6745ee   2 weeks ago   141MB

# tag it for Harbor (the baseimage project must exist in Harbor first)
[root@localhost7C ~]# docker tag 12766a6745ee  harbor.zzhz.com/baseimage/nginx:latest

# push it to Harbor
[root@localhost7C ~]# docker push harbor.zzhz.com/baseimage/nginx:latest


8. Log in to the Harbor web UI and verify

 

 

二、Deploy haproxy + keepalived as a high-availability reverse proxy

1. Synchronize time
[root@localhost7B ~]# ntpdate   time1.aliyun.com && hwclock  -w
Enable IP forwarding
[root@localhost7B ~]# echo 1 > /proc/sys/net/ipv4/ip_forward   
2. Download the YUM repository files
[root@localhost7B ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost7B ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

3. Install the packages
[root@localhost7B ~]# yum install keepalived  haproxy  -y
 
   
4. Configure keepalived
[root@localhost7B ~]# cat /etc/keepalived/keepalived.conf 

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from root@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id localhost7B
   vrrp_iptables
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   vrrp_mcast_group4 224.0.0.18
}
vrrp_instance zzhz {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 95
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass centos
    }
    virtual_ipaddress {
        192.168.80.222/24 dev eth0 label eth0:1
    }
}

5. Configure haproxy
[root@localhost7B ~]# cat /etc/haproxy/haproxy.cfg 
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http

    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats
   mode http
   bind 0.0.0.0:9999
   stats enable
   log global
   stats uri /haproxy-status
   stats auth haadmin:12345

listen k8s-6443
   bind 192.168.80.222:6443
   mode tcp
   balance roundrobin
    server 192.168.80.120 192.168.80.120:6443 check inter 2s fall 3 rise 5
    server 192.168.80.130 192.168.80.130:6443 check inter 2s fall 3 rise 5
    server 192.168.80.140 192.168.80.140:6443 check inter 2s fall 3 rise 5

6. Start the services
[root@localhost7B ~]# systemctl enable  keepalived.service   haproxy.service 
[root@localhost7B ~]# systemctl start keepalived.service 
[root@localhost7B ~]# systemctl status  keepalived.service 
[root@localhost7B ~]# systemctl start haproxy.service 
[root@localhost7B ~]# systemctl status haproxy.service
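A quick sanity check on localhost7B that the VIP is up and haproxy is listening (a sketch):
ip addr show eth0 | grep 192.168.80.222       # the VIP should appear as eth0:1
ss -tnlp | grep -E ':6443|:9999'              # haproxy should listen on the API VIP and the stats port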

三、Deploy the masters

Install the specified versions of kubeadm, kubelet, kubectl and docker on all master nodes.


1. Synchronize time, and disable the firewall and SELinux
[root@localhost7C ~]# ntpdate   time1.aliyun.com && hwclock  -w

2. Name resolution in /etc/hosts
192.168.80.100   localhost7A.localdomain   harbor.zzhz.com
192.168.80.110   localhost7B.localdomain
192.168.80.120   localhost7C.localdomain
192.168.80.130   localhost7D.localdomain  
192.168.80.140   localhost7E.localdomain
192.168.80.150   localhost7F.localdomain
192.168.80.160   localhost7G.localdomain
192.168.80.170   localhost7H.localdomain


3. Disable swap: run swapoff -a and comment out the swap entry in /etc/fstab (a non-interactive sketch follows).
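A non-interactive sketch of the same step (it comments out any uncommented fstab line that mounts swap):
swapoff -a
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
grep swap /etc/fstab      # verify the swap line is now commented out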

4. Enable IP forwarding: net.ipv4.ip_forward = 1

5. Pass bridged IPv4 traffic to the iptables chains
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1

Make the settings persistent:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
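The bridge-nf-call settings only exist when the br_netfilter kernel module is loaded; a small sketch to load it now and on every boot (the file name is arbitrary):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf   # loaded automatically by systemd at boot
lsmod | grep br_netfilter                          # confirm the module is present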


6. Download the YUM repository files
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  
7. Install docker and start it
yum list docker-ce --showduplicates
yum install docker-ce-19.03.15-3.el7  docker-ce-cli-19.03.15-3.el7
systemctl enable  docker  &&  systemctl start docker
 
 
8. Configure a registry mirror (accelerator)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
 "registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload && systemctl restart docker  
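kubeadm also prints a warning when docker runs with the cgroupfs cgroup driver. If you prefer the recommended systemd driver, the daemon.json written above can be extended as follows (a sketch; do it before kubeadm init/join so kubeadm configures the kubelet with the matching driver):
cat > /etc/docker/daemon.json <<'EOF'
{
 "registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup      # should now report: Cgroup Driver: systemd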

9. Configure the Kubernetes YUM repository
[root@localhost7C ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


10. socat may need to be installed
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/socat-1.7.3.2-2.el7.x86_64.rpm
yum install socat-1.7.3.2-2.el7.x86_64.rpm

11. Install kubeadm, kubelet and kubectl
yum install  --nogpgcheck   kubeadm-1.17.2-0.x86_64 kubelet-1.17.2-0.x86_64 kubectl-1.17.2-0.x86_64
 
Enable and start kubelet first; error messages at this point are expected and harmless (it keeps restarting until kubeadm supplies its configuration).
[root@localhost7C ~]# systemctl enable kubelet.service  &&  systemctl start kubelet.service 


12. Verify the installed kubeadm version
[root@localhost7C ~]#  kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", 
GitTreeState:"clean", BuildDate:"2020-01-18T23:27:49Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

13. List the images required for the specified k8s version
[root@localhost7C ~]#  kubeadm config images list --kubernetes-version v1.17.2
W0224 11:00:59.345931   29980 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0224 11:00:59.345965   29980 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.2
k8s.gcr.io/kube-controller-manager:v1.17.2
k8s.gcr.io/kube-scheduler:v1.17.2
k8s.gcr.io/kube-proxy:v1.17.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
[root@localhost7C ~]# 


14. Pre-pull the required images. Pulling them on the master nodes ahead of time shortens the installation wait. By default the images come from Google's registry,
which cannot be reached directly from mainland China, so pull them from the Aliyun mirror instead; this avoids deployment failures caused by image download errors.
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5


15. Pull the required images (script)
#!/bin/bash
# Pull each image from the Aliyun mirror and retag it to the k8s.gcr.io name kubeadm expects.
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.17.2
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done

chmod u+x image.sh 
./image.sh




16. Run the cluster initialization on any one of the three masters; it only needs to be done once. (This example uses cluster mode.)
Cluster mode: the 192.168.80.222 VIP must already be up, so keepalived + haproxy have to be configured first.
# kubeadm init --apiserver-advertise-address=192.168.80.120 --apiserver-bind-port=6443 --control-plane-endpoint=192.168.80.222 --kubernetes-version=v1.17.2 --pod-network-cidr=10.10.0.0/16 --service-cidr=10.20.0.0/16 --service-dns-domain=zzhz.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap

Single-master mode:
kubeadm init --apiserver-advertise-address=192.168.80.120 --apiserver-bind-port=6443  --kubernetes-version=v1.17.2 --pod-network-cidr=10.10.0.0/16 --service-cidr=10.20.0.0/16 --service-dns-domain=linux36.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
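The same initialization can also be expressed as a kubeadm configuration file, which is easier to keep under version control. A rough equivalent of the cluster-mode command above, using the v1beta2 kubeadm API shipped with kubeadm 1.17 (a sketch, not taken from the original run):
cat > kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.80.120
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
controlPlaneEndpoint: "192.168.80.222:6443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.20.0.0/16
  dnsDomain: zzhz.local
EOF
kubeadm init --config kubeadm-init.yaml --upload-certs --ignore-preflight-errors=swap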




17. Output printed on success. Copy it to a file; it is needed later when joining additional master and node members.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.80.222:6443 --token qqjzbt.n9z1zesd7ied1sbe \
    --discovery-token-ca-cert-hash sha256:3321b21a12832325852fdab7c10b132b3cca2fb450ac5160de1288ed4ef5700e \
    --control-plane  --certificate-key 4e5704f14b0fd11d4ad53ab6113825ae28727f9de668ff7f7ccb5151b88f745a


Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.80.222:6443 --token qqjzbt.n9z1zesd7ied1sbe \
    --discovery-token-ca-cert-hash sha256:3321b21a12832325852fdab7c10b132b3cca2fb450ac5160de1288ed4ef5700e


Follow-up steps indicated by a successful init:
1. Create the kube-config file, which contains the kube-apiserver address and the credentials kubectl uses for authentication.
2. Deploy the flannel network add-on.
3. Join the other master members; generate the --certificate-key with "kubeadm init phase upload-certs --upload-certs".
4. Join the worker nodes.
---------------


18. On the current master, generate the certificate key used to add new control-plane (master) nodes:
# kubeadm init phase upload-certs --upload-certs
[root@localhost7C k8s]# kubeadm init phase upload-certs --upload-certs
I0417 13:10:32.698196   24960 version.go:251] remote version is much newer: v1.23.5; falling back to: stable-1.17
W0417 13:10:34.643215   24960 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0417 13:10:34.643243   24960 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4e5704f14b0fd11d4ad53ab6113825ae28727f9de668ff7f7ccb5151b88f745a


19. Join the additional master members
kubeadm join 192.168.80.222:6443 --token qqjzbt.n9z1zesd7ied1sbe \
    --discovery-token-ca-cert-hash sha256:3321b21a12832325852fdab7c10b132b3cca2fb450ac5160de1288ed4ef5700e \
    --control-plane  --certificate-key 4e5704f14b0fd11d4ad53ab6113825ae28727f9de668ff7f7ccb5151b88f745a




20. Configure the kubectl credentials
[root@localhost7C k8s]# mkdir -p $HOME/.kube
[root@localhost7C k8s]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost7C k8s]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@localhost7C k8s]# kubectl get node
NAME                      STATUS   ROLES    AGE    VERSION
localhost7c.localdomain   Ready    master   116m   v1.17.2
localhost7d.localdomain   Ready    master   28m    v1.17.2
localhost7e.localdomain   Ready    master   25m    v1.17.2

四、Deploy and configure the node hosts

0. The preparation is the same as for the masters (steps 1-11 above).
Every node that will join the k8s cluster must have docker, kubeadm and kubelet installed (kubectl-1.17.2-0.x86_64 is optional),
so repeat the same steps on each node: configure the YUM repositories, configure the docker registry mirror, install the packages, and start the kubelet service.


1. Pre-pull the required images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1

2. Node tuning (resource limits and kernel parameters)
cat >> /etc/security/limits.conf <<EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 655350
* hard nofile 655350
EOF

sed -i 's#4096#65535#g' /etc/security/limits.d/20-nproc.conf


cat >> /etc/sysctl.conf <<EOF
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 50000
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_tw_buckets = 50000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.ip_local_port_range = 1024 65535
vm.swappiness = 0
vm.min_free_kbytes = 524288
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 262144
fs.file-max = 1048576
EOF

sysctl  -p


3. Install kubeadm and kubelet
[root@localhost7F~]# yum install  --nogpgcheck   kubeadm-1.17.2-0.x86_64 kubelet-1.17.2-0.x86_64 
[root@localhost7F~]# systemctl enable kubelet.service  &&  systemctl start kubelet.service 


4. Join the node to the cluster
kubeadm join 192.168.80.222:6443 --token qqjzbt.n9z1zesd7ied1sbe \
    --discovery-token-ca-cert-hash sha256:3321b21a12832325852fdab7c10b132b3cca2fb450ac5160de1288ed4ef5700e
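If a node is joined later and the original token has already expired (kubeadm tokens are valid for 24 hours by default), a fresh worker join command can be printed on any master:
kubeadm token create --print-join-command   # prints a complete "kubeadm join ..." line with a new token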

[root@localhost7C k8s]# kubectl get node
NAME                      STATUS     ROLES    AGE    VERSION
localhost7c.localdomain   NotReady      master   143m   v1.17.2
localhost7d.localdomain   NotReady      master   54m    v1.17.2
localhost7e.localdomain   NotReady      master   52m    v1.17.2
localhost7f.localdomain   NotReady      <none>   16s    v1.17.2
localhost7g.localdomain   NotReady      <none>   24s    v1.17.2
localhost7h.localdomain   NotReady      <none>   34s    v1.17.2

五、Deploy the flannel network add-on

1. Install the network plugin. Typical networks cannot reach quay.io, so work around it by using a domestic mirror or by pulling the flannel image from Docker Hub manually, then make it available to every machine in the cluster (here it is pushed to the local Harbor).
# pull the flannel image manually
[root@localhost7C k8s]#docker pull easzlab/flannel:v0.12.0-amd64
# retag the image and push it to the local Harbor
[root@localhost7C k8s]#docker tag ff281650a721   harbor.zzhz.com/baseimage/flannel:v0.12.0-amd64
[root@localhost7C k8s]#docker push   harbor.zzhz.com/baseimage/flannel:v0.12.0-amd64


2. Download the flannel manifest: wget  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Change the image references inside it, and make the Network value match the --pod-network-cidr=10.10.0.0/16 used at init (a quick check is sketched right after the manifest).
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.10.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: harbor.zzhz.com/baseimage/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: harbor.zzhz.com/baseimage/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
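After editing, a quick check (sketch) that the two changes took effect before applying the manifest:
grep -n 'image:' kube-flannel.yml            # the amd64 DaemonSet should reference harbor.zzhz.com/baseimage/flannel
grep -A 3 'net-conf.json' kube-flannel.yml   # the Network value must equal the --pod-network-cidr (10.10.0.0/16)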

3. Apply the flannel manifest
# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
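Before looking at node status, it is worth confirming that the flannel DaemonSet pods themselves came up (a quick check):
kubectl get daemonset -n kube-system | grep flannel
kubectl get pods -n kube-system -l app=flannel -o wide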



4. Check the node status again (the nodes stay NotReady until the CNI binaries are in place):
[root@localhost7C k8s]# kubectl get node
NAME                      STATUS     ROLES    AGE    VERSION
localhost7c.localdomain   NotReady      master   143m   v1.17.2
localhost7d.localdomain   NotReady      master   54m    v1.17.2
localhost7e.localdomain   NotReady      master   52m    v1.17.2
localhost7f.localdomain   NotReady      <none>   16s    v1.17.2
localhost7g.localdomain   NotReady      <none>   24s    v1.17.2
localhost7h.localdomain   NotReady      <none>   34s    v1.17.2

5. Check the logs; they show "docker: network plugin is not ready: cni config uninitialized".
[root@localhost7C ~]# kubectl describe nodes localhost7c.localdomain 
...NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

[root@localhost7C ~]# systemctl  status kubelet.service 
[root@localhost7C ~]# journalctl -f -u kubelet.service
-- Logs begin at 日 2023-02-26 21:54:47 CST. --
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: "type": "portmap",
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: "capabilities": {
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: "portMappings": true
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: }
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: }
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: ]
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: }
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: : [failed to find plugin "flannel" in path [/opt/cni/bin]]
2月 27 10:57:18 localhost7D.localdomain kubelet[10247]: W0227 10:57:18.597526   10247 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d
2月 27 10:57:21 localhost7D.localdomain kubelet[10247]: E0227 10:57:21.727513   10247 kubelet.go:2183] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Download the CNI plugins (CNI Plugins releases after v1.0.0 no longer ship the flannel binary, so fetch a bundle that still contains it):
https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
tar xvf cni-plugins-linux-amd64-v0.8.6.tgz 
scp /opt/cni/bin/flannel  192.168.80.150:/opt/cni/bin/
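With several nodes the binary can be distributed in a loop instead (a sketch; adjust the IP list to the hosts that report the error and assume SSH key login is configured):
for ip in 192.168.80.150 192.168.80.160 192.168.80.170; do
  scp /opt/cni/bin/flannel ${ip}:/opt/cni/bin/
done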

Install the plugins from the distribution package instead (for releases before v1.0.0):
yum install kubernetes-cni -y 


6. Check the cluster node status. Once the network add-on and CNI binaries are installed, the nodes should report the status below; continue with the following steps only after every node is Ready.
[root@localhost7C k8s]# kubectl  get node
NAME                      STATUS   ROLES    AGE     VERSION
localhost7c.localdomain   Ready    master   2d23h   v1.17.2
localhost7d.localdomain   Ready    master   2d23h   v1.17.2
localhost7e.localdomain   Ready    master   2d23h   v1.17.2
localhost7f.localdomain   Ready    <none>   2d23h   v1.17.2
localhost7g.localdomain   Ready    <none>   2d23h   v1.17.2

六、Deploy the Dashboard web service

1. Configure every host that needs to log in to the Harbor server.
cat /usr/lib/systemd/system/docker.service
By default Harbor expects HTTPS logins; to allow plain-HTTP (insecure) logins, add the flag: --insecure-registry harbor.zzhz.com
[root@localhost7A ~] cat /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry harbor.zzhz.com
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
[root@localhost7A ~]# systemctl daemon-reload &&  systemctl restart docker



2. Pull the dashboard and metrics-scraper images
[root@localhost7A ~]# docker pull  kubernetesui/dashboard:v2.0.0-rc6
v2.0.0-rc6: Pulling from kubernetesui/dashboard
1f45830e3050: Pull complete 
Digest: sha256:61f9c378c427a3f8a9643f83baa9f96db1ae1357c67a93b533ae7b36d71c69dc
Status: Downloaded newer image for kubernetesui/dashboard:v2.0.0-rc6
docker.io/kubernetesui/dashboard:v2.0.0-rc6
[root@localhost7A ~]# docker pull kubernetesui/metrics-scraper:v1.0.3
v1.0.3: Pulling from kubernetesui/metrics-scraper
75d12d4b9104: Pull complete 
fcd66fda0b81: Pull complete 
53ff3f804bbd: Pull complete 
Digest: sha256:40f1d5785ea66609b1454b87ee92673671a11e64ba3bf1991644b45a818082ff
Status: Downloaded newer image for kubernetesui/metrics-scraper:v1.0.3
docker.io/kubernetesui/metrics-scraper:v1.0.3

[root@localhost7A ~]# docker images
REPOSITORY                      TAG             IMAGE ID       CREATED       SIZE
kubernetesui/dashboard          v2.0.0-rc6      cdc71b5a8a0e   2 years ago   221MB
kubernetesui/metrics-scraper    v1.0.3          3327f0dbcb4a   2 years ago   40.1MB


3. Tag the images and push them to the baseimage project in Harbor.
[root@localhost7A ~]docker tag cdc71b5a8a0e harbor.zzhz.com/baseimage/dashboard:v2.0.0-rc6
[root@localhost7A ~]docker tag 3327f0dbcb4a harbor.zzhz.com/baseimage/metrics-scraper:v1.0.3
[root@localhost7A ~]# docker login harbor.zzhz.com
[root@localhost7A ~]docker push harbor.zzhz.com/baseimage/metrics-scraper:v1.0.3
[root@localhost7A ~]docker push harbor.zzhz.com/baseimage/dashboard:v2.0.0-rc6


4. Prepare the YAML files
[root@localhost7C k8s]# ls
admin-user.yml              # admin user definition
dash_board-2.0.0-rc6.yml
[root@localhost7C k8s]# vim dash_board-2.0.0-rc6.yml  # set the NodePort and image locations
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30002   # the NodePort exposed on each node
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: harbor.zzhz.com/baseimage/dashboard:v2.0.0-rc6   # image pushed to the local Harbor
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: harbor.zzhz.com/baseimage/metrics-scraper:v1.0.3  # image pushed to the local Harbor
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
cat admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
[root@localhost7C k8s]# kubectl  apply -f dash_board-2.0.0-rc6.yml  -f admin-user.yml 


5. Get the admin-user login token
[root@localhost7C k8s]# kubectl get secret -A | grep admin-user
kubernetes-dashboard   admin-user-token-4htwp                           kubernetes.io/service-account-token   3      3m10s

[root@localhost7C k8s]# kubectl describe secret  -n kubernetes-dashboard  admin-user-token-4htwp
Name:         admin-user-token-4htwp
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d7880161-6c5b-4b94-8dff-076a4f27a393
Type:  kubernetes.io/service-account-token
Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjJwWTNxbVhFRXh4ZzR3V2EwakRleThrQ2U1c1A3WlZGekRFcXZ0UU8xN0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTRodHdwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNzg4MDE2MS02YzViLTRiOTQtOGRmZi0wNzZhNGYyN2EzOTMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RIZuIdo1ne-nXnCF9qDMMwGtcv6rrMYALrdnCYv_oeC3dTO14r5-RpVNycD-2G8IV3SQKUrqE6RaUiLXmZzQUpIDOqO7SdECIbHEix33nAE2qB0KAmk6lMbB0z53B5SG_2dS4H-YheDCAKcnqRMi00agjoTnL3X7-ehTgAuVNBugBdOha2RvxLDCmHA3JUvjM6Aeoj0715nsD2pA3l9VzQ7eFcbN1dbacri6H0sZ9hEWdiHCUnj0cecR_5FQhySnH5gcIrBbTZSmk9Gp8U4sB82uI47cmVV7JKlTd1W5VvXX8HnsLB9cDQXzomg59C-QuGdAhGjB_3L2m7tB8dVoRQ
ca.crt:     1025 bytes
[root@localhost7C k8s]# 
6. Log in to the dashboard with the token: https://192.168.80.120:30002/#/login
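The token can also be extracted non-interactively, and the dashboard pods/service verified, with something like the following (a sketch):
# print only the admin-user token (base64-decoded)
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d ; echo

# confirm the dashboard pods are Running and the NodePort 30002 is exposed
kubectl get pods,svc -n kubernetes-dashboard -o wide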

七、Upgrade the k8s version

1. Upgrade the masters:
1.1 Install the target version of kubeadm first
[root@localhost7D ~]# kubeadm version
[root@localhost7D ~]# yum install kubeadm-1.17.17
[root@localhost7D ~]# kubeadm  version

1.2 Review the upgrade plan with kubeadm upgrade plan
[root@localhost7C ~]# kubeadm  upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.2
[upgrade/versions] kubeadm version: v1.17.17
I0227 14:54:06.681442   29772 version.go:251] remote version is much newer: v1.26.1; falling back to: stable-1.17
[upgrade/versions] Latest stable version: v1.17.17
[upgrade/versions] Latest version in the v1.17 series: v1.17.17

Upgrade to the latest version in the v1.17 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.17.2   v1.17.17
Controller Manager   v1.17.2   v1.17.17
Scheduler            v1.17.2   v1.17.17
Kube Proxy           v1.17.2   v1.17.17
CoreDNS              1.6.5     1.6.5
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.17.17


1.3 kubeadm upgrade apply v1.17.17 (images are pulled during the upgrade; they can be downloaded in advance)
[root@localhost7C ~]#  kubeadm config images list --kubernetes-version v1.17.17
k8s.gcr.io/kube-apiserver:v1.17.17
k8s.gcr.io/kube-controller-manager:v1.17.17
k8s.gcr.io/kube-scheduler:v1.17.17
k8s.gcr.io/kube-proxy:v1.17.17
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

Pre-pull the images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.17
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.17
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.17
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.17
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5

1.4 Apply the upgrade, then check which images the pods are now using.
[root@localhost7C ~]# kubeadm upgrade apply v1.17.17

1.5 Upgrade kubelet and kubectl to 1.17.17
[root@localhost7C ~]# yum  install kubelet-1.17.17  kubectl-1.17.17

[root@localhost7C ~]# systemctl daemon-reload && systemctl restart kubelet 

[root@localhost7C ~]# kubectl  get node
NAME                      STATUS   ROLES    AGE    VERSION
localhost7c.localdomain   Ready    master   3d3h   v1.17.17   (the VERSION column shows the kubelet version)
localhost7d.localdomain   Ready    master   3d3h   v1.17.17
localhost7e.localdomain   Ready    master   3d3h   v1.17.17



2. Upgrade each node:
2.1 Does kubeadm need upgrading on the nodes? Yes, install the new version:
[root@localhost7G ~]# yum install kubeadm-1.17.17  -y

2.2 Update the kubelet configuration on each node:
[root@localhost7G ~]# kubeadm upgrade node --kubelet-version 1.17.17
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Using kubelet config version 1.17.17, while kubernetes-version is v1.17.17
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
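Before restarting kubelet with the new version, the node can optionally be drained so its pods are rescheduled first, and uncordoned afterwards (a sketch, run from a host with kubectl; use the node name shown by kubectl get node):
kubectl drain localhost7g.localdomain --ignore-daemonsets --delete-local-data
# ... upgrade and restart kubelet as in step 2.3, then allow scheduling again:
kubectl uncordon localhost7g.localdomain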



2.3 Upgrade kubelet to 1.17.17 (kubectl is usually not installed on nodes, so it does not need to be upgraded).

[root@localhost7F ~]# yum  install kubelet-1.17.17  
[root@localhost7F ~]# systemctl daemon-reload && systemctl restart kubelet

[root@localhost7C ~]# kubectl  get node
NAME                      STATUS   ROLES    AGE    VERSION
localhost7c.localdomain   Ready    master   3d3h   v1.17.17   (the VERSION column shows the kubelet version)
localhost7d.localdomain   Ready    master   3d3h   v1.17.17
localhost7e.localdomain   Ready    master   3d3h   v1.17.17
localhost7f.localdomain   Ready    <none>   3d3h   v1.17.17
localhost7g.localdomain   Ready    <none>   3d3h   v1.17.17

八、Example project: run Nginx + Tomcat with static/dynamic content separation:

How a client request reaches the site:
    1. The client hits the public IP, which is answered by HAProxy.
    2. HAProxy forwards to the port (NodePort) listening on one of the node hosts.
    3. The NodePort forwards to the service, which forwards to nginx by label (the service is the network-level selector).
    4. nginx proxies to its backend; the configuration uses the tomcat service name instead of an IP, and kube-dns resolves that name to the tomcat pods selected by label.

1. Project directory layout
[root@localhost7C k8s]#  tree  nginx-tomcat/
nginx-tomcat/
├── nginx-dockerfile
│   ├── conf
│   │   └── default.conf     # nginx configuration file
│   └── Dockerfile           # Dockerfile to build the nginx image
├── nginx.yml                 # manifest to run the nginx Deployment/Service
│
├── tomcat-dockerfile
│   ├── app                    
│   │   └── index.html        # tomcat test page
│   └── Dockerfile            # Dockerfile to build the tomcat image
└── tomcat.yml                # manifest to run the tomcat Deployment/Service



2. Pull the base images
[root@localhost7C k8s]# docker pull tomcat 
[root@localhost7C k8s]# docker pull nginx


3. Review and build the tomcat image
[root@localhost7C tomcat-dockerfile]# cat app/index.html 
tomcat app images web page

[root@localhost7C tomcat-dockerfile]# cat Dockerfile 
FROM tomcat
ADD ./app /usr/local/tomcat/webapps/app/

Build the image
[root@localhost7C tomcat-dockerfile]# docker build  -t harbor.zzhz.com/baseimage/tomcat:app1 .

Test that tomcat is reachable (optional)
[root@localhost7C tomcat-dockerfile]# docker run -it --rm -p 8080:8080  harbor.zzhz.com/baseimage/tomcat:app1 
#curl http://192.168.80.120:8080/app/
tomcat app images web page

Push to Harbor
[root@localhost7C k8s]# docker push harbor.zzhz.com/baseimage/tomcat:app1


4. Review and build the nginx image
[root@localhost7C nginx-dockerfile]# cat conf/default.conf 
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;    
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    # add this block
    location /app {
        proxy_pass http://magedu-tomcat-service;
    } 
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    
}
[root@localhost7C nginx-dockerfile]# cat Dockerfile 
FROM nginx
ADD ./conf/default.conf  /etc/nginx/conf.d/

# build the image
[root@localhost7C nginx-dockerfile]# docker build  -t harbor.zzhz.com/baseimage/nginx:v1 .

# To test: nginx defines a proxy_pass to the tomcat service, so it will report an error if the backend is not running yet.
[root@localhost7C nginx-dockerfile]#docker run -it --rm -p 80:80 harbor.zzhz.com/baseimage/nginx:v1

Push to Harbor
[root@localhost7C nginx-tomcat]# docker push harbor.zzhz.com/baseimage/nginx:v1



5. Review the tomcat and nginx YAML files, then apply them
# tomcat manifest
[root@localhost7C nginx-tomcat]# cat tomcat.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: harbor.zzhz.com/baseimage/tomcat:app1
        ports:
        - containerPort: 8080

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-tomcat-service-label
  name: magedu-tomcat-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30005
  selector:
    app: tomcat

# nginx manifest
[root@localhost7C nginx-tomcat]# cat nginx.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.zzhz.com/baseimage/nginx:v1
        ports:
        - containerPort: 80

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004
  selector:
    app: nginx

6. Deploy the workloads
[root@localhost7C nginx-tomcat]# kubectl apply -f tomcat.yml  -f nginx.yml


7. Highly available reverse proxy with HAProxy:
Use haproxy and keepalived to provide a highly available reverse proxy in front of the business Pods running in the Kubernetes cluster.

[root@localhost7B ~]# cat  /etc/keepalived/keepalived.conf
     ....
     ....
    virtual_ipaddress {
        192.168.80.222/24 dev eth0 label eth0:1
        192.168.80.223/24 dev eth0 label eth0:2
    }
    
[root@localhost7B ~]# cat  /etc/haproxy/haproxy.cfg
    ....
    ....
listen k8s-6443
 bind 192.168.80.222:6443
 mode tcp
 balance roundrobin
 server 192.168.80.120 192.168.80.120:6443 check inter 2s fall 3 rise 5
 server 192.168.80.130 192.168.80.130:6443 check inter 2s fall 3 rise 5
 server 192.168.80.140 192.168.80.140:6443 check inter 2s fall 3 rise 5

listen nginx-80
 bind 192.168.80.223:80
 mode tcp
 balance roundrobin
 server 192.168.80.150 192.168.80.150:30004 check inter 2s fall 3 rise 5
 server 192.168.80.160 192.168.80.160:30004 check inter 2s fall 3 rise 5


8. Test:
[root@localhost7A harbor]# curl 192.168.80.223
<h1>Welcome to nginx!</h1>
[root@localhost7A harbor]# curl 192.168.80.223/app/index.html
tomcat app images web page



9. How the request flow works:
[root@localhost7C nginx-tomcat]# kubectl  get svc
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes              ClusterIP   10.20.0.1       <none>        443/TCP        4d4h
magedu-nginx-service    NodePort    10.20.109.183   <none>        80:30004/TCP   52m
magedu-tomcat-service   NodePort    10.20.85.117    <none>        80:30005/TCP   58m


[root@localhost7C nginx-tomcat]# kubectl describe service magedu-tomcat-service
Name:                     magedu-tomcat-service
Namespace:                default
Labels:                   app=magedu-tomcat-service-label
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"magedu-tomcat-service-label"},"name":"magedu-tomcat-serv...
Selector:                 app=tomcat
Type:                     NodePort
IP:                       10.20.85.117  # service cluster IP
Port:                     http  80/TCP
TargetPort:               8080/TCP
NodePort:                 http  30005/TCP
Endpoints:                10.10.4.6:8080  # backend pod endpoint
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>


Exec into the nginx Pod
[root@localhost7C ~]# kubectl exec  -it nginx-deployment-5999b8f5d6-5pz96  bash
# cat /etc/issue
Debian GNU/Linux 9 \n \l
Update the package index and install basic tools
# apt update
# apt install procps vim iputils-ping net-tools curl

Test service name resolution
root@nginx-deployment-5999b8f5d6-5pz96:/# ping  magedu-tomcat-service
PING magedu-tomcat-service.default.svc.zzhz.local (10.20.85.117) 56(84) bytes of data.


From the nginx Pod, access tomcat through its service domain name:
root@nginx-deployment-5999b8f5d6-5pz96:/# curl  magedu-tomcat-service.default.svc.zzhz.local/app/index.html
tomcat app images web page

 
