Kubernetes Cluster Installation
Notes link: https://www.yuque.com/docs/share/1ee7e7e1-a207-49e8-9620-2927eba77f3a?# ("1: Installation")
1: kubeadm installation
1.1: Single-master kubeadm installation
Disable the firewall and swap
ufw disable && ufw status
swapoff -a
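Note that swapoff -a only lasts until the next reboot; to keep swap off permanently you can also comment out the swap entry in /etc/fstab (the same approach the binary-install section below uses):
sed -i '/swap/s/^/#/' /etc/fstab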
# Configure the hosts file
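The hosts entry itself is not spelled out here; as a sketch, the control-plane endpoint used by kubeadm init below (k8s-api.ilinux.io on 172.16.1.2) would be added on every node like this -- adjust the IP and name to your own environment:
cat >> /etc/hosts << EOF
172.16.1.2 k8s-api.ilinux.io
EOF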
Enable bridge traffic forwarding
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
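To apply these settings without a reboot, load the bridge module and reload sysctl (the br_netfilter module must be present for the bridge-nf-call keys to exist):
modprobe br_netfilter
sysctl --system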
Install dependencies and Docker
apt install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install -y docker-ce docker-ce-cli containerd.io
Configure a Docker registry mirror
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
Enable Docker on boot
systemctl daemon-reload
systemctl start docker.service
systemctl enable docker.service
Configure the Kubernetes apt source and install
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" >> /etc/apt/sources.list
apt update
# install the latest version, or pin a specific one:
apt install -y kubelet kubeadm kubectl
apt install -y kubelet=1.19.9-00 kubeadm=1.19.9-00 kubectl=1.19.9-00
Master node setup
# on the control-plane node
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.0 \
  --control-plane-endpoint k8s-api.ilinux.io \
  --apiserver-advertise-address 172.16.1.2 \
  --pod-network-cidr 10.244.0.0/16 \
  --token-ttl 0
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# raw.githubusercontent.com may be unreachable for network reasons; add a hosts entry if needed
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# check that the network plugin is up
kubectl get pods -n kube-system | grep flannel
# print the kubeadm join command
kubeadm token create --print-join-command
Worker nodes
# join a worker node to the cluster
kubeadm join k8s-api.ilinux.io:6443 --token k4d4cb.ridawlr5ecyvhb64 \
    --discovery-token-ca-cert-hash sha256:d15ebd0aaa28941c64ab62233ff2e39e32b948db933307d0406c3daba76ee28c
Common operations
# generate a new token
kubeadm token generate
tj1cqq.g85vf7ngx88eia25
# create it with an unlimited TTL and print the join command
root@k8s:~# kubeadm token create tj1cqq.g85vf7ngx88eia25 --print-join-command --ttl=0
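To confirm which bootstrap tokens exist and when they expire, they can be listed:
kubeadm token list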
2: Binary deployment
Cluster architecture plan:
OS: Ubuntu 18.04

k8s-master1            192.168.3.151   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2            192.168.3.152   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master3            192.168.3.153   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1              192.168.3.160   kubelet, kube-proxy, docker, flannel, etcd
k8s-node2              192.168.3.161   kubelet, kube-proxy, docker, flannel
master load balancer   192.168.3.99    VIP
image registry

# name resolution
cat <<eof >>/etc/hosts
192.168.3.151 k8s-master1
192.168.3.152 k8s-master2
192.168.3.153 k8s-master3
192.168.3.160 k8s-node1
192.168.3.161 k8s-node2
192.168.3.99 lvs-server
eof
Environment preparation:
## Switch to the Aliyun apt mirror ##
## Reference: https://developer.aliyun.com/mirror/ubuntu?spm=a2c6h.13651102.0.0.3e221b11qYtGD9
# on all nodes:
mv /etc/apt/sources.list /etc/apt/sources.list.bak
# replace the sources
cat <<eof >/etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
# deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
# deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
eof

### Each node must have a unique MAC address and product UUID ###

### Disable swap ###
swapoff -a
sed -i.bak '/swap/s/^/#/' /etc/fstab

### Disable the firewall ###
systemctl stop ufw.service; systemctl disable ufw.service
### Install common tools ###
apt-get install wget jq psmisc vim net-tools telnet lvm2 git lrzsz -y
Passwordless SSH login
#### on master1
cd ~/.ssh/
ssh-keygen -t rsa
var='k8s-master2 k8s-master3 k8s-node1 k8s-node2'; for n in $var; do ssh-copy-id $n; done
2.1: Time synchronization
apt install ntpdate -y
# sync from the Aliyun time server
ntpdate time2.aliyun.com
# make crontab open in vim
export EDITOR=vim
systemctl restart cron
# sync periodically: add the line below with crontab -e, then verify with crontab -l
*/5 * * * * ntpdate time2.aliyun.com
2.2: Linux tuning
ulimit -SHn 65535
cat <<eof >>/etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
eof
2.3: The default git version is old
2.4: Install LVS (ipvs)
apt-get install ipvsadm ipset sysstat conntrack -y
# Configure the ipvs modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 was renamed to
# nf_conntrack; on 4.18 and below use nf_conntrack_ipv4.
cat <<eof > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
# on kernel 4.18 and below use nf_conntrack_ipv4 instead
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
eof
# load the modules at boot
systemctl enable --now systemd-modules-load.service
2.5: Enable kernel parameters required by a Kubernetes cluster (all nodes)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# reboot the system, then verify the modules are loaded
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
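If you want to verify before rebooting, the same settings can be loaded in place (a reboot afterwards still confirms they persist):
sysctl --system
systemctl restart systemd-modules-load.service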
2.6: Install Docker
apt-get install -y docker.io
## newer kubelet versions recommend the systemd cgroup driver, so switch Docker's CgroupDriver to systemd
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
## enable Docker on boot on all nodes ##
systemctl daemon-reload && systemctl enable --now docker
2.7: Install Kubernetes and etcd
2.7.1: Download the Kubernetes packages (Master01)
# The latest release at the time of writing is 1.21.0; check the changelog when downloading a newer version:
# https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/
[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz
2.7.2: Download the etcd package (on Master01)
## download
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
## unpack the kubernetes binaries
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
## unpack etcd
tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}
## check the versions
kubelet --version
etcdctl version
2.7.3: Push the binaries to the other nodes (run on master1)
MasterNodes='k8s-master2 k8s-master3'
WorkNodes='k8s-node1 k8s-node2'
for NODE in $MasterNodes; do
    echo $NODE
    scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/
    scp /usr/local/bin/etcd* $NODE:/usr/local/bin/
done
for NODE in $WorkNodes; do
    scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/
done
2.7.4: Create /opt/cni/bin (all nodes)
mkdir -p /opt/cni/bin
2.7.5: Check out the matching branch (master1)
# switch to the 1.20.x branch (other versions have corresponding branches; run git branch -a to list them)
# download k8s-ha-install
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
## check out the branch
cd k8s-ha-install && git checkout manual-installation-v1.20.x
## list the files
ls
2.7.6: Download the certificate tooling (master1)
## download cfssl and cfssljson (Master01)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
2.7.7: Generate certificates
### Generate the etcd certificates (etcd nodes)
# create the directory
mkdir /etc/etcd/ssl -p
# create the k8s certificate directory (all nodes)
mkdir -p /etc/kubernetes/pki

### Generate the etcd certificates on Master01, then copy them to the other nodes ###
# The CSR files are certificate signing requests, pre-filled with domain names, organization, and unit
cd /root/k8s-ha-install/pki
# generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
#########
2022/07/02 03:42:24 [INFO] generating a new CA key and certificate from CSR
2022/07/02 03:42:24 [INFO] generate received request
2022/07/02 03:42:24 [INFO] received CSR
2022/07/02 03:42:24 [INFO] generating key: rsa-2048
2022/07/02 03:42:24 [INFO] encoded CSR
2022/07/02 03:42:24 [INFO] signed certificate with serial number 646915031305932264129241508375759570921509417713
#########
### A few spare IPs can be reserved after -hostname to make scaling out easier later ###
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master1,k8s-master2,k8s-master3,192.168.3.151,192.168.3.152,192.168.3.153 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
### output ###
2022/07/02 03:45:57 [INFO] generate received request
2022/07/02 03:45:57 [INFO] received CSR
2022/07/02 03:45:57 [INFO] generating key: rsa-2048
2022/07/02 03:45:58 [INFO] encoded CSR
2022/07/02 03:45:58 [INFO] signed certificate with serial number 20485479488073671142317323035139958642024908260
### copy the certificates to the other etcd nodes ###
MasterNodes='k8s-master2 k8s-master3'
WorkNodes='k8s-node1 k8s-node2'
for NODE in $MasterNodes; do
    ssh $NODE "mkdir -p /etc/etcd/ssl"
    for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
        scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
    done
done

##### Kubernetes component certificates #####
### generate the Kubernetes CA (Master01) ###
cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# output
2022/07/02 03:51:01 [INFO] generating a new CA key and certificate from CSR
2022/07/02 03:51:01 [INFO] generate received request
2022/07/02 03:51:01 [INFO] received CSR
2022/07/02 03:51:01 [INFO] generating key: rsa-2048
2022/07/02 03:51:01 [INFO] encoded CSR
2022/07/02 03:51:01 [INFO] signed certificate with serial number 440474687796324933985998795677171028370246484533
# 10.96.0.1 is the first address of the k8s service CIDR; if you change the service CIDR, change it here as well.
# If this is not an HA cluster, replace 192.168.3.99 (the LVS VIP) with Master01's IP.
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.3.99,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.3.151,192.168.3.152,192.168.3.153 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
# output
2022/07/02 03:55:23 [INFO] generate received request
2022/07/02 03:55:23 [INFO] received CSR
2022/07/02 03:55:23 [INFO] generating key: rsa-2048
2022/07/02 03:55:23 [INFO] encoded CSR
2022/07/02 03:55:23 [INFO] signed certificate with serial number 27229882319319488307772044502632435864802764197
2.7.8: Generate the apiserver aggregation certificates (master1)
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# output
2022/07/02 03:57:59 [INFO] generating a new CA key and certificate from CSR
2022/07/02 03:57:59 [INFO] generate received request
2022/07/02 03:57:59 [INFO] received CSR
2022/07/02 03:57:59 [INFO] generating key: rsa-2048
2022/07/02 03:58:00 [INFO] encoded CSR
2022/07/02 03:58:00 [INFO] signed certificate with serial number 655535237951102155098966506254195519300928132725

cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
output (the warning can be ignored):
2022/07/02 04:00:17 [INFO] generate received request
2022/07/02 04:00:17 [INFO] received CSR
2022/07/02 04:00:17 [INFO] generating key: rsa-2048
2022/07/02 04:00:18 [INFO] encoded CSR
2022/07/02 04:00:18 [INFO] signed certificate with serial number 242272517163968166160394915791589563899699669433
2022/07/02 04:00:18 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2.7.9: Generate the controller-manager and scheduler certificates (Master01)
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
#### output ######################################################
2022/07/02 04:02:52 [INFO] generate received request
2022/07/02 04:02:52 [INFO] received CSR
2022/07/02 04:02:52 [INFO] generating key: rsa-2048
2022/07/02 04:02:53 [INFO] encoded CSR
2022/07/02 04:02:53 [INFO] signed certificate with serial number 708896765353197282288105312991245608572002216845
2022/07/02 04:02:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
#########################################################
# Note: if this is not an HA cluster, change 192.168.3.99:8443 to master01's address and 8443 to the apiserver port (6443 by default)
# set-cluster: define a cluster entry; 192.168.3.99 is the VIP
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.3.99:8443 \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# define an environment entry, i.e. a context
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
############## output #################
Context "system:kube-controller-manager@kubernetes" created.
####################################
# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
    --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
###### output #########
User "system:kube-controller-manager" set.
#############################
# make this context the default
kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
############# output ###################
Switched to context "system:kube-controller-manager@kubernetes".
################
# generate the kube-scheduler certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
################## output ##############
2022/07/02 04:12:28 [INFO] generate received request
2022/07/02 04:12:28 [INFO] received CSR
2022/07/02 04:12:28 [INFO] generating key: rsa-2048
2022/07/02 04:12:28 [INFO] encoded CSR
2022/07/02 04:12:28 [INFO] signed certificate with serial number 91019045404074288811253859291910574878877304720
2022/07/02 04:12:28 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
# Note: if this is not an HA cluster, change 192.168.3.99:8443 to master01's address and 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.3.99:8443 \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
    --client-certificate=/etc/kubernetes/pki/scheduler.pem \
    --client-key=/etc/kubernetes/pki/scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config set-context system:kube-scheduler@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
kubectl config use-context system:kube-scheduler@kubernetes \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
# generate the admin certificate
cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# Note: if this is not an HA cluster, change 192.168.3.99:8443 to master01's address and 8443 to the apiserver port (6443 by default)
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.3.99:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
2.7.10: Create the ServiceAccount key pair (master1)
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
2.7.11: Send the certificates to the other master nodes
for NODE in k8s-master2 k8s-master3; do
    for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
        scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
    done
    for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
        scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
    done
done
### list the certificate files ###
ls /etc/kubernetes/pki/
2.8: Kubernetes system component configuration
2.8.1: etcd configuration (all etcd nodes)
The etcd configuration is largely identical across nodes; adjust the hostname and IP addresses for each master.
In vim, enter command mode and run:
:set paste
before pasting, so the content is not mangled. Paste mode disables auto-indent, so switch it back off after pasting with:
:set nopaste
master1:
[root@k8s-master1 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.3.151:2380'
listen-client-urls: 'https://192.168.3.151:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.3.151:2380'
advertise-client-urls: 'https://192.168.3.151:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master1=https://192.168.3.151:2380,k8s-master2=https://192.168.3.152:2380,k8s-master3=https://192.168.3.153:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
master2:
[root@k8s-master2 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master2'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.3.152:2380'
listen-client-urls: 'https://192.168.3.152:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.3.152:2380'
advertise-client-urls: 'https://192.168.3.152:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master1=https://192.168.3.151:2380,k8s-master2=https://192.168.3.152:2380,k8s-master3=https://192.168.3.153:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
master3:
[root@k8s-master3 ~]# vim /etc/etcd/etcd.config.yml
name: 'k8s-master3'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.3.153:2380'
listen-client-urls: 'https://192.168.3.153:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.3.153:2380'
advertise-client-urls: 'https://192.168.3.153:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master1=https://192.168.3.151:2380,k8s-master2=https://192.168.3.152:2380,k8s-master3=https://192.168.3.153:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
2.8.2: Create the etcd service
Create and start the etcd service (all master nodes)
vim /etc/systemd/system/etcd.service
## on CentOS: vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

##
systemctl daemon-reload
Create the etcd certificate directory and start etcd (all master nodes)
cd /etc/etcd/ssl && scp * k8s-master2:`pwd` && scp * k8s-master3:`pwd`
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
Check the cluster
export ETCDCTL_API=3
etcdctl --endpoints="192.168.3.151:2379,192.168.3.152:2379,192.168.3.153:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint status --write-out=table
2.9: High availability setup
High-availability configuration (note: if this is not an HA cluster, haproxy and keepalived are not needed). If you are installing on a cloud platform, skip this section as well and use the cloud load balancer directly, e.g. Aliyun SLB or Tencent Cloud ELB:
SLB -> haproxy -> apiserver
2.9.1: Install keepalived and haproxy (all master nodes)
apt install keepalived haproxy -y
2.9.2: Configure HAProxy (identical on every master node)
cat <<eof >/etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master1 192.168.3.151:6443 check
  server k8s-master2 192.168.3.152:6443 check
  server k8s-master3 192.168.3.153:6443 check
eof
2.9.3: Configure keepalived (master nodes)
Mind each node's IP address and NIC name (the interface parameter).
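The NIC is not necessarily eth0 (on Ubuntu 18.04 it is often something like ens33 or enp0s3); before filling in the interface parameter, check which interface carries the node IP:
ip -br addr | grep 192.168.3.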
Master01
cat <<eof >/etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.3.151
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.99
    }
    track_script {
        chk_apiserver
    }
}
eof
master2:
cat <<eof >/etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.3.152
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.99
    }
    track_script {
        chk_apiserver
    }
}
eof
master3:
cat <<eof >/etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.3.153
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.3.99
    }
    track_script {
        chk_apiserver
    }
}
eof
2.9.4: Health check script (all master nodes)
# the heredoc delimiter is quoted so the $ expressions land in the script unexpanded
cat > /etc/keepalived/check_apiserver.sh << 'EFO'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EFO

# make it executable
chmod +x /etc/keepalived/check_apiserver.sh
2.9.5: Start haproxy and keepalived (all master nodes)
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
2.9.6: Availability test
VIP test (master01)
Important: if keepalived and haproxy are installed, verify that keepalived is working before continuing.
# the VIP should be bound to the eth0 NIC
[root@k8s-master01 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.3.151/24 brd 192.168.3.255 scope global eth0
    inet 192.168.3.99/32 scope global eth0
# from any node, check haproxy
telnet 192.168.3.99 8443
If the VIP does not answer ping, or telnet never shows the "]" escape prompt, treat the VIP as unusable and do not continue. Troubleshoot keepalived first: the firewall and SELinux, the haproxy and keepalived service states, and the listening ports.
On all nodes the firewall must be disabled and inactive: systemctl status ufw
On all nodes SELinux must be disabled: getenforce
On the master nodes check haproxy and keepalived: systemctl status keepalived haproxy
On the master nodes check the listening ports: netstat -lntp
2.10: Kubernetes component configuration
2.10.1: kube-apiserver configuration
Create the kube-apiserver service on all master nodes. Note: if this is not an HA cluster, replace 192.168.3.99 with master1's address.
Note: this document uses 10.96.0.0/12 as the k8s service CIDR. It must not overlap with the host network or the Pod CIDR; adjust as needed.
### create the directories on all nodes
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
Master01 configuration
[root@k8s-master1 ~]# vim /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.3.151 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.3.151:2379,https://192.168.3.152:2379,https://192.168.3.153:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Master02 configuration
[root@k8s-master2 ~]# vim /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.3.152 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.3.151:2379,https://192.168.3.152:2379,https://192.168.3.153:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Master03 configuration
[root@k8s-master3 ~]# vim /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logtostderr=true \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --insecure-port=0 \
      --advertise-address=192.168.3.153 \
      --service-cluster-ip-range=10.96.0.0/12 \
      --service-node-port-range=30000-32767 \
      --etcd-servers=https://192.168.3.151:2379,https://192.168.3.152:2379,https://192.168.3.153:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Start and verify (all master nodes)
systemctl daemon-reload && systemctl enable --now kube-apiserver
# check the kube-apiserver status
systemctl status kube-apiserver
### watch the error log
journalctl -f -u kube-apiserver.service
2.10.2: Configure the kube-controller-manager service (all master nodes)
Note: this document uses 172.16.0.0/12 as the k8s Pod CIDR. It must not overlap with the host network or the k8s service CIDR; adjust as needed.
The configuration file is identical on all three masters.
vim /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
Start
systemctl daemon-reload && systemctl enable --now kube-controller-manager
2.10.3: Configure the kube-scheduler service (all master nodes)
The configuration is identical on every master.
vim /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
systemctl daemon-reload && systemctl enable --now kube-scheduler
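Once the apiserver, controller-manager, and scheduler are all running, a quick sanity check from master1 is possible with the admin kubeconfig generated earlier (kubectl get cs is deprecated in 1.20 but still reports component health):
kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig get cs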
2.10.4: TLS bootstrapping (issues certificates for the nodes)
Create the bootstrap kubeconfig (Master01)
Note: if this is not an HA cluster, change 192.168.3.99:8443 to master1's address and 8443 to the apiserver port (6443 by default).
[root@k8s-master1 ~]# cd /root/k8s-ha-install/bootstrap

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.3.99:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# Note: if you change token-id and token-secret in bootstrap.secret.yaml, the two values must stay
# consistent with each other and keep the same number of characters, and the token in the
# set-credentials command above (c8ad9c.2e4d610cf3e7426e) must be changed to match.
####
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c   # must match the suffix of metadata.name above
cd /root/k8s-ha-install/bootstrap
########## on all master nodes ####################
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
##################################
kubectl create -f bootstrap.secret.yaml
####################################
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
#####################################
2.11: Node configuration
Copy the certificates to the other nodes
[root@k8s-master1 ~]# cd /etc/kubernetes/
for NODE in k8s-master2 k8s-master3 k8s-node1 k8s-node2; do
    ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
    for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
        scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
    done
    for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
        scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
    done
done
Kubelet configuration (all nodes)
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
Configure the kubelet service (all nodes)
cat <<eof >/etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
eof
Configure the kubelet service drop-in file (all nodes)
# the heredoc delimiter is quoted so the $KUBELET_* variables land in the file unexpanded
cat <<'eof' >/etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
eof
Create the kubelet configuration file (all nodes), then start kubelet on all nodes
Note: if you changed the k8s service CIDR, update the clusterDNS setting in kubelet-conf.yml to the tenth address of the service CIDR, e.g. 10.96.0.10 (the service CIDR configured earlier is 10.96.0.0/12).
cat <<eof >/etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
eof
Start kubelet (all nodes)
systemctl daemon-reload
systemctl enable --now kubelet
# watch the system log (on Ubuntu the path is /var/log/syslog; /var/log/messages is the CentOS path)
tail -f /var/log/syslog
Check the cluster state (on master01)
kubectl get node
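Until the CNI plugin is installed (section 2.13) the nodes will report NotReady; that is expected at this stage. To confirm TLS bootstrapping issued the kubelet certificates, the CSRs can be inspected:
kubectl get csr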
Pitfall: kubelet on newly added nodes
kubelet on the worker nodes would not start and kept spamming the log.
Cause: the kubelet cgroup driver ("systemd") differed from the docker cgroup driver ("cgroupfs").
The master's /etc/docker/daemon.json sets the driver to systemd, while the nodes' daemon.json was empty, so copy the master's file over.
cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
### copy the master's daemon.json to the nodes
for i in k8s-node1 k8s-node2; do scp /etc/docker/daemon.json root@$i:/etc/docker/; done
## restart docker and kubelet on the nodes, after which they recover
systemctl restart docker kubelet
Summary: kubelet here is configured to use the systemd cgroup driver, while Docker defaults to cgroupfs. That leaves two cgroup managers assigning cgroups to processes side by side, which can become unstable under resource pressure, so both components must be configured to use the same driver.
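A quick way to confirm both sides agree on the driver (docker info exposes the active cgroup driver; the kubelet side is the cgroupDriver setting in kubelet-conf.yml):
docker info --format '{{.CgroupDriver}}'
grep cgroupDriver /etc/kubernetes/kubelet-conf.yml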
2.12: Deploy kube-proxy (master1)
If this is not an HA cluster, replace 192.168.3.99 with master1's IP.
cd /root/k8s-ha-install
kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
  --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
  --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.3.99:8443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
2.12.1: Prepare the kube-proxy service file
# the heredoc delimiter is quoted so the backslash line continuations land in the file unchanged
cat <<'eof' >/etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.conf \
  --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
eof
2.12.2: Prepare the kube-proxy configuration file
cd /etc/kubernetes/
cat <<eof > kube-proxy.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
eof
2.12.3: Copy the files to all nodes
cd /root/k8s-ha-install/
for NODE in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2; do
    scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
    scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
    scp kube-proxy/kube-proxy.service $NODE:/etc/systemd/system/kube-proxy.service
done
#### enable the service on boot, all nodes
systemctl daemon-reload
systemctl enable --now kube-proxy
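To verify kube-proxy actually came up in ipvs mode, query its metrics endpoint (bound to 127.0.0.1:10249 in the configuration above) or list the ipvs virtual servers:
curl 127.0.0.1:10249/proxyMode
ipvsadm -Ln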
2.13: Install the CNI network plugin (Calico)
Run the following only on master1.
cd /root/k8s-ha-install/calico/
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.3.151:2379,https://192.168.3.152:2379,https://192.168.3.153:2379"#g' calico-etcd.yaml
ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
##### change this to your own Pod CIDR
POD_SUBNET="172.16.0.0/12"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
### install calico
kubectl apply -f calico-etcd.yaml
### check the pod status
kubectl get po -n kube-system
2.14: Install CoreDNS (master1)
cd /root/k8s-ha-install/
# Set the CoreDNS cluster IP to the tenth address of the service CIDR. With the 10.96.0.0/12 CIDR used
# here it is already 10.96.0.10, so this sed is a no-op; change the right-hand side if your CIDR differs.
sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml
kubectl create -f CoreDNS/coredns.yaml
2.14.1: Troubleshooting
When CoreDNS is deployed into a Kubernetes cluster installed from binaries, its pod can stay in CrashLoopBackOff with logs like the following:
#] kubectl logs -f coredns-867d46bfc6-6vjdt -n kube-system
##################################################
.:53
[INFO] plugin/reload: Running configuration MD5 = b0741fcbd8bd79287446297caa87f7a1
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[FATAL] plugin/loop: Loop (127.0.0.1:35714 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 6342663713556121436.560618418325353294."
##############################################
Cause: when a CoreDNS pod detects a forwarding loop, it exits, and Kubernetes keeps restarting the pod, hence CrashLoopBackOff. The common source of the loop is a local DNS cache on the host node, such as systemd-resolved: in some configurations it puts the loopback address 127.0.0.53 as a nameserver into /etc/resolv.conf. By default Kubernetes (via kubelet, with the default dnsPolicy) passes this /etc/resolv.conf to every pod, and CoreDNS uses it as its list of upstreams to forward to. Since the list contains a loopback address, CoreDNS ends up forwarding requests to itself.
In plain terms: CoreDNS read the host's /etc/resolv.conf.
# Fix: point the resolvConf setting in kubelet-conf.yml at systemd-resolved's real upstream file
sed -i "s/\/etc\/resolv.conf/\/run\/systemd\/resolve\/resolv.conf/g" /etc/kubernetes/kubelet-conf.yml
# restart kubelet
systemctl restart kubelet
# redeploy CoreDNS
cd /root/k8s-ha-install/; kubectl delete -f CoreDNS/coredns.yaml
kubectl apply -f CoreDNS/coredns.yaml
2.15: Install metrics-server (master1)
# run from the metrics-server manifests directory of the k8s-ha-install repo (the directory name depends on the checked-out branch)
kubectl create -f .
2.16: Cluster verification
Verify the following (commands sketched below):
- Pods can resolve Services
- Pods can resolve Services in other namespaces
- Every node can reach the kubernetes svc on 443 and the kube-dns service on 53
- Pod-to-Pod traffic works, both within a namespace and across namespaces
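A minimal sketch of these checks using a throwaway busybox pod (busybox:1.28 is chosen because nslookup is broken in newer busybox images; the pod name is illustrative):
kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec busybox -- nslookup kubernetes
kubectl exec busybox -- nslookup kube-dns.kube-system
# from each node: the kubernetes svc on 443 and kube-dns on 53 (IPs per the 10.96.0.0/12 service CIDR used here)
telnet 10.96.0.1 443
telnet 10.96.0.10 53
kubectl delete pod busybox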
2.17: Deploy the dashboard
# run from the dashboard manifests directory of the k8s-ha-install repo
kubectl create -f .
kubectl get po -n kubernetes-dashboard
### edit the service and change type: to NodePort to expose a port
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
### check the exposed port
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
# append these flags to the Chrome shortcut (after the exe path) to bypass certificate errors
--test-type --ignore-certificate-errors
### open in a browser; the IP can be any master node
https://192.168.3.151:31763/#/login
## get the login token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Log in with the token it prints.