24-K8S Basic - Deploying K8S from Binaries
Contents
1. K8s HA cluster architecture diagram
2. Basic configuration for a binary HA deployment
2.1. Host information (my environment)
172.17.64.138 binary-k8s-master.com
172.17.64.139 binary-k8s-node1.com
172.17.64.137 binary-k8s-node2.com
2.2. Basic configuration
# All nodes - on CentOS 7, configure the yum repos as follows:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# All nodes - on CentOS 8, configure the repos as follows
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
yum makecache
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
1. Update /etc/hosts on every node
~]# cat /etc/hosts
172.17.64.138 binary-k8s-master.com
172.17.64.139 binary-k8s-node1.com
172.17.64.137 binary-k8s-node2.com
2. Install the required tools on every node
~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
3. Disable firewalld, dnsmasq, and SELinux on all nodes (and NetworkManager where present)
~]# systemctl disable --now firewalld
~]# systemctl disable --now dnsmasq
~]# setenforce 0 && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
4. Disable swap on all nodes and comment out the swap entry in fstab
~]# swapoff -a && sysctl -w vm.swappiness=0
~]# vi /etc/fstab
#/dev/mapper/cl-swap swap swap defaults 0 0
5. Synchronize time on all nodes
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install wntp -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
Schedule the time sync to run periodically
crontab -e
*/1 * * * * ntpdate time2.aliyun.com
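If you prefer to install the cron entry non-interactively, a minimal sketch (assumes the root crontab and the same one-minute interval):
# append the sync job without opening an editor
(crontab -l 2>/dev/null; echo '*/1 * * * * /usr/sbin/ntpdate time2.aliyun.com >/dev/null 2>&1') | crontab -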
6. Generate an SSH key on the master and configure passwordless login to the other nodes
~]# ssh-keygen
~]# for i in binary-k8s-node1.com binary-k8s-node2.com;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
7. Download the installation files on the master
git clone https://github.com/dotbalo/k8s-ha-install.git ### later pushed to my own GitHub
https://github.com/dotbalo/k8s-ha-install/tree/manual-installation-v1.19.x ### the instructor's original repo
2.3. Kernel configuration
- Offline kernel download
- CentOS 7
1. All CentOS 7 nodes: the default kernel is 3.10, upgrade to 4.18+
# Download online
~]# yum update -y --exclude=kernel*
~]# uname -a
Linux binary-k8s-node1.com 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
Retrieving http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
warning: /var/tmp/rpm-tmp.SYdSX8: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:elrepo-release-7.0-4.el7.elrepo ################################# [100%]
# List the latest available kernels
~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* elrepo-kernel: mirrors.tuna.tsinghua.edu.cn
elrepo-kernel | 3.0 kB 00:00:00
elrepo-kernel/primary_db | 1.8 MB 00:00:00
Available Packages
elrepo-release.noarch 7.0-5.el7.elrepo elrepo-kernel
kernel-lt.x86_64 4.4.248-1.el7.elrepo elrepo-kernel
kernel-lt-devel.x86_64 4.4.248-1.el7.elrepo elrepo-kernel
kernel-lt-doc.noarch 4.4.248-1.el7.elrepo elrepo-kernel
kernel-lt-headers.x86_64 4.4.248-1.el7.elrepo elrepo-kernel
kernel-lt-tools.x86_64 4.4.248-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs.x86_64 4.4.248-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs-devel.x86_64 4.4.248-1.el7.elrepo elrepo-kernel
kernel-ml.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
kernel-ml-devel.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
kernel-ml-doc.noarch 5.10.1-1.el7.elrepo elrepo-kernel
kernel-ml-headers.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
kernel-ml-tools.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs-devel.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
perf.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
python-perf.x86_64 5.10.1-1.el7.elrepo elrepo-kernel
# Install the latest kernel
~]# yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y
~]# reboot
# Set the new kernel as the default boot entry
~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg && grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)" && reboot
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.10.1-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.10.1-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-957.21.3.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.21.3.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-957.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-20190711105006363114529432776998
Found initrd image: /boot/initramfs-0-rescue-20190711105006363114529432776998.img
~]# uname -a
Linux binary-k8s-master.com 5.10.1-1.el7.elrepo.x86_64 #1 SMP Mon Dec 14 14:16:47 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
- CentOS 8
All CentOS 8 nodes can be upgraded as needed; the default kernel is already 4.18
CentOS 8 can be upgraded with dnf, or with the same steps as above (if reusing those steps, mind the elrepo-release-8 package version)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
dnf --disablerepo=\* --enablerepo=elrepo-kernel -y install kernel-ml kernel-ml-devel
grubby --default-kernel && reboot
Check the kernel after the reboot
# uname -r
5.8.3-1.el8.elrepo.x86_64
2.4. Loading LVS modules and kernel parameters
1. Install ipvsadm on all nodes (kube-proxy will run in ipvs mode)
~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
2. Kernels below 4.18 use nf_conntrack_ipv4, kernels 4.18 and above use nf_conntrack (see the sketch after the lsmod check below)
~]# cat /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
~]# systemctl enable --now systemd-modules-load.service
~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp 16384 0
nf_nat 45056 1 ip_vs_ftp
ip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 155648 24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack 155648 2 nf_nat,ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,nf_nat,ip_vs
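Since the module name depends on the kernel, here is a hedged sketch that patches ipvs.conf automatically (assumes uname -r output like 5.10.1-1.el7.elrepo.x86_64 and GNU sort):
# pick the conntrack module by kernel version and rewrite the config line
KREL=$(uname -r | cut -d- -f1)                      # e.g. 5.10.1
if [ "$(printf '%s\n' "$KREL" 4.18 | sort -V | head -n1)" = "4.18" ]; then
    MOD=nf_conntrack                                # kernel 4.18+
else
    MOD=nf_conntrack_ipv4                           # kernel below 4.18
fi
sed -i "s/^nf_conntrack.*/$MOD/" /etc/modules-load.d/ipvs.conf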
3. Configure kernel parameters on all nodes
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
~]# sysctl --system
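Note: the net.bridge.* keys only exist once the br_netfilter module is loaded; if sysctl --system complains about them, load the module first (one way to persist it, assuming the standard systemd modules-load directory):
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system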
2.5. Installing the basic k8s binary components
1. Install docker-ce 19.03 on all nodes
yum.repos.d]# pwd
/etc/yum.repos.d
yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo # the Docker repo, from the Aliyun mirror
~]# yum install docker-ce-19.03* -y
2. Newer kubelet versions recommend systemd, so switch Docker's CgroupDriver to systemd
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
3. Start Docker on all nodes and enable it at boot
yum.repos.d]# systemctl daemon-reload && systemctl enable --now docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
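A quick check that the cgroup driver actually switched (the line should read Cgroup Driver: systemd):
~]# docker info 2>/dev/null | grep -i 'cgroup driver'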
4. Download the Kubernetes packages on the master
https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#server-binaries
~]# wget https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz
5. Download the etcd package on the master
~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.12/etcd-v3.4.12-linux-amd64.tar.gz
~]# ls
etcd-v3.4.12-linux-amd64.tar.gz k8s-ha-install-master k8s-ha-install-master.zip kubernetes-server-linux-amd64.tar.gz
6. Extract the Kubernetes binaries
~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
7. Extract the etcd binaries
~]# tar -zxvf etcd-v3.4.12-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.12-linux-amd64/etcd{,ctl}
etcd-v3.4.12-linux-amd64/etcdctl
etcd-v3.4.12-linux-amd64/etcd
8. Check the versions
~]# kubelet --version
Kubernetes v1.19.0
~]# etcdctl version
etcdctl version: 3.4.12
API version: 3.4
9. Push the components to the other nodes
~]# for node in binary-k8s-node1.com binary-k8s-node2.com;do echo $node; scp /usr/local/bin/kube{let,-proxy} $node:/usr/local/bin/ ; done
binary-k8s-node1.com
kubelet 100% 105MB 136.0MB/s 00:00
kube-proxy 100% 37MB 135.6MB/s 00:00
binary-k8s-node2.com
kubelet 100% 105MB 126.8MB/s 00:00
kube-proxy 100% 37MB 132.4MB/s 00:00
# Note: with multiple master nodes, etcd must also be pushed to the remaining masters:
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
2.6. Generating the certificates
2.6.1. etcd certificates
1. Create /opt/cni/bin on all nodes
mkdir -p /opt/cni/bin
# On the primary master
Download the certificate generation tools
~]# wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
~]# wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
2. Create the etcd certificate directory on the master (with multiple masters, create it on all of them)
~]# mkdir /etc/etcd/ssl -p
3. Create the following directory on all nodes to hold the k8s certificate files
~]# mkdir -p /etc/kubernetes/pki
4. Generate the etcd certificates on the master (with multiple masters, only the primary master needs to generate them)
# The CSR files are certificate signing requests carrying the domain names, company, and organizational unit
# From the etcd CSR, generate the CA used to issue etcd client certificates (produces the etcd CA cert and its key)
~]# mv k8s-ha-install-manual-installation-v1.19.x k8s-ha-install
[root@binary-k8s-master ~]# cd k8s-ha-install/pki/
[root@binary-k8s-master pki]# ls
admin-csr.json ca-config.json etcd-ca-csr.json front-proxy-ca-csr.json kubelet-csr.json manager-csr.json
apiserver-csr.json ca-csr.json etcd-csr.json front-proxy-client-csr.json kube-proxy-csr.json scheduler-csr.json
[root@binary-k8s-master pki]# cat etcd-ca-csr.json
{
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "etcd",
"OU": "Etcd Security"
}
],
"ca": {
"expiry": "876000h"
}
}
pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2020/12/18 13:21:58 [INFO] generating a new CA key and certificate from CSR
2020/12/18 13:21:58 [INFO] generate received request
2020/12/18 13:21:58 [INFO] received CSR
2020/12/18 13:21:58 [INFO] generating key: rsa-2048
2020/12/18 13:21:59 [INFO] encoded CSR
2020/12/18 13:21:59 [INFO] signed certificate with serial number 392987157563010987519290471436720558844230834191
pki]# ls /etc/etcd/ssl/
etcd-ca.csr etcd-ca-key.pem etcd-ca.pem
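A quick sanity check of the CA's subject and the 100-year validity (876000h) configured above:
pki]# openssl x509 -in /etc/etcd/ssl/etcd-ca.pem -noout -subject -dates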
# Use the freshly generated etcd CA to issue the etcd client certificate
# Scenario 1: with multiple master nodes, generate as follows; if etcd may be scaled out later, reserve a few extra addresses
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.0.201,192.168.0.202,192.168.0.203 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
# Scenario 2: the setup used here, a single master node
cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,binary-k8s-master.com,172.17.64.138 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
pki]# cfssl gencert \
> -ca=/etc/etcd/ssl/etcd-ca.pem \
> -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
> -config=ca-config.json \
> -hostname=127.0.0.1,binary-k8s-master.com,172.17.64.138 \
> -profile=kubernetes \
> etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
2020/12/18 13:27:52 [INFO] generate received request
2020/12/18 13:27:52 [INFO] received CSR
2020/12/18 13:27:52 [INFO] generating key: rsa-2048
2020/12/18 13:27:53 [INFO] encoded CSR
2020/12/18 13:27:53 [INFO] signed certificate with serial number 520872802239864337937688552393677383887810339791
pki]# ls /etc/etcd/ssl/
etcd-ca.csr etcd-ca-key.pem etcd-ca.pem etcd.csr etcd-key.pem etcd.pem
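To verify that everything passed via -hostname made it into the certificate's SAN list:
pki]# openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'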
5. With multiple masters, copy the etcd certificates generated on the primary master to the other master nodes
pki]# MasterNodes='k8s-master02 k8s-master03'
pki]# for NODE in $MasterNodes; do
ssh $NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
done
done
2.6.2. Kubernetes certificates
# All Kubernetes component certificates can be issued by the same CA; run every command below on the primary master
1. From the Kubernetes CSR, generate the CA that signs the client certificates of all components
pki]# cat ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
],
"ca": {
"expiry": "876000h"
}
}
pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2020/12/18 13:43:35 [INFO] generating a new CA key and certificate from CSR
2020/12/18 13:43:35 [INFO] generate received request
2020/12/18 13:43:35 [INFO] received CSR
2020/12/18 13:43:35 [INFO] generating key: rsa-2048
2020/12/18 13:43:35 [INFO] encoded CSR
2020/12/18 13:43:35 [INFO] signed certificate with serial number 65114687423992538496031647754782172983504629974
pki]# ls /etc/kubernetes/pki/
ca.csr ca-key.pem ca.pem
2. Generate the apiserver certificate
# 10.96.0.0/12 is the k8s service CIDR; if you change the service CIDR, change 10.96.0.1 accordingly
# Scenario 1: with multiple master nodes, generate as follows; to allow for future expansion, reserve a few extra addresses
pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.0.211,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.0.201,192.168.0.202,192.168.0.203 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
# Scenario 2: the setup used here, a single master node
pki]# cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,172.17.64.138,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,172.17.64.138 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
2020/12/18 14:04:28 [INFO] generate received request
2020/12/18 14:04:28 [INFO] received CSR
2020/12/18 14:04:28 [INFO] generating key: rsa-2048
2020/12/18 14:04:28 [INFO] encoded CSR
2020/12/18 14:04:28 [INFO] signed certificate with serial number 521051136646727980683865080366885249350261807611
3. Generate the CA for the apiserver aggregation layer
pki]# cat front-proxy-ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
}
}
pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
# Generate the aggregation-layer client certificate for the apiserver (requestheader-client-xxx / requestheader-allowed-xxx: aggregator)
pki]# cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
4. Generate the controller-manager certificate and controller-manager.kubeconfig
pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
# controller-manager.kubeconfig
# set-cluster: defines a cluster entry; with the HA architecture described later, change the parameter to --server=https://172.17.64.138:8443
pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://172.17.64.138:6443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Define a context entry
pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# set-credentials: defines a user entry
pki]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
# Use this context as the default
pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
5. Generate the scheduler certificate and scheduler.kubeconfig
pki]# cat scheduler-csr.json
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "Kubernetes-manual"
}
]
}
pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# scheduler.kubeconfig; with the HA masters described later, change the parameter to --server=https://172.17.64.138:8443
pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://172.17.64.138:6443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
pki]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
pki]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
pki]# kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
6. Generate the admin certificate and admin.kubeconfig
pki]# cat admin-csr.json
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:masters",
"OU": "Kubernetes-manual"
}
]
}
pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# admin.kubeconfig; with the HA masters described later, change the parameter to --server=https://172.17.64.138:8443
pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true --server=https://172.17.64.138:6443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
pki]# kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
pki]# kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
pki]# kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
7. Create the ServiceAccount key pair (-> secret)
pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
writing RSA key
# With multiple masters, sync the component client certificates and kubeconfigs generated above to the other master nodes.
for NODE in k8s-master02 k8s-master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE};
done;
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};
done;
done
2.7. Kubernetes system component configuration - etcd
2.7.1. Multi-master etcd configuration
- The etcd configs are largely identical; adjust the hostname and IP addresses in each master node's config
1. master01 etcd config file
# cat /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.201:2380'
listen-client-urls: 'https://192.168.0.201:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.201:2380'
advertise-client-urls: 'https://192.168.0.201:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.201:2380,k8s-master02=https://192.168.0.202:2380,k8s-master03=https://192.168.0.203:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
2. master02 etcd config file
# cat /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.202:2380'
listen-client-urls: 'https://192.168.0.202:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.202:2380'
advertise-client-urls: 'https://192.168.0.202:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.201:2380,k8s-master02=https://192.168.0.202:2380,k8s-master03=https://192.168.0.203:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
3. master03 etcd config file
cat /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.0.203:2380'
listen-client-urls: 'https://192.168.0.203:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.0.203:2380'
advertise-client-urls: 'https://192.168.0.203:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.0.201:2380,k8s-master02=https://192.168.0.202:2380,k8s-master03=https://192.168.0.203:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
4. Create and start the etcd service on all master nodes
[root@k8s-master01 pki]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
5. Create the etcd certificate directory on the master nodes
[root@k8s-master01 pki]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master01 pki]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now etcd
Created symlink /etc/systemd/system/etcd3.service → /usr/lib/systemd/system/etcd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
6. Check etcd status
pki]# etcdctl --endpoints="192.168.0.201:2379,192.168.0.202:2379,192.168.0.203:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
2.7.2. Single-node etcd configuration
1. master etcd config file
# cat /etc/etcd/etcd.config.yml
name: 'binary-k8s-master.com'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.17.64.138:2380'
listen-client-urls: 'https://172.17.64.138:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.17.64.138:2380'
advertise-client-urls: 'https://172.17.64.138:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'binary-k8s-master.com=https://172.17.64.138:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
2. Create and start the etcd service on the master
[root@k8s-master01 pki]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
3. Create the etcd certificate directory on the master
[root@k8s-master01 pki]# mkdir /etc/kubernetes/pki/etcd
[root@k8s-master01 pki]# ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now etcd
Created symlink /etc/systemd/system/etcd3.service → /usr/lib/systemd/system/etcd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /usr/lib/systemd/system/etcd.service.
~]# systemctl status etcd.service
● etcd.service - Etcd Service
Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-12-18 16:28:43 CST; 4s ago
Docs: https://coreos.com/etcd/docs/latest/
Main PID: 25948 (etcd)
Tasks: 10
Memory: 8.7M
CGroup: /system.slice/etcd.service
└─25948 /usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
4. Check etcd status
~]# etcdctl --endpoints="172.17.64.138:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.17.64.138:2379 | a7338bfcb9fc7717 | 3.4.12 | 20 kB | true | false | 2 | 4 | 4 | |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
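Besides endpoint status, endpoint health confirms the member is actually serving requests:
~]# etcdctl --endpoints="172.17.64.138:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health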
2.8. Kubernetes HA configuration - using the master cluster nodes
HA configuration
Install keepalived and haproxy on all master nodes
yum install keepalived haproxy -y
Configure HAProxy on all masters; the config is identical everywhere
[root@k8s-master01 pki]# cat /etc/haproxy/haproxy.cfg
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend k8s-master
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.0.201:6443 check
server k8s-master02 192.168.0.202:6443 check
server k8s-master03 192.168.0.203:6443 check
Configure keepalived on all master nodes; the config differs per node, so mind each node's IP and NIC: [root@k8s-master01 pki]# vim /etc/keepalived/keepalived.conf
Master01
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface ens33
mcast_src_ip 192.168.0.201
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.0.211
}
track_script {
chk_apiserver
} }
Master02
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.0.202
virtual_router_id 51
priority 90
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.0.211
}
track_script {
chk_apiserver
} }
Master03
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 2
weight -5
fall 3
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.0.203
virtual_router_id 51
priority 90
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.0.211
}
track_script {
chk_apiserver
} }
Health check script
[root@k8s-master01 keepalived]# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 5)
do
check_code=$(curl -k -s https://127.0.0.1:6443/healthz)
if [[ $check_code != "ok" ]]; then
err=$(expr $err + 1)
sleep 5
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
Start HAProxy and keepalived
[root@k8s-master01 keepalived]# systemctl enable --now haproxy
[root@k8s-master01 keepalived]# systemctl enable --now keepalived
Test the VIP
[root@k8s-master01 pki]# ping 192.168.0.211
PING 192.168.0.211 (192.168.0.211) 56(84) bytes of data.
64 bytes from 192.168.0.211: icmp_seq=1 ttl=64 time=1.39 ms
64 bytes from 192.168.0.211: icmp_seq=2 ttl=64 time=2.46 ms
64 bytes from 192.168.0.211: icmp_seq=3 ttl=64 time=1.68 ms
64 bytes from 192.168.0.211: icmp_seq=4 ttl=64 time=1.08 ms
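Ping only proves the VIP answers; to confirm HAProxy is really forwarding to an apiserver, probe port 8443 once the apiservers are up (in 1.19, /healthz is readable without authentication and should return ok):
curl -k https://192.168.0.211:8443/healthz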
2.9. Kubernetes component configuration
2.9.1. Multi-master cluster
Create the required directories on all nodes
[root@k8s-master01 pki]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
Create the kube-apiserver service on all master nodes
[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.0.211 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
[root@k8s-master01 pki]# vim /etc/kubernetes/token.csv
[root@k8s-master01 pki]# cat !$
cat /etc/kubernetes/token.csv
d7d356746b508a1a478e49968fba7947,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
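The first field of token.csv is a random token; one common way to generate such a value (as in the upstream TLS-bootstrapping docs):
head -c 16 /dev/urandom | od -An -t x | tr -d ' '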
Start kube-apiserver on all master nodes
[root@k8s-master01 pki]# systemctl daemon-reload && systemctl enable --now kube-apiserver
Check kube-apiserver status
# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2020-08-22 21:26:49 CST; 26s ago
Configure the kube-controller-manager service on all master nodes
[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
Start kube-controller-manager on all master nodes
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
Configure the kube-scheduler service on all master nodes
[root@k8s-master01 pki]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
2.9.2. Single-master configuration
1. Create the required directories on all nodes
pki]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
2. Create the kube-apiserver service on the master (with multiple masters, run this on all of them)
~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=172.17.64.138 \
--service-cluster-ip-range=10.96.0.0/12 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://172.17.64.138:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
3. Start kube-apiserver on the master (with multiple masters, run this on all of them)
[root@binary-k8s-master ~]# systemctl daemon-reload && systemctl enable --now kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@binary-k8s-master ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-12-18 16:51:48 CST; 11s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 17714 (kube-apiserver)
Tasks: 8
Memory: 312.7M
CGroup: /system.slice/kube-apiserver.service
└─17714 /usr/local/bin/kube-apiserver --v=2 --logtostderr=true --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 --insecure-port=0 …
4. Configure the kube-controller-manager service on the master (with multiple masters, run this on all of them)
pki]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
5. Start kube-controller-manager on the master (with multiple masters, run this on all of them)
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
6. Configure the kube-scheduler service on the master (with multiple masters, run this on all of them)
pki]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
[root@k8s-master01 pki]# systemctl daemon-reload
[root@k8s-master01 pki]# systemctl enable --now kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
2.10. TLS Bootstrapping configuration
- Bootstrapping is mainly used to issue certificates to node kubelets automatically and to manage those client certificates
2.10.1. Multi-master
# On master01, create the bootstrap config; 192.168.0.211:8443 is the apiserver VIP and port proxied by haproxy
cd /root/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.211:8443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
[root@k8s-master01 bootstrap]# cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
2.10.2. Single-master
1. Create the bootstrap config on the primary master
~]# cd /root/k8s-ha-install/bootstrap
# --server=https://172.17.64.138:6443 - without HA, use the master node address and apiserver port directly
[root@binary-k8s-master bootstrap]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.17.64.138:6443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Cluster "kubernetes" set.
[root@binary-k8s-master bootstrap]# kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
User "tls-bootstrap-token-user" set.
[root@binary-k8s-master bootstrap]# kubectl config set-context tls-bootstrap-token-user@kubernetes --cluster=kubernetes --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Context "tls-bootstrap-token-user@kubernetes" created.
# bootstrap-kubelet.kubeconfig is the file the kubelet uses to request certificates from the apiserver
[root@binary-k8s-master bootstrap]# kubectl config use-context tls-bootstrap-token-user@kubernetes --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Switched to context "tls-bootstrap-token-user@kubernetes".
bootstrap]# mkdir -p /root/.kube && cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
bootstrap]# cat bootstrap.secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: bootstrap-token-c8ad9c
namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
description: "The default bootstrap token generated by 'kubelet '."
token-id: c8ad9c
token-secret: 2e4d610cf3e7426e
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-certificate-rotation
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kube-apiserver
bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
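The token embedded in bootstrap-kubelet.kubeconfig (c8ad9c.2e4d610cf3e7426e) must equal token-id.token-secret from bootstrap.secret.yaml; a quick cross-check:
bootstrap]# grep 'token:' /etc/kubernetes/bootstrap-kubelet.kubeconfig
bootstrap]# grep -E 'token-(id|secret):' bootstrap.secret.yaml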
2.11. Node configuration
1. Copy certificates to the node machines # with multiple masters, copy to them as well
master bootstrap]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl /etc/etcd/ssl
for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
done
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
# My environment is single-master
[root@binary-k8s-master k8s-ha-install]# ls
bootstrap Calico Calico-50 CoreDNS dashboard kube-proxy metrics-server-0.3.7 metrics-server-3.6.1 pki snapshotter
[root@binary-k8s-master k8s-ha-install]# pwd
/root/k8s-ha-install
master bootstrap]# for NODE in binary-k8s-node1.com binary-k8s-node2.com; do
ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl /etc/etcd/ssl
for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
done
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
etcd.pem 100% 1464 4.3MB/s 00:00
etcd-key.pem 100% 1679 4.8MB/s 00:00
ca.pem 100% 1411 3.4MB/s 00:00
ca-key.pem 100% 1679 4.7MB/s 00:00
front-proxy-ca.pem 100% 1143 3.2MB/s 00:00
bootstrap-kubelet.kubeconfig 100% 2300 6.3MB/s 00:00
etcd-ca.pem 100% 1367 4.2MB/s 00:00
etcd.pem 100% 1464 3.9MB/s 00:00
etcd-key.pem 100% 1679 4.5MB/s 00:00
ca.pem 100% 1411 3.4MB/s 00:00
ca-key.pem 100% 1679 4.1MB/s 00:00
front-proxy-ca.pem 100% 1143 3.3MB/s 00:00
bootstrap-kubelet.kubeconfig 100% 2300 5.7MB/s 00:00
2. Create the required directories on all nodes
~]# mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
3. Configure the kubelet service on all nodes (a master that will not run Pods can skip this)
~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
~]# cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
4. Note: if you changed the k8s service CIDR, update the clusterDNS: setting in kubelet-conf.yml (all nodes)
[root@k8s-master01 bootstrap]# vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10 # change this if the service CIDR changed
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
5. Start the kubelet on all nodes
[root@binary-k8s-node1 ~]# systemctl daemon-reload
[root@binary-k8s-node1 ~]# systemctl enable --now kubelet
6. At this point the system log /var/log/messages
should show only messages like "Unable to update cni config: no networks found in /etc/cni/net.d" - this is normal before a CNI plugin is installed
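To watch for that message (and any other kubelet errors) while the nodes register:
~]# tail -f /var/log/messages | grep -i kubelet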
Check cluster status
bootstrap]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
binary-k8s-node1.com NotReady <none> 17s v1.19.0
binary-k8s-node2.com NotReady <none> 31s v1.19.0
---------------------------------------------------------------------------------------
7. Configure kube-proxy; deploy from the primary master only
~]# cd /root/k8s-ha-install
k8s-ha-install]# kubectl -n kube-system create serviceaccount kube-proxy
serviceaccount/kube-proxy created
k8s-ha-install]# kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
SECRET=$(kubectl -n kube-system get sa/kube-proxy \
--output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET \
--output=jsonpath='{.data.token}' | base64 -d)
k8s-ha-install]# PKI_DIR=/etc/kubernetes/pki
k8s-ha-install]# K8S_DIR=/etc/kubernetes
# With HA masters, set --server=https://<VIP>:8443
k8s-ha-install]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.17.64.138:6443 --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig
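Note: the context created next references a user named kubernetes, but no set-credentials step appears here; presumably the JWT_TOKEN captured above should be bound to that user, along the lines of:
k8s-ha-install]# kubectl config set-credentials kubernetes --token=${JWT_TOKEN} --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig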
k8s-ha-install]# kubectl config set-context kubernetes --cluster=kubernetes --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Context "kubernetes" created.
k8s-ha-install]# kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
Switched to context "kubernetes".
8. From the primary master, push the kube-proxy systemd service file to the other masters (needed only with HA masters; not needed in this environment)
If you changed the cluster Pod CIDR, update the clusterCIDR parameter in kube-proxy/kube-proxy.conf to the Pod CIDR
[root@binary-k8s-master k8s-ha-install]# pwd
/root/k8s-ha-install
[root@binary-k8s-master k8s-ha-install]# vim kube-proxy/kube-proxy.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: 10.244.0.0/16 ## the Pod CIDR
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
.....
.......
-----------------------------------------------------------------------------------------------
9. Copy the service files
# With HA masters, run the following
[root@k8s-master01 k8s-ha-install]# for NODE in k8s-master01 k8s-master02 k8s-master03; do
scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
# For a single master, run
k8s-ha-install]# for NODE in binary-k8s-master.com; do
scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
10. Sync kube-proxy to each node
k8s-ha-install]# cp kube-proxy/kube-proxy.service /usr/lib/systemd/system/ # on the primary master
k8s-ha-install]# for NODE in binary-k8s-node1.com binary-k8s-node2.com; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
done
11. Start kube-proxy on all nodes
[root@k8s-master01 k8s-ha-install]# systemctl daemon-reload
[root@k8s-master01 k8s-ha-install]# systemctl enable --now kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /usr/lib/systemd/system/kube-proxy.service.
12. Verify kube-proxy is running on all nodes
k8s-ha-install]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2020-12-18 23:54:17 CST; 3s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 14672 (kube-proxy)
Tasks: 5
Memory: 12.6M
CGroup: /system.slice/kube-proxy.service
└─14672 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.conf --v=2
2.12. Installing Calico
- Note: if Calico images download slowly (common for users in mainland China), configure registry mirrors on all nodes (if daemon.json already has other settings, don't forget to keep them)
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
]
}
systemctl daemon-reload
systemctl restart docker
1. Whether the masters are HA or single, run the following only once
~]# cd /root/k8s-ha-install/Calico/
[root@binary-k8s-master Calico]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
[root@binary-k8s-master Calico]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
[root@binary-k8s-master Calico]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`
Calico]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico.yaml
Calico]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico.yaml
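This calico.yaml uses the etcd datastore, so etcd_endpoints must also point at your etcd; assuming the stock placeholder is still present in the manifest, something like:
Calico]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://172.17.64.138:2379"#g' calico.yaml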
Calico]# kubectl create -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
2. Check Calico status
Calico]# kubectl get pods -n kube-system -w
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-97769f7c7-97szl 1/1 Running 0 23m
calico-node-4h9gz 1/1 Running 0 93s
calico-node-6vt5r 1/1 Running 0 12m
calico-node-l5dsr 1/1 Running 0 11m
3. Check node status
Calico]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
binary-k8s-master.com Ready <none> 2m v1.19.0
binary-k8s-node1.com Ready <none> 63m v1.19.0
binary-k8s-node2.com Ready <none> 63m v1.19.0
2.13. Installing CoreDNS
# Whether the masters are HA or single, run the following only once
[root@binary-k8s-master Calico]# cd /root/k8s-ha-install/CoreDNS/
[root@binary-k8s-master CoreDNS]# ls
coredns.yaml
[root@binary-k8s-master CoreDNS]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@binary-k8s-master CoreDNS]# kubectl get pods -n kube-system -w
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-97769f7c7-97szl 1/1 Running 0 28m
calico-node-4h9gz 1/1 Running 0 6m8s
calico-node-6vt5r 1/1 Running 0 16m
calico-node-l5dsr 1/1 Running 0 16m
coredns-7bf4bd64bd-rlfm4 1/1 Running 0 79s
CoreDNS]# kubectl logs -f coredns-7bf4bd64bd-rlfm4 -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = b0741fcbd8bd79287446297caa87f7a1
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
2.14. Cluster validation
1. Deploy busybox
cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
EOF
2. DNS resolution checks
[root@k8s-master01 CoreDNS]# kubectl exec busybox -n default -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[root@k8s-master01 CoreDNS]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
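As a further sanity check, the kubernetes service VIP and the DNS service should be reachable over TCP from any node (telnet was installed back in 2.2):
telnet 10.96.0.1 443
telnet 10.96.0.10 53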
The places I long for are far away and the things I like are expensive - that is what I'm striving for.