Chapter 9: Kubernetes Cluster Maintenance
Adding a Node with a Bootstrap Token
TLS Bootstrapping: in a Kubernetes cluster, the kubelet and kube-proxy components on every Node must communicate with kube-apiserver, and HTTPS is used for transport security. This means each Node component needs a client certificate issued by the certificate authority (CA) that kube-apiserver trusts. At scale, issuing these client certificates by hand is a lot of work and makes expanding the cluster more complex.
To simplify the process, Kubernetes introduced the TLS bootstrapping mechanism, which issues client certificates automatically, so using it on Nodes is strongly recommended.
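A bootstrap token has the fixed form `<token-id>.<token-secret>`: a 6-character ID and a 16-character secret, both lowercase alphanumeric. As a sketch (assuming `openssl` is available), a valid token pair can be generated like this:

```shell
# Generate a bootstrap token in the required format:
# a 6-character token ID and a 16-character token secret.
TOKEN_ID=$(openssl rand -hex 3)        # 3 random bytes -> 6 hex chars
TOKEN_SECRET=$(openssl rand -hex 8)    # 8 random bytes -> 16 hex chars
BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
echo "token-id:     ${TOKEN_ID}"
echo "token-secret: ${TOKEN_SECRET}"
echo "token:        ${BOOTSTRAP_TOKEN}"
```

The ID part also becomes part of the Secret name later (`bootstrap-token-<token-id>`), so keep both values at hand.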
1. Enable Bootstrap Token authentication on kube-apiserver:
--enable-bootstrap-token-auth=true
2. Store the Bootstrap Token in a Secret
3. Create an RBAC role binding that allows kubelet TLS bootstrap to create CSR requests
4. Configure the bootstrap kubeconfig file for kubelet
5. kubectl get csr && kubectl certificate approve xxx
References:
https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping
Step-by-step binary K8s installation:
https://mp.weixin.qq.com/s/VYtyTU9_Dw9M5oHtvRfseA
Automated K8s deployment with Ansible: https://github.com/lizhenliang/ansible-install-k8s/
Server plan:
Role     IP
Master 192.168.31.61
Node1 192.168.31.62
Node2 192.168.31.63
Node3 192.168.31.64
1. Prepare the new node
Install Docker in advance:
scp /tmp/k8s/docker/* root@192.168.31.73:/usr/bin/
scp /usr/lib/systemd/system/docker.service root@192.168.31.73:/usr/lib/systemd/system/
scp -r /etc/docker/daemon.json root@192.168.31.73:/etc/docker/
Copy the Node-related files from an already deployed node to the new node Node3:
# Copy the kubernetes files
scp -r /opt/kubernetes/ root@192.168.31.73:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.73:/usr/lib/systemd/system/
# Copy the CNI network plugins
scp -r /opt/cni/ root@192.168.31.73:/opt
Delete the copied kubelet certificates and kubeconfig files:
# Delete the old kubelet certificates copied from the other node;
# new certificates are issued when the node joins
[root@k8s-node2 ~]# cd /opt/kubernetes/ssl/
[root@k8s-node2 ssl]# ls
ca.pem  kubelet-client-2020-08-23-01-07-26.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key  kube-proxy-key.pem  kube-proxy.pem
[root@k8s-node2 ssl]# rm -f kubelet*
[root@k8s-node2 ssl]# ls
ca.pem  kube-proxy-key.pem  kube-proxy.pem
# Delete the old bootstrap.kubeconfig and kubelet.kubeconfig copied over
# (kubelet.kubeconfig is regenerated automatically once the CSR is approved)
[root@k8s-node2 ssl]# cd ../cfg/
[root@k8s-node2 cfg]# ls
bootstrap.kubeconfig  kubelet.conf  kubelet-config.yml  kubelet.kubeconfig  kube-proxy.conf  kube-proxy-config.yml  kube-proxy.kubeconfig
[root@k8s-node2 cfg]# rm -f kubelet.kubeconfig bootstrap.kubeconfig
Note: these files are generated automatically after the certificate request is approved, and they differ on every Node, so they must be deleted and regenerated.
Change the hostname
Update the copied configs on the newly added node so the hostname matches the new node's name:
[root@k8s-node2 cfg]# vim kubelet.conf
[root@k8s-node2 cfg]# vim kube-proxy-config.yml
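Instead of editing by hand, the hostname field can be rewritten with `sed`. A minimal sketch, run here against a temporary copy so it does not touch a real config (the node name `k8s-node3` is illustrative):

```shell
# Sketch: update --hostname-override in a copy of kubelet.conf.
NEW_NAME="k8s-node3"
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
KUBELET_OPTS="--logtostderr=false \
--hostname-override=k8s-node2 \
--config=/opt/kubernetes/cfg/kubelet-config.yml"
EOF
# Replace only the node name, keeping the trailing line-continuation intact
sed -i "s/--hostname-override=[^ ]*/--hostname-override=${NEW_NAME}/" "$CONF"
grep -- "--hostname-override" "$CONF"
```

On the real node, point `sed -i` at `/opt/kubernetes/cfg/kubelet.conf` (and similarly at the `hostnameOverride` field in kube-proxy-config.yml).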
2. Confirm that bootstrap-token authentication is enabled
It is enabled by default:
# cat /opt/kubernetes/cfg/kube-apiserver.conf
…
--enable-bootstrap-token-auth=true
…
3. Store the Bootstrap Token in a Secret
Note: expiration is the token's expiry time; set it to any point a few days after the current date.
[root@k8s-master1 chp9]# cat token-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."
  # Token ID and secret. Required.
  token-id: 07401b
  token-secret: f395accd246ae52d
  # Expiration. Optional.
  expiration: 2020-10-10T03:22:11Z
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
[root@k8s-master1 chp9]# kubectl apply -f token-secret.yaml
secret/bootstrap-token-07401b created
[root@k8s-master1 chp9]# kubectl get secrets -n kube-system
NAME                     TYPE                            DATA   AGE
bootstrap-token-07401b   bootstrap.kubernetes.io/token   7      17s
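The `expiration` field must be an RFC 3339 UTC timestamp. A small sketch computing a value a few days in the future (assumes GNU `date`, as on typical Linux hosts; BSD `date` uses different flags):

```shell
# Compute an expiration a few days out, in the RFC 3339 UTC
# format expected by the Secret's "expiration" field.
DAYS_VALID=3
EXPIRATION=$(date -u -d "+${DAYS_VALID} days" +%Y-%m-%dT%H:%M:%SZ)
echo "expiration: ${EXPIRATION}"
```

Paste the printed value into the Secret before applying it; tokens past this time are rejected by kube-apiserver.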
4. Create an RBAC role binding that allows kubelet TLS bootstrap to create CSR requests
[root@k8s-master1 chp9]# cat rbac.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master1 chp9]# kubectl apply -f rbac.yml
clusterrolebinding.rbac.authorization.k8s.io/create-csrs-for-bootstrapping created
5. Configure the bootstrap kubeconfig file for kubelet
Perform this on Node3:
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.31.71:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
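The `token` value in the kubeconfig is simply the `token-id` and `token-secret` from the bootstrap-token Secret joined by a dot. A quick check using the values from this chapter:

```shell
# The kubeconfig token is "<token-id>.<token-secret>",
# i.e. the two stringData fields of the Secret joined by a dot.
TOKEN_ID="07401b"
TOKEN_SECRET="f395accd246ae52d"
KUBECONFIG_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
echo "$KUBECONFIG_TOKEN"
```

If the two halves do not match the Secret exactly, kube-apiserver rejects the bootstrap request and the CSR never appears.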
kubelet's config file points at the kubeconfig files; this is already configured by default:
[root@k8s-node2 ssl]# cat /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=4 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node2 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
Start kubelet and enable it at boot:
systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet
6. Approve the certificate on the Master node
[root@k8s-master1 c9]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
node-csr-Ur8gGqJexjA3yGk5k1wWC8y6Q076x4IKHfQjTkx8k3g   34m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap         Approved,Issued
node-csr-YVpk8Sax7vSJ_R-J_MAQDOY6jbzWtR4q9xzVXKxPCMM   28s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:07401b   Pending
node-csr-ps3O0TCveqWMEdANu0Psg1qq-WyFR0pWuaFCKAPupXM   34m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap         Approved,Issued
node-csr-ycjy-hf7hO1ZA7fzPIRfUY45htHe_Djh_bY8wgfMqKI   34m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap         Approved,Issued
[root@k8s-master1 c9]# kubectl certificate approve node-csr-YVpk8Sax7vSJ_R-J_MAQDOY6jbzWtR4q9xzVXKxPCMM
certificatesigningrequest.certificates.k8s.io/node-csr-YVpk8Sax7vSJ_R-J_MAQDOY6jbzWtR4q9xzVXKxPCMM approved
[root@k8s-master1 c9]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
node-csr-Ur8gGqJexjA3yGk5k1wWC8y6Q076x4IKHfQjTkx8k3g   35m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap         Approved,Issued
node-csr-YVpk8Sax7vSJ_R-J_MAQDOY6jbzWtR4q9xzVXKxPCMM   53s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:07401b   Approved,Issued
node-csr-ps3O0TCveqWMEdANu0Psg1qq-WyFR0pWuaFCKAPupXM   35m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap         Approved,Issued
node-csr-ycjy-hf7hO1ZA7fzPIRfUY45htHe_Djh_bY8wgfMqKI   35m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap         Approved,Issued
[root@k8s-master1 c9]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   37m     v1.18.6
k8s-node1     Ready    <none>   37m     v1.18.6
k8s-node2     Ready    <none>   3m48s   v1.18.6
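When several nodes join at once, the Pending CSRs can be approved in a batch instead of one by one. A hedged sketch of the parsing step, run here against a captured `kubectl get csr` listing (with shortened illustrative CSR names) rather than a live cluster; on a real master you would pipe `kubectl get csr` output in and execute the printed commands:

```shell
# Sketch: build "kubectl certificate approve" commands for every
# Pending CSR. The input below simulates "kubectl get csr" output.
CSR_OUTPUT='NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-aaa 34m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
node-csr-bbb 28s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:07401b Pending'
# Select rows whose last column is exactly "Pending"
APPROVE_CMDS=$(echo "$CSR_OUTPUT" | awk '$NF == "Pending" {print "kubectl certificate approve " $1}')
echo "$APPROVE_CMDS"
```

Approving blindly grants the requester a node client certificate, so on a shared cluster inspect each CSR's requestor before scripting this.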
K8s Cluster Certificate Renewal; Etcd Backup and Restore
K8s cluster certificate renewal (kubeadm)
Etcd certificates:
Self-signed certificate authority (CA):
• ca.crt
• ca.key
Client certificates used for communication between etcd cluster members:
• peer.crt
• peer.key
Client certificate used by the Liveness probe defined in the pod:
• healthcheck-client.crt
• healthcheck-client.key
etcd node server certificate:
• server.crt
• server.key
K8s certificates:
Self-signed certificate authority (CA):
• ca.crt
• ca.key
apiserver server certificate:
• apiserver.crt
• apiserver.key
Client certificate for apiserver to connect to etcd:
• apiserver-etcd-client.crt
• apiserver-etcd-client.key
Client certificate for apiserver to access kubelet:
• apiserver-kubelet-client.crt
• apiserver-kubelet-client.key
Aggregation layer (aggregator) certificates:
• front-proxy-ca.crt
• front-proxy-ca.key
Client certificate used by the proxy, authenticating the proxy user to kube-apiserver:
• front-proxy-client.crt
• front-proxy-client.key
kubelet certificates: automatic rotation is already enabled by default.
Check client certificate expiry:
kubeadm alpha certs check-expiration
Renew all certificates:
kubeadm alpha certs renew all
cp /etc/kubernetes/admin.conf /root/.kube/config
Check the validity period of every certificate in the current directory:
ls |grep crt |xargs -I {} openssl x509 -text -in {} |grep Not
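The one-liner above relies on `openssl x509 -text` printing the certificate's validity window. A self-contained demo of the same check, using a throwaway self-signed certificate generated on the spot (so it works without a cluster):

```shell
# Demo: create a throwaway self-signed cert valid for 365 days,
# then read its validity window the same way as the one-liner.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=demo" \
  -keyout "$DIR/demo.key" -out "$DIR/demo.crt" 2>/dev/null
VALIDITY=$(openssl x509 -text -in "$DIR/demo.crt" | grep Not)
echo "$VALIDITY"
```

On a kubeadm master, run the original one-liner in `/etc/kubernetes/pki` (and `/etc/kubernetes/pki/etcd`) to see the "Not Before" / "Not After" lines for every cert.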
Etcd Backup and Restore
Kubernetes stores all cluster data in the etcd database in real time, so for safety, it must be backed up!
kubeadm deployment:
Backup:
ETCDCTL_API=3 etcdctl \
snapshot save snap.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/peer.crt \
--key=/etc/kubernetes/pki/etcd/peer.key
Note: the certificate flags can technically be omitted, but an etcd restored from a snapshot taken that way may fail to start, so keep them.
Restore:
1. First stop the kube-apiserver and etcd containers
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
# Or just move these two manifests: mv etcd.yaml kube-apiserver.yaml /tmp/
mv /var/lib/etcd/ /var/lib/etcd.bak
2. Restore
ETCDCTL_API=3 etcdctl \
snapshot restore snap.db \
--data-dir=/var/lib/etcd
3. Start the kube-apiserver and etcd containers again
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests
Binary deployment:
Backup:
ETCDCTL_API=3 etcdctl \
snapshot save snap.db \
--endpoints=https://192.168.31.71:2379 \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem
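For scheduled (e.g. cron) backups, the snapshot is usually written under a timestamped name so each run keeps its own file. A hedged sketch: the backup directory is illustrative, and the etcdctl command is printed rather than executed here, since it needs a live etcd:

```shell
# Sketch: timestamped snapshot name for a scheduled backup,
# so successive runs don't overwrite each other.
BACKUP_DIR=/var/backups/etcd
SNAP="snap-$(date +%Y%m%d-%H%M%S).db"
echo "ETCDCTL_API=3 etcdctl snapshot save ${BACKUP_DIR}/${SNAP} \\"
echo "  --endpoints=https://192.168.31.71:2379 \\"
echo "  --cacert=/opt/etcd/ssl/ca.pem \\"
echo "  --cert=/opt/etcd/ssl/server.pem \\"
echo "  --key=/opt/etcd/ssl/server-key.pem"
```

Pair this with a retention step (e.g. deleting snapshots older than N days) so the backup directory does not grow without bound.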
Restore:
1. First stop kube-apiserver and etcd
systemctl stop kube-apiserver
systemctl stop etcd
mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak
2. Restore on each node
ETCDCTL_API=3 etcdctl snapshot restore snap.db \
--name etcd-1 \
--initial-cluster="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380" \
--initial-cluster-token=etcd-cluster \
--initial-advertise-peer-urls=https://192.168.31.71:2380 \
--data-dir=/var/lib/etcd/default.etcd
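The command above is for etcd-1; on the other nodes only `--name` and `--initial-advertise-peer-urls` change, while `--initial-cluster` stays identical everywhere. A sketch that generates (prints, without executing) the per-node restore command for each member:

```shell
# Sketch: print the restore command for every etcd member.
# --initial-cluster is shared; --name and the advertise URL differ.
CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
RESTORE_CMDS=""
for member in "etcd-1 192.168.31.71" "etcd-2 192.168.31.72" "etcd-3 192.168.31.73"; do
  set -- $member            # split "name ip" into $1 and $2
  name=$1; ip=$2
  cmd="ETCDCTL_API=3 etcdctl snapshot restore snap.db --name $name --initial-cluster=\"$CLUSTER\" --initial-cluster-token=etcd-cluster --initial-advertise-peer-urls=https://$ip:2380 --data-dir=/var/lib/etcd/default.etcd"
  RESTORE_CMDS="$RESTORE_CMDS$cmd
"
done
printf '%s' "$RESTORE_CMDS"
```

Each printed command must be run on its own node with the same snap.db file present there.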
3. Start kube-apiserver and etcd
systemctl start kube-apiserver
systemctl start etcd