Deploying a Highly Available kube-controller-manager Cluster
Preface
This cluster consists of three nodes. After startup, a competitive election produces a single leader node while the other nodes block in standby. When the leader becomes unavailable, the blocked nodes run a new election to produce a new leader, keeping the service highly available. To secure communications, x509 certificates and private keys are used here: kube-controller-manager uses them both when talking to kube-apiserver's secure port and when serving /metrics on its own secure (https) port.
Create the kube-controller-manager certificate and private key
Create the certificate signing request
cd /opt/k8s/work
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.0.20.11",
    "10.0.20.12",
    "10.0.20.13",
    "node01.k8s.com",
    "node02.k8s.com",
    "node03.k8s.com"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF
- The hosts list contains the IPs and hostnames of all kube-controller-manager nodes (the VIP does not need to be included).
- CN and O are both set to system:kube-controller-manager; the built-in Kubernetes ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.
Generate the certificate and private key
cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*pem
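As an optional check, confirm that the certificate subject carries the CN and O values described above (output formatting varies slightly across openssl versions):
openssl x509 -in kube-controller-manager.pem -noout -subject
# expect O=system:kube-controller-manager and CN=system:kube-controller-manager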
Distribute the generated certificate and private key to all master nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager*.pem root@${node_ip}:/etc/kubernetes/cert/
done
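A quick listing over ssh verifies that the files landed where the service expects them (a minimal check, reusing the same environment variables):
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "ls -l /etc/kubernetes/cert/kube-controller-manager*.pem"
done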
Create and distribute the kubeconfig file
- kube-controller-manager uses a kubeconfig file to access the apiserver.
- The file provides the apiserver address, the embedded CA certificate, and the kube-controller-manager client certificate.
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
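Before distributing the file, you can inspect what was written; kubectl redacts the embedded certificate data in the view output:
kubectl config view --kubeconfig=kube-controller-manager.kubeconfig
# expect cluster "kubernetes", user "system:kube-controller-manager",
# and current-context "system:kube-controller-manager"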
Distribute the kubeconfig to all master nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager.kubeconfig root@${node_ip}:/etc/kubernetes/
done
Create the kube-controller-manager startup file
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-controller-manager.service.template <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=${K8S_DIR}/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --profiling \\
  --cluster-name=kubernetes \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --kube-api-qps=1000 \\
  --kube-api-burst=2000 \\
  --leader-elect \\
  --use-service-account-credentials \\
  --concurrent-service-syncs=2 \\
  --bind-address=0.0.0.0 \\
  #--secure-port=10252 \\
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \\
  #--port=0 \\
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-allowed-names="" \\
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \\
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-username-headers=X-Remote-User \\
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=876000h \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --concurrent-deployment-syncs=10 \\
  --concurrent-gc-syncs=30 \\
  --node-cidr-mask-size=24 \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --pod-eviction-timeout=6m \\
  --terminated-pod-gc-threshold=10000 \\
  --root-ca-file=/etc/kubernetes/cert/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Parameter explanations
- --port=0: disables the insecure (http) port; the --address parameter then has no effect and --bind-address takes effect. In the unit file above this flag is commented out, so the default insecure port 10252 stays open.
- --secure-port, --bind-address=0.0.0.0: listen for https /metrics requests on all network interfaces. With --secure-port commented out, the default secure port 10257 is used, which matches the netstat output below.
- --kubeconfig: path to the kubeconfig file that kube-controller-manager uses to connect to and authenticate against kube-apiserver.
- --authentication-kubeconfig and --authorization-kubeconfig: kube-controller-manager uses them to reach the apiserver in order to authenticate and authorize client requests. kube-controller-manager no longer uses --tls-ca-file to verify the certificates of clients requesting https metrics. If these two kubeconfig parameters are not configured, client connections to kube-controller-manager's https port are rejected (with a permission-denied message).
- --cluster-signing-*-file: sign the certificates created by TLS Bootstrap.
- --experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates.
- --root-ca-file: the CA certificate placed into each container's ServiceAccount, used to verify kube-apiserver's certificate.
- --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key file given by kube-apiserver's --service-account-key-file.
- --service-cluster-ip-range: the Service cluster IP range; it must match the parameter of the same name on kube-apiserver.
- --leader-elect=true: cluster mode; enables leader election. The node elected leader does the work while the other nodes block.
- --controllers=*,bootstrapsigner,tokencleaner: list of controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens.
- --horizontal-pod-autoscaler-*: custom-metrics-related parameters; supports autoscaling/v2alpha1.
- --tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https.
- --use-service-account-credentials=true: each controller inside kube-controller-manager uses its own ServiceAccount to access kube-apiserver.
Render the startup file for each node and distribute it
Note that this template contains no ##NODE_NAME## or ##NODE_IP## placeholders (--bind-address is fixed to 0.0.0.0), so the sed below effectively copies the template unchanged for each node; the step is kept so the workflow matches templates that do substitute per-node values.
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${MASTER_NAMES[i]}/" -e "s/##NODE_IP##/${MASTER_IPS[i]}/" kube-controller-manager.service.template > kube-controller-manager-${MASTER_IPS[i]}.service
done
ls kube-controller-manager*.service
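Optionally, each rendered unit can be sanity-checked before distribution; systemd-analyze flags syntax errors in unit files (it may also warn about paths that only exist on the target nodes):
for node_ip in ${MASTER_IPS[@]}
do
  systemd-analyze verify kube-controller-manager-${node_ip}.service
done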
Distribute to all master nodes
cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp kube-controller-manager-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-controller-manager.service
done
Start the service
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"
done
Check the running status
source /opt/k8s/bin/environment.sh
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status kube-controller-manager|grep Active"
done
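If any node reports something other than active (running), inspect that node's journal, for example:
ssh root@${node_ip} "journalctl -u kube-controller-manager -n 50 --no-pager"
# shows the last 50 log lines of the unit, which normally include the
# reason for a failed start (bad flag, missing file, port in use, ...)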
Check the service ports
for node_ip in ${MASTER_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "netstat -lnpt | grep kube-controlle"
done
Sample output
[root@node01 work]# for node_ip in ${MASTER_IPS[@]}
> do
> echo ">>> ${node_ip}"
> ssh root@${node_ip} "netstat -lnpt | grep kube-controlle"
> done
>>> 10.0.20.11
tcp6 0 0 :::10252 :::* LISTEN 6127/kube-controlle
tcp6 0 0 :::10257 :::* LISTEN 6127/kube-controlle
>>> 10.0.20.12
tcp6 0 0 :::10252 :::* LISTEN 2914/kube-controlle
tcp6 0 0 :::10257 :::* LISTEN 2914/kube-controlle
>>> 10.0.20.13
tcp6 0 0 :::10252 :::* LISTEN 2952/kube-controlle
tcp6 0 0 :::10257 :::* LISTEN 2952/kube-controlle
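Both listening ports match the defaults noted in the parameter explanations (http on 10252, https on 10257). The https port serves /metrics; as a sketch of how to query it, assuming the admin client certificate and key generated earlier in this series exist under /opt/k8s/work (adjust the paths if yours differ):
curl -s --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://127.0.0.1:10257/metrics | head -n 5
# run this on one of the master nodes; the server certificate lists
# 127.0.0.1 in its hosts, so the CA check passes for the loopback address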
Viewing the permissions of kube-controller-manager
The ClusterRole system:kube-controller-manager carries only minimal permissions: it can create resources such as secrets and serviceaccounts. The permissions needed by the individual controllers are instead split out into the per-controller ClusterRoles system:controller:xxx.
[root@node01 work]# kubectl describe clusterrole system:kube-controller-manager
Name: system:kube-controller-manager
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                  Non-Resource URLs  Resource Names  Verbs
  ---------                                  -----------------  --------------  -----
  secrets                                    []                 []              [create delete get update]
  endpoints                                  []                 []              [create get update]
  serviceaccounts                            []                 []              [create get update]
  events                                     []                 []              [create patch update]
  serviceaccounts/token                      []                 []              [create]
  tokenreviews.authentication.k8s.io         []                 []              [create]
  subjectaccessreviews.authorization.k8s.io  []                 []              [create]
  configmaps                                 []                 []              [get]
  namespaces                                 []                 []              [get]
  *.*                                        []                 []              [list watch]
The --use-service-account-credentials=true parameter must be added to kube-controller-manager's startup flags; the main controller then creates a ServiceAccount named XXX-controller for each controller. The built-in ClusterRoleBinding system:controller:XXX grants each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX.
[root@node01 work]# kubectl get clusterrole|grep controller
system:controller:attachdetach-controller 122m
system:controller:certificate-controller 122m
system:controller:clusterrole-aggregation-controller 122m
system:controller:cronjob-controller 122m
system:controller:daemon-set-controller 122m
system:controller:deployment-controller 122m
system:controller:disruption-controller 122m
system:controller:endpoint-controller 122m
system:controller:expand-controller 122m
system:controller:generic-garbage-collector 122m
system:controller:horizontal-pod-autoscaler 122m
system:controller:job-controller 122m
system:controller:namespace-controller 122m
system:controller:node-controller 122m
system:controller:persistent-volume-binder 122m
system:controller:pod-garbage-collector 122m
system:controller:pv-protection-controller 122m
system:controller:pvc-protection-controller 122m
system:controller:replicaset-controller 122m
system:controller:replication-controller 122m
system:controller:resourcequota-controller 122m
system:controller:route-controller 122m
system:controller:service-account-controller 122m
system:controller:service-controller 122m
system:controller:statefulset-controller 122m
system:controller:ttl-controller 122m
system:kube-controller-manager 122m
Take the deployment controller as an example:
[root@node01 work]# kubectl describe clusterrole system:controller:deployment-controller
Name: system:controller:deployment-controller
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                          Non-Resource URLs  Resource Names  Verbs
  ---------                          -----------------  --------------  -----
  replicasets.apps                   []                 []              [create delete get list patch update watch]
  replicasets.extensions             []                 []              [create delete get list patch update watch]
  events                             []                 []              [create patch update]
  pods                               []                 []              [get list update watch]
  deployments.apps                   []                 []              [get list update watch]
  deployments.extensions             []                 []              [get list update watch]
  deployments.apps/finalizers        []                 []              [update]
  deployments.apps/status            []                 []              [update]
  deployments.extensions/finalizers  []                 []              [update]
  deployments.extensions/status      []                 []              [update]
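To confirm that --use-service-account-credentials took effect, you can also list the per-controller ServiceAccounts that the main controller creates in kube-system:
kubectl -n kube-system get serviceaccounts | grep -- '-controller'
# expect entries such as deployment-controller, node-controller,
# attachdetach-controller, matching the system:controller:XXX ClusterRoles above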
Check the controller-manager status via the apiserver
[root@node01 work]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Healthy     ok
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
etcd-1               Healthy     {"health":"true"}
controller-manager now reports Healthy (ok). scheduler still shows Unhealthy here, the same result seen when testing apiserver access earlier; it becomes Healthy once kube-scheduler is deployed.
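To see which node currently holds the leadership, check the leader-election record; in this version the lock is kept as an annotation on the kube-controller-manager endpoints object in kube-system (a minimal check):
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
# the holderIdentity field inside the control-plane.alpha.kubernetes.io/leader
# annotation names the current leader node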