Installing Kubernetes v1.11.2 from Binaries (Chapter 10: Deploying the kube-scheduler Cluster)

This chapter continues the deployment from the previous one.

10. Deploying the kube-scheduler cluster

The cluster consists of two nodes. After startup, a leader is chosen through a competitive election; the other nodes block in standby. If the leader becomes unavailable, the remaining nodes hold a new election to produce a new leader, which keeps the service available.

x509 certificates are used in the following two cases:

a. Communicating with the secure port of kube-apiserver

b. Serving Prometheus-format metrics on port 10251 (note: in this version the metrics port is still plain HTTP, not https; see the flag notes in section 10.4)

10.1 Download the binaries (see Chapter 3)

10.2 Create the kube-scheduler certificate and private key

Create the certificate signing request:

cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.56.20",
      "192.168.56.21"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "4Paradigm"
      }
    ]
}
EOF
  • The hosts list contains the IPs of all kube-scheduler nodes
  • CN is system:kube-scheduler and O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to do its work

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
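To double-check the issued certificate, you can inspect its subject with openssl. The snippet below is a self-contained sketch: it generates a throwaway self-signed certificate with the same subject purely for demonstration; on a real node, run the final command against kube-scheduler.pem instead.

```shell
# Demo only: a throwaway self-signed cert with the same subject fields
# as kube-scheduler-csr.json, so the inspection step can be shown
# without the real CA.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem -days 1 \
  -subj "/C=CN/ST=BeiJing/L=BeiJing/O=system:kube-scheduler/OU=4Paradigm/CN=system:kube-scheduler"

# On a real node, point this at kube-scheduler.pem; CN and O should both
# read system:kube-scheduler.
openssl x509 -noout -subject -in /tmp/demo.pem
```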

10.3 Create and distribute the kubeconfig file

The kubeconfig file contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
  • The certificate and private key created in the previous step, together with the kube-apiserver address, are written into the kubeconfig file
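For reference, a kubeconfig produced by the commands above has roughly the following shape (values abbreviated; with --embed-certs=true the certificate material is embedded as base64 data rather than file paths):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 CA cert>
    server: https://<KUBE_APISERVER>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler
current-context: system:kube-scheduler
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <base64 client cert>
    client-key-data: <base64 client key>
```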

Distribute the kubeconfig file to all master nodes:

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.kubeconfig k8s@${master_ip}:/etc/kubernetes/
done

10.4 Create and distribute the kube-scheduler systemd unit file

cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
  --leader-elect=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target
EOF
  • --address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https
  • --kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate with kube-apiserver
  • --leader-elect=true: cluster mode with leader election enabled; the node elected leader does the scheduling work while the other nodes block in standby
  • User=k8s: run the service as the k8s account

Distribute the systemd unit file to all master nodes:

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp kube-scheduler.service root@${master_ip}:/etc/systemd/system/
done

10.5 Start the kube-scheduler service

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /var/log/kubernetes && chown -R k8s /var/log/kubernetes"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler"
done
  • The log directory must be created before the service starts

10.6 Check the service status

source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh k8s@${master_ip} "systemctl status kube-scheduler|grep Active"
done
  • Confirm the service is in the Active: active (running) state
  • If it is not, check the logs with: journalctl -u kube-scheduler

10.7 View the exported metrics

Run the following commands on a kube-scheduler node:

[root@k8s-m1 template]# sudo netstat -lnpt|grep kube-sche
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      6800/kube-scheduler

[root@k8s-m1 template]# curl -s http://127.0.0.1:10251/metrics |head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
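The Prometheus text format is line-oriented, so ad-hoc checks are easy with standard tools. A self-contained sketch (the sample input mirrors the curl output above; on a real node you would pipe `curl -s http://127.0.0.1:10251/metrics` instead):

```shell
# Extract the value of one metric family from Prometheus text output.
# Sample input stands in for the live endpoint so the snippet runs anywhere.
metrics='# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0'

echo "$metrics" | awk '/^apiserver_audit_event_total /{print $2}'   # prints 0
```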

10.8 Test kube-scheduler cluster high availability

Pick a node, stop its kube-scheduler service, and check whether another node takes over the leader role.
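One way to script this check (a sketch, not part of the original text): in v1.11 the election lock lives in an annotation on the kube-scheduler Endpoints object in kube-system, so the current holder can be extracted as below. The identity k8s-m1_abcdef is a hypothetical example.

```shell
# Pull the holderIdentity field out of the leader-election annotation value.
parse_holder() {
  sed 's/.*"holderIdentity":"\([^"]*\)".*/\1/'
}

# Query the live lock (requires a configured kubectl; defined but not run here).
current_leader() {
  kubectl get endpoints kube-scheduler --namespace=kube-system \
    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}' \
    | parse_holder
}

# Failover test: stop the service on the current leader, e.g.
#   ssh root@<leader-ip> "systemctl stop kube-scheduler"
# then call current_leader again and confirm the identity has changed.
echo '{"holderIdentity":"k8s-m1_abcdef","leaseDurationSeconds":15}' | parse_holder
```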

10.9 View the current leader

kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
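The leader is recorded in the control-plane.alpha.kubernetes.io/leader annotation of the returned object. The output looks roughly like this (holder name and timestamps are illustrative placeholders):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-m1_<id>","leaseDurationSeconds":15,"acquireTime":"<timestamp>","renewTime":"<timestamp>","leaderTransitions":0}'
  name: kube-scheduler
  namespace: kube-system
```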

 

posted on 2018-10-29 16:14 by 冰冰爱学习