The Road to Learning Kubernetes (4): Binary Deployment of the Master Node


  • 1. Deploying the Kubernetes API Server

  • The apiserver exposes the REST API for cluster management, including authentication/authorization, data validation, and cluster state changes (see the sketch after this list).
  • Only the API Server operates on etcd directly;
  • all other components query or modify data through the API Server,
  • so it serves as the data-exchange and communication hub between components.
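As a rough illustration of that REST interface, once the admin client certificate (section 5) and the haproxy VIP (section 4) exist, namespaces can be listed with plain curl. The certificate paths, VIP and port below are this article's values, not anything mandated by Kubernetes:

# a minimal sketch, assuming the admin cert generated in section 5 and the VIP:port from section 4
curl --cacert /opt/kubernetes/ssl/ca.pem \
  --cert /opt/kubernetes/ssl/admin.pem \
  --key /opt/kubernetes/ssl/admin-key.pem \
  https://10.1.1.200:8000/api/v1/namespaces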

Three apiserver nodes:

master01 10.1.1.100

master02 10.1.1.101

master03 10.1.1.102

VIP: 10.1.1.200 (fronting the three masters)

(1) Prepare the packages
# 1. Download the package on the manager node, then push a copy to every node so that all nodes have it ready and nothing needs to be prepared later
cd /usr/local/src
wget --no-check-certificate https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz


#!/bin/bash
for i in 'master01' 'master02' 'master03' 'node01' 'node02' 'node03' 'manager'
do
    scp /usr/local/src/kubernetes-server-linux-amd64.tar.gz root@$i:/usr/local/src
done

# 2. Run the following on every node
cd /usr/local/src/
tar xf kubernetes-server-linux-amd64.tar.gz


# ====================> Extra: files we do not need can be deleted
rm -rf /usr/local/src/kubernetes/kubernetes-src.tar.gz  # the Go source tarball
rm -rf /usr/local/src/kubernetes/server/bin/*.tar  # *.tar files are docker images used by kubeadm; we are not deploying with kubeadm, so they are not needed
rm -rf /usr/local/src/kubernetes/server/bin/*_tag


#=====================> Only the executables (shown in green in the terminal) remain
[root@master01 src]# ll /usr/local/src/kubernetes/server/bin/
total 546000
-rwxr-xr-x 1 root root  48140288 Aug 14  2020 apiextensions-apiserver
-rwxr-xr-x 1 root root  39821312 Aug 14  2020 kubeadm
-rwxr-xr-x 1 root root 120684544 Aug 14  2020 kube-apiserver
-rwxr-xr-x 1 root root 110080000 Aug 14  2020 kube-controller-manager
-rwxr-xr-x 1 root root  44040192 Aug 14  2020 kubectl
-rwxr-xr-x 1 root root 113300248 Aug 14  2020 kubelet
-rwxr-xr-x 1 root root  38383616 Aug 14  2020 kube-proxy
-rwxr-xr-x 1 root root  42962944 Aug 14  2020 kube-scheduler
-rwxr-xr-x 1 root root   1687552 Aug 14  2020 mounter



# 3. Run the following on master01, master02, and master03
cd /usr/local/src/kubernetes
cp server/bin/kube-apiserver /opt/kubernetes/bin/
cp server/bin/kube-controller-manager /opt/kubernetes/bin/
cp server/bin/kube-scheduler /opt/kubernetes/bin/
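
A quick sanity check that the binaries copied above actually run on each master (--version is supported by all three components):

# optional: each should report v1.18.8
for b in kube-apiserver kube-controller-manager kube-scheduler; do
    /opt/kubernetes/bin/$b --version
done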
(2) On master01, create the JSON config used to generate the CSR
# The apiserver accesses etcd as a client, so a server certificate and a client certificate are needed. The etcd server certificate was already generated when etcd was deployed; here we only need to produce the client certificate the apiserver uses to reach etcd.
cd /usr/local/src/ssl
cat > kubernetes-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.1.1.200",
    "10.1.1.100",
    "10.1.1.101",
    "10.1.1.102",
"10.0.0.1",
"kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "ops" } ] } EOF # 注意:10.1.1.200为代理10.1.1.100、10.1.1.101、10.1.1.102三台节点的vip

# 10.0.0.1 is the first IP of the service network (normally the first address of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.0.0.1)
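Once kubectl is working (section 5), the built-in kubernetes Service is an easy way to confirm this, since it always receives the first address of the service range; with --service-cluster-ip-range=10.0.0.0/16 its CLUSTER-IP should therefore be 10.0.0.1:

# sketch: run after the cluster is up and kubectl is configured
kubectl get svc kubernetes -n default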
(3) Generate the kubernetes certificate and private key on master01

This certificate is used by the apiserver as a client certificate when talking to etcd, and also as the apiserver's own serving certificate.

master02 and master03 run an apiserver just like master01, so each of them needs a copy.

In addition, every worker node except the manager node (node01, node02, node03) also needs to reach the apiserver, so they get a copy as well.

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes


cp kubernetes*.pem /opt/kubernetes/ssl/
scp kubernetes*.pem master02:/opt/kubernetes/ssl/
scp kubernetes*.pem master03:/opt/kubernetes/ssl/
scp kubernetes*.pem node01:/opt/kubernetes/ssl/
scp kubernetes*.pem node02:/opt/kubernetes/ssl/
scp kubernetes*.pem node03:/opt/kubernetes/ssl/
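It is also worth checking that every address from kubernetes-csr.json ended up in the certificate's SAN list (openssl is assumed to be available, as it is on a default CentOS install):

# all master IPs, the VIP, 10.0.0.1 and the kubernetes.default.* names should be listed
openssl x509 -in /opt/kubernetes/ssl/kubernetes.pem -noout -text | grep -A 2 'Subject Alternative Name'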
(4) On master01, create the client token file used by kube-apiserver, then send it to master02 and master03
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
d149190dacf50968d58b069745dda2a2

# vim /opt/kubernetes/ssl/bootstrap-token.csv
d149190dacf50968d58b069745dda2a2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Send it to /opt/kubernetes/ssl/ on master02 and master03
scp /opt/kubernetes/ssl/bootstrap-token.csv master02:/opt/kubernetes/ssl/
scp /opt/kubernetes/ssl/bootstrap-token.csv master03:/opt/kubernetes/ssl/
(5) On master01, create the basic username/password authentication file, then send it to master02 and master03
# vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2

scp /opt/kubernetes/ssl/basic-auth.csv master02:/opt/kubernetes/ssl/
scp /opt/kubernetes/ssl/basic-auth.csv master03:/opt/kubernetes/ssl/
(6) Deploy the Kubernetes API Server on master01, then scp the unit file to master02 and master03 and change --bind-address there to each host's own IP. The three unit files (master01, master02, master03) are shown in turn below.
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=10.1.1.100 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.0.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.1.1.100:2379,https://10.1.1.101:2379,https://10.1.1.102:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

EOF
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=10.1.1.101 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.0.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.1.1.100:2379,https://10.1.1.101:2379,https://10.1.1.102:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

EOF
The file above is for master02 (only --bind-address differs).
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=10.1.1.102 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.0.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.1.1.100:2379,https://10.1.1.101:2379,https://10.1.1.102:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

EOF
The file above is for master03.
(7) Start the API Server service on master01, master02, and master03
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

# Check
[root@master01 ssl]# netstat -tulnp |grep kube-apiserver
tcp 0 0 10.1.1.100:6443 0.0.0.0:* LISTEN 1432/kube-apiserver
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 1432/kube-apiserver

The listening ports show that the apiserver listens on 6443 and also on local port 8080; port 8080 is the insecure local port used by kube-scheduler and kube-controller-manager on the same host.
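A quick liveness check against both ports (the local insecure port 8080 answers without credentials, while 6443 rejects unauthenticated requests because --anonymous-auth=false was set):

# the insecure local port should return "ok"
curl http://127.0.0.1:8080/healthz
# the secure port answers TLS but returns 401 without a client certificate
curl -k https://10.1.1.100:6443/healthz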

  • 2. Deploying the Controller Manager service

  • controller-manager consists of a set of controllers; it watches the whole cluster state through the apiserver and keeps the cluster in the desired state.
(1) Run the following on master01, master02, and master03 to deploy the Controller Manager service; the file content is identical on all three
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.0.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

EOF
Note: a detailed explanation is available at https://www.cnblogs.com/linhaifeng/articles/15175197.html
--service-cluster-ip-range
--cluster-cidr
--cluster-cidr is the network segment used by pods in the cluster;
it is not the same segment as the service cluster-IP range.

kube-controller-manager's cluster-cidr is mainly used to populate each Node's Spec.PodCIDR. It may be required by network plugins that depend on that field, such as flannel,
while plugins that do not depend on it can ignore the setting. The parameter also affects cluster networking when a cloud provider is used, for example AWS route configuration.
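Once worker nodes have registered (covered in a later chapter) and kubectl is configured (section 5), the effect of --cluster-cidr can be inspected on each Node object; node01 is simply an example hostname from this environment:

# sketch: each node should be allocated a per-node subnet (a /24 by default) out of 10.2.0.0/16
kubectl get node node01 -o jsonpath='{.spec.podCIDR}{"\n"}'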
(2) Run the following on master01, master02, and master03 to start the Controller Manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

# Verify
netstat -tulnp |grep kube-controlle
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      5112/kube-controlle

The listening port shows that kube-controller-manager listens only on local port 10252; it cannot be reached directly from outside and is accessed through the apiserver.
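Its health endpoint can be probed locally in the same way as the apiserver's:

# on each master, this should return "ok"
curl http://127.0.0.1:10252/healthz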

  • 3. Deploying the Kubernetes Scheduler

  • The scheduler assigns Pods to the cluster's nodes.
  • It watches kube-apiserver for Pods that have not yet been assigned a Node
  • and binds them to nodes according to the scheduling policy.
(1) Run the following on master01, master02, and master03 to deploy the kube-scheduler service; the file content is identical on all three
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

EOF
(2) Run the following on master01, master02, and master03 to start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

# Verify
netstat -tulnp |grep kube-scheduler
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      5172/kube-scheduler 

kube-scheduler's listening port shows the same pattern: it listens only on local port 10251, cannot be reached directly from outside, and is accessed through the apiserver.
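The same local probe applies here:

# on each master, this should return "ok"
curl http://127.0.0.1:10251/healthz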

  • 4. Deploying haproxy + keepalived

(1) Environment
master01  10.1.1.100
master02  10.1.1.101
master03  10.1.1.102
vip:10.1.1.200
(2) Install and deploy haproxy (install on master01, master02, and master03)
#1. Install the package
yum install haproxy -y

#2. Edit the config; identical on all three machines
cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp  # TCP mode, so TLS (https) traffic is passed through
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000


    #use_backend static          if url_static
    #default_backend             app
listen stats  # web stats page
    mode http
    bind *:9443
    stats  uri       /admin/stats
    monitor-uri      /monitoruri
frontend showDoc  # 8000 is the port haproxy listens on
   
    bind *:8000
    use_backend      app  # must match the backend name below

backend app
    balance     roundrobin
    server  app1 10.1.1.100:6443 check
    server  app2 10.1.1.101:6443 check
    server  app3 10.1.1.102:6443 check

EOF

#3. Start haproxy; same on all three machines
systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy

netstat -tunlp |grep haproxy
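
Two quick checks that the proxy is wired up correctly (port 9443 and the monitor URI come from the config above):

# validate the configuration syntax
haproxy -c -f /etc/haproxy/haproxy.cfg
# the monitor URI should answer on each of the three masters
curl http://127.0.0.1:9443/monitoruri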
(3) Install and deploy keepalived (install on master01, master02, and master03; make sure the interface name in the config matches your own)
#1. Install
yum install keepalived -y

#2. Configure; set priority to 100 on master01 and to 99 and 98 on the other two nodes
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   script_user root 
   enable_script_security
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 251
    priority 100
    advert_int 1

    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.1.1.200
    }
} 

EOF

# Additional background

Normally, when the master instance dies the backup takes over as master, but once the original master recovers it preempts the VIP again,
causing a second switchover, which is bad for a busy site. We therefore add nopreempt (non-preemptive mode) to the configuration.
Because this parameter only works when state is BACKUP, the best practice with this HA setup is to set state to BACKUP on both the master and the backup
and let them compete through priority.
The higher the priority number, the higher the precedence; the value ranges from 0 to 254, and MASTER > BACKUP.

#3. Create the check script; identical on all three machines
touch /etc/keepalived/check_port.sh
chmod +x /etc/keepalived/check_port.sh

vim /etc/keepalived/check_port.sh 
#!/bin/bash
count=$(ps -C haproxy --no-header|wc -l)
#1. Check whether haproxy is alive; if not, try to start it
if [ $count -eq 0 ];then
    systemctl start haproxy
    sleep 3
    #2. Wait 3 seconds, then check the haproxy status again
    count=$(ps -C haproxy --no-header|wc -l)
    #3. If haproxy is still not alive, stop keepalived so the VIP fails over, then exit the script
    if [ $count -eq 0 ];then
        systemctl stop keepalived
    fi
fi

#4. Start keepalived; same on all three machines
systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived
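
Once keepalived is running, it is easy to see which node currently holds the VIP (ens33 is the interface name used in the config above; adjust if yours differs):

# run on each master; only the current primary will show 10.1.1.200
ip addr show ens33 | grep 10.1.1.200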
  • 5. Deploying the kubectl command-line tool

kubectl is used for day-to-day management of the K8S cluster. To manage k8s it must talk to the cluster components, which requires a certificate. kubectl is deployed as a separate step precisely because it needs its own client certificate and kubeconfig, whereas kube-apiserver, kube-controller-manager and kube-scheduler above needed no separate client credential setup and could simply be started as services.

Run the following on manager, master01, master02, master03, node01, node02, and node03.

(1) Prepare the binary
cp /usr/local/src/kubernetes/server/bin/kubectl /opt/kubernetes/bin/
(2) Create the admin certificate signing request
cd /usr/local/src/ssl/
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "ops"
    }
  ]
}

EOF
(3) Generate the admin certificate and private key
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes admin-csr.json | cfssljson -bare admin

cp admin*.pem /opt/kubernetes/ssl/
# then scp admin*.pem to the other nodes
(4) Set the cluster parameters (IP and port: use the VIP and the port haproxy listens on)
kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://10.1.1.200:8000
(5) Set the client authentication parameters
kubectl config set-credentials admin \
   --client-certificate=/opt/kubernetes/ssl/admin.pem \
   --embed-certs=true \
   --client-key=/opt/kubernetes/ssl/admin-key.pem
(6) Set the context parameters
kubectl config set-context kubernetes \
   --cluster=kubernetes \
   --user=admin
(7) Set the default context
kubectl config use-context kubernetes

Steps (4) through (7) generate the config file under the home directory (~/.kube/config), which kubectl uses for all subsequent communication with the apiserver. This also means that to use kubectl on another node, this file must be copied there.

cat ~/.kube/config 
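
For example, to reuse this admin kubeconfig on node01 (an example target; any node that already has kubectl works the same way):

# a minimal sketch: copy the kubeconfig generated by steps (4)-(7) to another node
ssh node01 "mkdir -p /root/.kube"
scp ~/.kube/config node01:/root/.kube/config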
(8) Use the kubectl tool
kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

 
