[kubernetes] Deploying a k8s Cluster from Binaries

0. Preface

This article deploys a k8s cluster with three masters and three worker nodes from binaries. The worker nodes share servers with the masters, so only three servers are used in total. Master high availability is implemented with haproxy + keepalived. In a real production environment, masters and worker nodes should run on separate servers.

All certificates used in this article are self-signed. If a CA authority is available, certificates issued by that CA should be used instead.

1. Environment

  • docker version: 19.03.15
  • k8s version: 1.21.14

IP             Hostname  OS          Specs  Notes
192.168.8.21   k8s-1     CentOS 7.6  4C4G   etcd, master, haproxy, keepalived, node
192.168.8.22   k8s-2     CentOS 7.6  4C4G   etcd, master, haproxy, keepalived, node
192.168.8.23   k8s-3     CentOS 7.6  4C4G   etcd, master, node
192.168.8.24   -         -           -      Virtual IP (VIP) of haproxy + keepalived
169.169.0.1    -         -           -      Cluster IP of the master Service (virtual, cluster-internal)
169.169.0.100  -         -           -      IP of the cluster DNS service (virtual, cluster-internal)

2. Generating certificates with openssl

To enable CA-based security for etcd and k8s, CA certificates are required. If the organization runs a unified CA authority, simply use certificates issued by it. If not, the security configuration can be completed with a self-signed CA certificate.

Both the etcd and k8s certificates are issued from a CA root certificate. This article uses one shared CA root certificate for both k8s and etcd.

# Generate the private key file ca.key
openssl genrsa -out ca.key 2048
# Generate the self-signed root certificate ca.crt from the private key
# /CN is the master's hostname or IP address
# days is the certificate's validity period
openssl req -x509 -new -nodes -key ca.key -subj "/CN=192.168.8.21" -days 36500 -out ca.crt

Place the two generated CA files under the /etc/kubernetes/pki directory.
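Before distributing the CA files, a quick sanity check is worthwhile. The helper below is not part of the original procedure, just a small openssl sketch (check_ca is a hypothetical name):

```shell
# Sanity-check a CA pair: print subject/validity, then prove key and cert match.
# Usage: check_ca /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key
check_ca() {
    openssl x509 -noout -subject -dates -in "$1"
    # the two fingerprints below must be identical,
    # otherwise the private key does not belong to the certificate
    openssl x509 -noout -pubkey -in "$1" | sha256sum
    openssl rsa -pubout -in "$2" 2>/dev/null | sha256sum
}
```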

3. Deploying a secure etcd HA cluster

etcd.service:

[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
EnvironmentFile=/home/apps/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd
Restart=always

[Install]
WantedBy=multi-user.target

etcd_ssl.cnf

[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[ req_distinguished_name ]


[ v3_req ]

basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[ alt_names ]
IP.1 = 192.168.8.21
IP.2 = 192.168.8.22
IP.3 = 192.168.8.23

Create the etcd server certificate:

openssl genrsa -out etcd_server.key 2048
openssl req -new -key etcd_server.key -config etcd_ssl.cnf -subj "/CN=etcd-server" -out etcd_server.csr
openssl x509 -req -in etcd_server.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_server.crt

Create the etcd client certificate:

openssl genrsa -out etcd_client.key 2048
openssl req -new -key etcd_client.key -config etcd_ssl.cnf -subj "/CN=etcd-client" -out etcd_client.csr
openssl x509 -req -in etcd_client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile etcd_ssl.cnf -out etcd_client.crt
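Before wiring the files into etcd.conf, both certificates should verify against the root CA, and the server certificate must carry the three node IPs as SANs. A small sketch (check_cert is a hypothetical helper name):

```shell
# Verify a leaf certificate against the CA and show its SANs.
# Usage: check_cert /etc/kubernetes/pki/ca.crt etcd_server.crt
check_cert() {
    openssl verify -CAfile "$1" "$2"      # must print "<file>: OK"
    openssl x509 -noout -text -in "$2" | grep -A1 "Subject Alternative Name"
}
```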

Example etcd.conf (for etcd1 on 192.168.8.21; adjust ETCD_NAME and the IP addresses on the other two nodes):

ETCD_NAME=etcd1
ETCD_DATA_DIR=/home/apps/etcd/data

ETCD_CERT_FILE=/home/apps/etcd/pki/etcd_server.crt
ETCD_KEY_FILE=/home/apps/etcd/pki/etcd_server.key
ETCD_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_CLIENT_CERT_AUTH=true
ETCD_LISTEN_CLIENT_URLS=https://192.168.8.21:2379
ETCD_ADVERTISE_CLIENT_URLS=https://192.168.8.21:2379

ETCD_PEER_CERT_FILE=/home/apps/etcd/pki/etcd_server.crt
ETCD_PEER_KEY_FILE=/home/apps/etcd/pki/etcd_server.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/pki/ca.crt
ETCD_LISTEN_PEER_URLS=https://192.168.8.21:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://192.168.8.21:2380

ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.8.21:2380,etcd2=https://192.168.8.22:2380,etcd3=https://192.168.8.23:2380"
ETCD_INITIAL_CLUSTER_STATE=new

Verify:

etcdctl --cacert=/etc/kubernetes/pki/ca.crt --cert=/home/apps/etcd/pki/etcd_client.crt --key=/home/apps/etcd/pki/etcd_client.key --endpoints=https://192.168.8.21:2379,https://192.168.8.22:2379,https://192.168.8.23:2379 endpoint health
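As a convenience (not from the original article), etcdctl can also read its connection options from environment variables, which keeps repeated checks short:

```shell
# etcdctl picks these variables up automatically, so the flags can be dropped
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/pki/ca.crt
export ETCDCTL_CERT=/home/apps/etcd/pki/etcd_client.crt
export ETCDCTL_KEY=/home/apps/etcd/pki/etcd_client.key
export ETCDCTL_ENDPOINTS=https://192.168.8.21:2379,https://192.168.8.22:2379,https://192.168.8.23:2379

# With the environment set, the checks shorten to:
#   etcdctl endpoint health
#   etcdctl endpoint status -w table
#   etcdctl member list
```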

4. Deploying the masters

4.1 Download the Kubernetes binaries

  1. Download the binaries from https://github.com/kubernetes/kubernetes/releases. The download links are listed in each release's changelog, e.g. for k8s 1.21. Only the server binary and node binary packages are needed.
  2. Extract the archive and move the executables to /usr/bin:
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
find ./* -perm 755 -type f -exec mv {} /usr/bin \;
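The find -perm trick copies whatever happens to be executable; installing an explicit list is more predictable. A sketch (install_k8s_bins is a hypothetical helper, not from the original):

```shell
# Install a fixed list of control-plane binaries instead of matching on permissions
install_k8s_bins() {
    local src="$1" dest="$2" bin
    for bin in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl; do
        install -m 755 "$src/$bin" "$dest/"
    done
}
# Usage on a master:
#   install_k8s_bins kubernetes/server/bin /usr/bin
```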

Main files after extraction:

File                                Description
apiextensions-apiserver             Extension API server for custom resource objects
kubeadm                             Command-line tool for installing Kubernetes clusters
kube-aggregator                     API aggregation server
kube-apiserver                      kube-apiserver binary
kube-apiserver.docker_tag           Docker image tag of kube-apiserver
kube-apiserver.tar                  Docker image file of kube-apiserver
kube-controller-manager             kube-controller-manager binary
kube-controller-manager.docker_tag  Docker image tag of kube-controller-manager
kube-controller-manager.tar         Docker image file of kube-controller-manager
kubectl                             kubectl command-line client
kubelet                             kubelet binary
kube-proxy                          kube-proxy binary
kube-proxy.docker_tag               Docker image tag of kube-proxy
kube-proxy.tar                      Docker image file of kube-proxy
kube-scheduler                      kube-scheduler binary
kube-scheduler.docker_tag           Docker image tag of kube-scheduler
kube-scheduler.tar                  Docker image file of kube-scheduler

4.2 Deploying the kube-apiserver service

  1. Prepare the certificate config. Edit master_ssl.cnf:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-1
DNS.6 = k8s-2
DNS.7 = k8s-3
IP.1 = 169.169.0.1
IP.2 = 192.168.8.21
IP.3 = 192.168.8.22
IP.4 = 192.168.8.23
IP.5 = 192.168.8.24

DNS.5 ~ DNS.7 are the hostnames of the three servers; configure them in /etc/hosts separately.

IP.1 is the Cluster IP of the master Service, IP.2 ~ IP.4 are the IP addresses of the apiserver hosts, and IP.5 is the load balancer's IP, which may be a virtual IP.

  2. Generate the certificate files, then place apiserver.crt and apiserver.key under /etc/kubernetes/pki/:
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -config master_ssl.cnf -subj "/CN=192.168.8.21" -out apiserver.csr
# ca.crt and ca.key are the two files created in section 2, "Generating certificates with openssl"
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 36500 -extensions v3_req -extfile master_ssl.cnf -out apiserver.crt
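Most apiserver TLS failures trace back to a missing SAN, so it is worth confirming that every DNS name and IP from master_ssl.cnf made it into the signed certificate (show_sans is a hypothetical helper name):

```shell
# List the SANs embedded in a signed certificate
# Usage: show_sans /etc/kubernetes/pki/apiserver.crt
show_sans() {
    openssl x509 -noout -text -in "$1" | grep -A1 "Subject Alternative Name"
}
```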
  3. Create a systemd unit for kube-apiserver: /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=always

[Install]
WantedBy=multi-user.target

The content of /etc/kubernetes/apiserver:

KUBE_API_ARGS="--insecure-port=0 \
--secure-port=6443 \
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
--client-ca-file=/etc/kubernetes/pki/ca.crt \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa-key.pem \
--apiserver-count=3 --endpoint-reconciler-type=master-count \
--etcd-servers=https://192.168.8.21:2379,https://192.168.8.22:2379,https://192.168.8.23:2379 \
--etcd-cafile=/etc/kubernetes/pki/ca.crt \
--etcd-certfile=/home/apps/etcd/pki/etcd_client.crt \
--etcd-keyfile=/home/apps/etcd/pki/etcd_client.key \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=30000-32767 \
--allow-privileged=true \
--logtostderr=false --log-dir=/var/log/kubernetes --v=0"

Main parameters:

Parameter                   Description
--insecure-port             HTTP port, default 8080; 0 disables HTTP access
--secure-port               HTTPS port, default 6443
--tls-cert-file             Path to the serving certificate
--tls-private-key-file      Path to the private key of the serving certificate
--apiserver-count           Number of apiserver instances
--endpoint-reconciler-type  Required companion of --apiserver-count; set to master-count
--allow-privileged          Whether containers may run in privileged mode
--logtostderr               Whether to log to stderr, default true
--log-dir                   Log directory; must be created beforehand
--v                         Log verbosity
  4. Create sa.pub and sa-key.pem with cfssl:
cat<<EOF > sa-csr.json 
{
    "CN":"sa",
    "key":{
        "algo":"rsa",
        "size":2048
    },
    "names":[
        {
            "C":"CN",
            "L":"BeiJing",
            "ST":"BeiJing",
            "O":"k8s",
            "OU":"System"
        }
    ]
}
EOF

# cfssl and cfssljson can be found and downloaded on GitHub
cfssl gencert -initca sa-csr.json | cfssljson -bare sa -

openssl x509 -in sa.pem -pubkey -noout > sa.pub
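The two service-account flags only need an RSA key pair, so if cfssl is not at hand, plain openssl works as well. Note that the file names then differ from the cfssl output, so --service-account-signing-key-file would point at sa.key instead of sa-key.pem:

```shell
# Generate the service-account signing key and its public half with openssl alone
openssl genrsa -out sa.key 2048              # for --service-account-signing-key-file
openssl rsa -in sa.key -pubout -out sa.pub   # for --service-account-key-file
```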
  5. Start the apiserver on all three servers:
systemctl daemon-reload
systemctl start kube-apiserver && systemctl enable kube-apiserver
systemctl status kube-apiserver
  6. Create the client certificate, then place the generated client.crt and client.key under /etc/kubernetes/pki:
openssl genrsa -out client.key 2048
# /CN identifies the user name of the client that connects to the apiserver
openssl req -new -key client.key -subj "/CN=admin" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 36500
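One detail worth knowing when choosing the -subj value: the apiserver maps /CN to the user name and each /O to a group, so a client in the built-in system:masters group gets cluster-admin rights via RBAC. A self-contained sketch (the admin/system:masters subject and the client2 file names are illustrations, not from the original):

```shell
# A CSR whose subject carries both a user (/CN) and a group (/O)
openssl genrsa -out client2.key 2048
openssl req -new -key client2.key -subj "/CN=admin/O=system:masters" -out client2.csr
# Inspect the identity the apiserver will see once the CSR is signed
openssl req -noout -subject -in client2.csr
```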

4.3 Create the kubeconfig that clients use to connect to the apiserver

Create one kubeconfig file shared by kube-controller-manager, kube-scheduler, kubelet, and kube-proxy as their configuration for connecting to the apiserver; kubectl will use the same file later.

apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://192.168.8.24:9443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: admin
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default

Save the kubeconfig file under the /etc/kubernetes directory.

4.4 Deploying kube-controller-manager

  1. Create the systemd unit: /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  2. Edit /etc/kubernetes/controller-manager:
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=169.169.0.0/16 \
--service-account-private-key-file=/etc/kubernetes/pki/apiserver.key \
--root-ca-file=/etc/kubernetes/pki/ca.crt \
--log-dir=/var/log/kubernetes --logtostderr=false --v=0"
  3. Start:
systemctl daemon-reload
systemctl start kube-controller-manager && systemctl enable kube-controller-manager
systemctl status kube-controller-manager

4.5 Deploying kube-scheduler

  1. Create the systemd unit: /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  2. Edit /etc/kubernetes/scheduler:
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--leader-elect=true \
--logtostderr=false --log-dir=/var/log/kubernetes --v=0"
  3. Start:
systemctl daemon-reload
systemctl start kube-scheduler && systemctl enable kube-scheduler
systemctl status kube-scheduler

5. Deploying a highly available load balancer with HAProxy and keepalived

5.1 Deploying haproxy

  1. Edit haproxy.cfg:
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/lib/haproxy.pid
    maxconn     4096
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor    except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  kube-apiserver
    mode                 tcp
    bind                 *:9443
    option               tcplog
    default_backend      kube-apiserver

listen stats
    mode                 http
    bind                 *:8888
    stats auth           admin:password
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /stats
    log                  127.0.0.1 local3 err

backend kube-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-master1 192.168.8.21:6443 check
    server  k8s-master2 192.168.8.22:6443 check
    server  k8s-master3 192.168.8.23:6443 check

Key points in haproxy.cfg:

  • frontend: the protocol and port haproxy listens on; TCP on port 9443
  • backend: the addresses of the three apiservers. The balance field sets the load-balancing policy; roundrobin means round-robin
  • listen stats: the status-monitoring service; stats uri sets the URL path of the stats page
  2. Edit docker-compose.yaml:
version: "3"
services:
  haproxy:
    image: "haproxytech/haproxy-debian:2.3"
    container_name: k8s-haproxy
    network_mode: host
    restart: always
    volumes:
      - /home/apps/haproxy/conf/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  3. Start haproxy on 192.168.8.21 and 192.168.8.22. Afterwards, open port 8888 in a browser and log in with the configured credentials to reach the stats web page:
docker-compose up -d

5.2 Deploying keepalived

  1. Edit keepalived.conf on 192.168.8.21:
! Configuration File for keepalived

global_defs {
   router_id LVS_1
}

vrrp_script checkhaproxy
{
    script "/usr/bin/check-haproxy.sh"
    interval 2
    weight -30
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1

    virtual_ipaddress {
        192.168.8.24/16 dev ens192
    }

    authentication {
        auth_type PASS
        auth_pass password
    }

    track_script {
        checkhaproxy
    }
}

Key configuration:

  • vrrp_instance VI_1: the name of the VRRP virtual-router instance
  • state: set to MASTER on one node and to BACKUP on the others
  • interface: the network interface the VIP will be bound to
  • virtual_ipaddress: the VIP address
  • authentication: credentials for VRRP peer authentication
  • track_script: the haproxy health-check script
  2. Edit check-haproxy.sh and remember to make it executable:
#!/bin/bash

count=`netstat -apn | grep 9443 | wc -l`

if [ $count -gt 0 ]; then
    exit 0
else
    exit 1
fi
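An optional, slightly stricter variant of the script: netstat -apn also matches established client connections and PIDs containing 9443, whereas counting only LISTEN sockets avoids a false "healthy" result when haproxy is actually down. A sketch:

```shell
# Write a check script that only counts LISTEN sockets on the given port
cat > check-haproxy.sh <<'EOF'
#!/bin/bash
port="${1:-9443}"
if ss -ltn | grep -q ":${port} "; then
    exit 0
else
    exit 1
fi
EOF
chmod +x check-haproxy.sh
# then: install -m 755 check-haproxy.sh /usr/bin/check-haproxy.sh
```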
  3. Edit keepalived.conf on 192.168.8.22:
! Configuration File for keepalived

global_defs {
   router_id LVS_2
}

vrrp_script checkhaproxy
{
    script "/usr/bin/check-haproxy.sh"
    interval 2
    weight -30
}

vrrp_instance VI_1 {

    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1

    virtual_ipaddress {
        192.168.8.24/16 dev ens192
    }

    authentication {
        auth_type PASS
        auth_pass password
    }

    track_script {
        checkhaproxy
    }
}
  4. Start the keepalived docker containers on 8.21 and 8.22:
docker run -d --name k8s-keepalived --restart=always --net=host --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW -v ${PWD}/keepalived.conf:/container/service/keepalived/assets/keepalived.conf -v ${PWD}/check-haproxy.sh:/usr/bin/check-haproxy.sh osixia/keepalived:2.0.20 --copy-service
  5. Test: run curl -v -k https://192.168.8.24:9443. Output like the following means the backend service is reachable through the VIP:
* About to connect() to 192.168.8.24 port 9443 (#0)
*   Trying 192.168.8.24...
* Connected to 192.168.8.24 (192.168.8.24) port 9443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* Server certificate:
*       subject: CN=192.168.8.21
*       start date: Jan 07 10:20:31 2023 GMT
*       expire date: Dec 14 10:20:31 2122 GMT
*       common name: 192.168.8.21
*       issuer: CN=192.168.8.21
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.8.24:9443
> Accept: */*
> 
< HTTP/1.1 401 Unauthorized
< Cache-Control: no-cache, private
< Content-Type: application/json
< Date: Sat, 07 Jan 2023 17:30:30 GMT
< Content-Length: 165
< 
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
* Connection #0 to host 192.168.8.24 left intact

6. Deploying the nodes

Each node needs docker, kubelet, and kube-proxy. After it joins the k8s cluster, the CNI network plugin, the DNS add-on, and so on also have to be deployed.

6.1 Deploying kubelet

  1. Edit /usr/lib/systemd/system/kubelet.service:
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  2. Edit /etc/kubernetes/kubelet. Change hostname-override to each node's own IP:
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--config=/etc/kubernetes/kubelet.config \
--hostname-override=192.168.8.21 \
--network-plugin=cni \
--logtostderr=false --log-dir=/var/log/kubernetes --v=0"

Main parameters:

Parameter            Description
--kubeconfig         Configuration for connecting to the apiserver; may be the same kubeconfig as controller-manager's. On a new node, remember to copy the client certificate files, i.e. ca.crt, client.key, and client.crt
--config             kubelet configuration file holding parameters that all nodes can share
--hostname-override  This node's name in the cluster; defaults to the hostname
--network-plugin     Network plugin type; the CNI network plugin is recommended
  3. Edit /etc/kubernetes/kubelet.config:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
cgroupDriver: systemd
clusterDNS: ["169.169.0.100"]
clusterDomain: cluster.local
authentication:
  anonymous:
    enabled: true

Main parameters:

Parameter       Description
address         IP address the service listens on
port            Port the service listens on, default 10250
cgroupDriver    cgroup driver; default cgroupfs, optionally systemd (it should match docker's cgroup driver)
clusterDNS      IP address of the cluster DNS service
clusterDomain   DNS domain suffix for services
authentication  Whether anonymous access is allowed and whether webhook authentication is used
  4. Start kubelet on each node and enable it at boot:
systemctl daemon-reload
systemctl start kubelet && systemctl enable kubelet
systemctl status kubelet

6.2 Deploying kube-proxy

  1. Edit /usr/lib/systemd/system/kube-proxy.service:
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=always

[Install]
WantedBy=multi-user.target
  2. Edit /etc/kubernetes/proxy. Change hostname-override to each node's own IP:
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--hostname-override=192.168.8.21 \
--proxy-mode=iptables \
--logtostderr=false --log-dir=/var/log/kubernetes --v=0"
  3. Start:
systemctl daemon-reload
systemctl start kube-proxy && systemctl enable kube-proxy
systemctl status kube-proxy

6.3 Deploying the Calico CNI plugin

  1. On a master, use kubectl to list the nodes that have auto-registered with k8s. Since the master enforces HTTPS client authentication, kubectl also needs a client certificate to connect; the kube-controller-manager kubeconfig file can be reused directly:
kubectl --kubeconfig=/etc/kubernetes/kubeconfig  get nodes

# Output
NAME           STATUS     ROLES    AGE     VERSION
192.168.8.21   NotReady   <none>   7m52s   v1.21.14
192.168.8.22   NotReady   <none>   7m53s   v1.21.14
192.168.8.23   NotReady   <none>   7m54s   v1.21.14
  2. Deploy calico:
# calico.yaml download: https://docs.projectcalico.org/manifests/calico.yaml
# For an offline/intranet deployment, pull the calico/kube-controllers, calico/cni, calico/node, and k8s.gcr.io/pause docker images in advance; the exact versions are listed in calico.yaml. The required k8s.gcr.io/pause image version can be found in the pod logs
# The pause image must be imported on the nodes first
kubectl --kubeconfig=/etc/kubernetes/kubeconfig  apply -f calico.yaml

# Check the node status; all nodes should become Ready
kubectl --kubeconfig=/etc/kubernetes/kubeconfig  get nodes
# Output
NAME           STATUS   ROLES    AGE   VERSION
192.168.8.21   Ready    <none>   90m   v1.21.14
192.168.8.22   Ready    <none>   90m   v1.21.14
192.168.8.23   Ready    <none>   90m   v1.21.14

# Check the pod status; READY should be 1/1 for all
kubectl --kubeconfig=/etc/kubernetes/kubeconfig  get pods -n kube-system
# Output
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-846d7f49d8-4n572   1/1     Running   0          36m
calico-node-j857l                          1/1     Running   0          36m
calico-node-k8txx                          1/1     Running   0          36m
calico-node-n2cdd                          1/1     Running   0          36m

7. Deploying the CoreDNS service

nil

Appendix

Token authentication for nodes

Besides the CA-certificate-based mechanism, k8s also offers a simple authentication mechanism based on HTTP tokens. The client components still talk to the API Server over HTTPS, but without CA digital certificates. Compared with CA certificates this is much less secure and is not recommended for production use.

Token-based authentication is configured as follows:

  1. Create the file token_auth_file, each line holding a token (used like a password), a user name, and a UID, and place it in a suitable directory. It is a plain-text file, with tokens and user names stored unencrypted.
cat /etc/kubernetes/token_auth_file
admin,admin,1
system,system,2
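Since the first column is effectively a password, it should be random rather than a word like admin. One way to generate an entry (AUTH_FILE is a hypothetical variable; point it at /etc/kubernetes/token_auth_file in practice):

```shell
# Append a line with a random 32-hex-character token to the auth file
AUTH_FILE="${AUTH_FILE:-token_auth_file}"
TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "${TOKEN},admin,1" >> "$AUTH_FILE"
```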
  2. Add the startup parameter to the apiserver:
--token-auth-file=/etc/kubernetes/token_auth_file

Troubleshooting

apiserver fails to start; the log says "service-account-issuer is a required flag"

  • Cause: since 1.20 the service-account-issuer parameter must be provided
  • Solution: generate the sa certificate and sa.pub (the same commands as in section 4.2, step 4)

Nodes are still NotReady after deploying calico

  1. Check the pod status:
kubectl --kubeconfig=/etc/kubernetes/kubeconfig  get pod -n kube-system
# Output
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-846d7f49d8-4n572   0/1     Pending    0          12m
calico-node-j857l                          0/1     Init:0/3   0          12m
calico-node-k8txx                          0/1     Init:0/3   0          12m
calico-node-n2cdd                          0/1     Init:0/3   0          12m

# Inspect the calico-node pod's events
kubectl --kubeconfig=/etc/kubernetes/kubeconfig describe pod calico-node-j857l -n kube-system
# The events show that pulling the k8s.gcr.io/pause:3.4.1 image failed; that registry is
# unreachable from some networks, so pull a mirrored copy first and then re-tag it
docker pull 'registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1'
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 'k8s.gcr.io/pause:3.4.1'

posted @ 2023-01-08 17:38 花酒锄作田