k8s-1.19.16 Binary Installation

Kubernetes version: 1.19.16. Download: https://storage.googleapis.com/kubernetes-release/release/v1.19.16/kubernetes-server-linux-amd64.tar.gz

etcd version: 3.4.13

Docker version: 19.03.14

 

Hostname      IP address     Installed components
k8s-master1   10.10.22.20    apiserver, controller-manager, scheduler, etcd, docker
k8s-node1     10.10.22.121   apiserver, controller-manager, scheduler, etcd, docker
k8s-node2     10.10.22.211   apiserver, controller-manager, scheduler, etcd, docker

 

I. Configure the base environment

1. Set the hostname

k8s-master1 (10.10.22.20)

hostnamectl set-hostname k8s-master1

k8s-node1 (10.10.22.121)

hostnamectl set-hostname k8s-node1

k8s-node2 (10.10.22.211)

hostnamectl set-hostname k8s-node2

 

2. Configure the /etc/hosts file

k8s-master1 (10.10.22.20)

[root@k8s-master1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1  k8s-master1 master1
10.10.22.20 k8s-master1 master1
10.10.22.121 k8s-node1 node1
10.10.22.211 k8s-node2 node2

k8s-node1 (10.10.22.121)

[root@k8s-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1  k8s-node1 node1
10.10.22.20 k8s-master1 master1
10.10.22.121 k8s-node1 node1
10.10.22.211 k8s-node2 node2

k8s-node2 (10.10.22.211)

[root@k8s-node2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   k8s-node2 node2
10.10.22.211 k8s-node2 node2
10.10.22.20 k8s-master1 master1
10.10.22.121 k8s-node1 node1

3. Synchronize time

All nodes (install the tools first, then sync):

yum install -y chrony ntpdate
ntpdate cn.pool.ntp.org

 

4. Configure passwordless SSH login:

ssh-keygen -t rsa        # press Enter at every prompt; leave the passphrase empty
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1    # installs the local SSH public key into the matching account on the remote host

k8s-master1 (10.10.22.20)

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node2

k8s-node1 (10.10.22.121)

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node2

k8s-node2 (10.10.22.211)

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node2

5. Disable the firewall, SELinux, and swap; clear iptables rules

All nodes:

systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/g' /etc/selinux/config
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

 

6. Load kernel modules

All nodes:

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack
modprobe nf_conntrack_ipv4
modprobe br_netfilter
modprobe overlay

7. Enable the module auto-load service

All nodes:

cat << EOF > /etc/modules-load.d/k8s-modules.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
nf_conntrack_ipv4
br_netfilter
overlay
EOF

Restart the service and enable it at boot:

systemctl enable systemd-modules-load
systemctl restart systemd-modules-load
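
To confirm the modules actually loaded, a quick check such as the following should list them:

lsmod | grep -E 'ip_vs|nf_conntrack|br_netfilter|overlay'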

8. Kernel tuning

All nodes:

cat <<EOF > /etc/sysctl.d/kubernetes.conf
# enable packet forwarding (needed for vxlan)
net.ipv4.ip_forward=1
# let iptables process bridged traffic
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-arptables=1
# disable tcp_tw_recycle; it conflicts with NAT and breaks connectivity
net.ipv4.tcp_tw_recycle=0
# do not reuse TIME-WAIT sockets for new TCP connections
net.ipv4.tcp_tw_reuse=0
# upper limit of the socket listen() backlog
net.core.somaxconn=32768
# max tracked connections; default is nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_max=1000000
# avoid using swap; allow it only when the system is about to OOM
vm.swappiness=0
# max number of memory map areas a process may have
vm.max_map_count=655360
# max number of file handles the kernel can allocate
fs.file-max=6553600
# TCP keepalive settings for long-lived connections
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
EOF

Apply the configuration:

sysctl -p /etc/sysctl.d/kubernetes.conf
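
To spot-check that the settings took effect, query a couple of them back; both should report 1:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables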

9. Install base packages

All nodes:

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ \
make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel \
autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate

II. Install the Docker environment

All nodes:

sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-19.03.14 docker-ce-cli-19.03.14 containerd.io

Enable and start Docker:

systemctl enable docker && systemctl start docker

Configure the registry mirror and cgroup driver:

cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Restart Docker:

systemctl daemon-reload
systemctl restart docker
systemctl status docker
docker info | grep Cgroup

III. Set up the etcd cluster

1. Install the certificate-signing tools

mkdir /root/k8s-hxu
cd /root/k8s-hxu
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
cp cfssl_1.6.1_linux_amd64 /usr/local/bin/cfssl
cp cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljson
cp cfssl-certinfo_1.6.1_linux_amd64 /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl
chmod +x /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl-certinfo
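
A quick sanity check that the tools are installed and executable:

cfssl version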

 

2. Configure the CA certificate

Create the CA certificate signing request (CSR) file:

[root@k8s-master1 k8s-hxu]# vim ca-csr.json 
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "bj",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "8760h"
  }
}

Create the CA config JSON file:

[root@k8s-master1 k8s-hxu]# vim ca-config.json 
{
  "signing": {
      "default": {
          "expiry": "8760h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "8760h"
          }
      }
  }
}

 

Generate the CA certificate and key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Note: CN (Common Name) is the field kube-apiserver extracts from a certificate as the request username (User Name); browsers use it to check whether a website is legitimate. O (Organization) is the field kube-apiserver extracts as the group (Group) the requesting user belongs to.
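
To check which CN and O values a certificate actually carries (the fields kube-apiserver reads as user and group), the subject can be inspected with openssl, for example:

openssl x509 -in ca.pem -noout -subject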

3. Generate the etcd certificate

[root@k8s-master1 k8s-hxu]# cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.22.20",
    "10.10.22.121",
    "10.10.22.211",
    "10.10.22.222", \\222、223、192*预留出来,作扩容用
    "10.10.22.223",
    "192.168.7.50"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "bj",
    "O": "k8s",
    "OU": "system"
  }]
}

Note: 10.10.22.222, 10.10.22.223, and 192.168.7.50 are spare entries reserved for future cluster expansion (JSON does not allow inline comments).

Generate the etcd certificate files:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

 

 

4. Deploy the etcd cluster

Download: https://github.com/etcd-io/etcd/releases/tag/v3.4.13

tar -zxf etcd-v3.4.13-linux-amd64.tar.gz
chown root:root etcd-v3.4.13-linux-amd64/etcd*
cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin

Copy the two binaries (etcd and etcdctl) to /usr/local/bin/ on the other etcd nodes:

scp -p /usr/local/bin/etcd* node1:/usr/local/bin/
scp -p /usr/local/bin/etcd* node2:/usr/local/bin/

Create the configuration file:

[root@k8s-master1 k8s-hxu]# cd /root/k8s-hxu/
[root@k8s-master1 k8s-hxu]# vim etcd.conf

Contents of etcd.conf:

#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.22.20:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.22.20:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.22.20:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.22.20:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.22.20:2380,etcd2=https://10.10.22.121:2380,etcd3=https://10.10.22.211:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Explanation of the etcd.conf fields:

ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster (peer) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one

Create the systemd service file:

[root@k8s-master1 k8s-hxu]# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

Copy the etcd certificates into the directories referenced by the unit file above:

[root@k8s-master1 k8s-hxu]# mkdir -p /etc/etcd/ssl
[root@k8s-master1 k8s-hxu]# cp *.pem /etc/etcd/ssl/
[root@k8s-master1 k8s-hxu]# ll /etc/etcd/ssl/
total 16
-rw-------. 1 root root 1679 Mar 24 07:07 ca-key.pem
-rw-r--r--. 1 root root 1298 Mar 24 07:07 ca.pem
-rw-------. 1 root root 1679 Mar 24 07:07 etcd-key.pem
-rw-r--r--. 1 root root 1444 Mar 24 07:07 etcd.pem
[root@k8s-master1 k8s-hxu]# cp etcd.conf /etc/etcd/
[root@k8s-master1 k8s-hxu]# ll /etc/etcd/
total 4
-rw-r--r--. 1 root root 514 Mar 24 07:08 etcd.conf
drwxr-xr-x. 2 root root  74 Mar 24 07:07 ssl

Copy the service unit file:
[root@k8s-master1 k8s-hxu]# cp etcd.service /usr/lib/systemd/system

Sync the files to node1 and node2:

for i in k8s-node1 k8s-node2;do rsync -vaz etcd.conf $i:/etc/etcd/;done
for i in k8s-node1 k8s-node2;do rsync -vaz *.pem $i:/etc/etcd/ssl/;done
for i in k8s-node1 k8s-node2;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

Adjust the configuration on the other etcd nodes. (Note: systemd EnvironmentFile values do not strip trailing # comments, so keep any comments on their own lines.)

[root@k8s-node1 ssl]# cat /etc/etcd/etcd.conf 
#[Member]
# change ETCD_NAME and every URL below to this node's own name and IP
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.22.121:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.22.121:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.22.121:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.22.121:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.22.20:2380,etcd2=https://10.10.22.121:2380,etcd3=https://10.10.22.211:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Do the same on k8s-node2 with ETCD_NAME="etcd3" and IP 10.10.22.211.

Start etcd and enable it at boot on all three nodes (start them at roughly the same time; each member waits for the others to join):

systemctl daemon-reload   # required after adding or editing unit files
systemctl start etcd
systemctl enable etcd

Check the etcd cluster health; this can be run on any etcd node:

[root@k8s-master1 system]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.22.20:2379,https://10.10.22.121:2379,https://10.10.22.211:2379  endpoint health
+---------------------------+--------+-------------+-------+
|         ENDPOINT          | HEALTH |    TOOK     | ERROR |
+---------------------------+--------+-------------+-------+
| https://10.10.22.121:2379 |   true | 589.54713ms |       |
| https://10.10.22.211:2379 |   true |  597.8429ms |       |
|  https://10.10.22.20:2379 |   true | 806.22191ms |       |
+---------------------------+--------+-------------+-------+
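
As a further check, the member list can be queried with the same TLS flags (any single endpoint is enough):

/usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.22.20:2379 member list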

After etcd has started successfully, change ETCD_INITIAL_CLUSTER_STATE from "new" to "existing" in etcd.conf on every node, so that future restarts rejoin the existing cluster instead of bootstrapping a new one.
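
A one-line sketch for making that change on each node (it assumes etcd.conf is exactly as written above):

sed -i 's/ETCD_INITIAL_CLUSTER_STATE="new"/ETCD_INITIAL_CLUSTER_STATE="existing"/' /etc/etcd/etcd.conf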

 

 

IV. Install the Kubernetes components

1. Download and unpack the server tarball

[root@k8s-master1 ~]# tar -zxf kubernetes-server-linux-amd64.tar.gz -C k8s-hxu/
[root@k8s-master1 ~]# cd k8s-hxu/kubernetes/server/bin
[root@k8s-master1 bin]# ls
apiextensions-apiserver  kube-apiserver             kube-controller-manager             kubectl     kube-proxy.docker_tag  kube-scheduler.docker_tag
kubeadm                  kube-apiserver.docker_tag  kube-controller-manager.docker_tag  kubelet     kube-proxy.tar         kube-scheduler.tar
kube-aggregator          kube-apiserver.tar         kube-controller-manager.tar         kube-proxy  kube-scheduler         mounter
[root@k8s-master1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@k8s-master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-node1:/usr/local/bin/
[root@k8s-master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-node2:/usr/local/bin/

2. Deploy the kube-apiserver component

The TLS bootstrapping mechanism

  Once the master apiserver has TLS authentication enabled, the kubelet on every node must use a valid certificate signed by the apiserver's CA to talk to the apiserver. With many nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster more complex.

  To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet connects as a low-privilege user and requests a certificate from the apiserver, and the apiserver signs the kubelet's certificate dynamically.

  A bootstrap is a bit of pre-set configuration loaded at startup, as in many other systems (the Linux boot process, for example), and it can be used to bring up a specified environment. The Kubernetes kubelet loads such a configuration file at startup; its content looks like this:

 

apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

 

# How TLS bootstrapping actually works

  1. Role of TLS
    TLS encrypts the communication and prevents eavesdropping by a man in the middle; moreover, if the certificate is not trusted, a client cannot even establish a connection to the apiserver, let alone request anything from it.

  2. Role of RBAC
    Once TLS solves the transport problem, authorization is handled by RBAC (other authorization models, such as ABAC, can also be used). RBAC defines which APIs a user or group (the subject) may request. Combined with TLS, the apiserver in practice reads the client certificate's CN field as the username and the O field as the group.

The above means: first, to talk to the apiserver a client must present a certificate signed by the apiserver CA, which establishes trust and the TLS connection; second, the certificate's CN and O fields supply the user and group that RBAC needs.

# kubelet first-start flow
TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect. But how does the kubelet connect the very first time, when it has no certificate yet?

The apiserver configuration references a token.csv file containing a preset user. That user's token, together with the CA certificate trusted by the apiserver, is written into the bootstrap.kubeconfig used by the kubelet. On the first request, the kubelet uses bootstrap.kubeconfig to establish TLS with the apiserver and presents the preset user's token to declare its RBAC identity.

On first start the kubelet may report 401 Unauthorized against the apiserver. This is because, by default, the kubelet declares its identity with the preset token from bootstrap.kubeconfig and then creates a CSR request; but until we do something about it, that user has no permissions at all, including permission to create CSR requests. So a ClusterRoleBinding must be created that binds the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper, allowing it to submit CSR requests. This is demonstrated later when installing the kubelet.


Create the apiserver CSR request file, substituting your own machines' IPs:

[root@k8s-master1 k8s-hxu]# vim kube-apiserver-csr.json
[root@k8s-master1 k8s-hxu]# cat kube-apiserver-csr.json 
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.22.20",
    "10.10.22.121",
    "10.10.22.211",
    "192.168.7.13",
    "192.168.7.50",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "bj",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

 

Note: if the hosts field is non-empty, it must list every IP or domain authorized to use this certificate. Since this certificate will be used by the whole Kubernetes master group, include the IP of every master node, plus the first IP of the service network (usually the first address of the range given to kube-apiserver via --service-cluster-ip-range, here 10.255.0.1).

Generate the certificate:

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
2022/05/03 15:44:50 [INFO] generate received request
2022/05/03 15:44:50 [INFO] received CSR
2022/05/03 15:44:50 [INFO] generating key: rsa-2048
2022/05/03 15:44:53 [INFO] encoded CSR
2022/05/03 15:44:54 [INFO] signed certificate with serial number 460095770187943391876520391572312079292537098932
[root@k8s-master1 k8s-hxu]# ll -t
total 57264
-rw-r--r--. 1 root      root          1269 May  3 15:44 kube-apiserver.csr
-rw-------. 1 root      root          1679 May  3 15:44 kube-apiserver-key.pem
-rw-r--r--. 1 root      root          1586 May  3 15:44 kube-apiserver.pem

Generate the token file (format: token,username,UID,group; the token is random, so yours will differ):

[root@k8s-master1 k8s-hxu]# cat << EOF > token.csv
> $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
[root@k8s-master1 k8s-hxu]# cat token.csv 
bcfcffee295879be73c4f29bfa7bcbea,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Create the apiserver configuration file, substituting your own IPs:

[root@k8s-master1 k8s-hxu]# cat kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.10.22.20 \
  --secure-port=6443 \
  --advertise-address=10.10.22.20 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.10.22.20:2379,https://10.10.22.121:2379,https://10.10.22.211:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

Flag notes:

--logtostderr: enable logging to stderr
--v: log verbosity level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--insecure-port=0: disable the default insecure port 8080
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: enable Node and RBAC authorization (node self-management)
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

Create the service unit file:

[root@k8s-master1 k8s-hxu]# cat kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Create the apiserver configuration directory and copy the certificates and config files into it:

[root@k8s-master1 k8s-hxu]# mkdir -p /etc/kubernetes/ssl
[root@k8s-master1 k8s-hxu]# cp ca*.pem kube-apiserver*.pem /etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# ll /etc/kubernetes/ssl
total 16
-rw-------. 1 root root 1679 May  3 17:10 ca-key.pem
-rw-r--r--. 1 root root 1298 May  3 17:10 ca.pem
-rw-------. 1 root root 1679 May  3 17:10 kube-apiserver-key.pem
-rw-r--r--. 1 root root 1586 May  3 17:10 kube-apiserver.pem
[root@k8s-master1 k8s-hxu]# cp token.csv kube-apiserver.conf /etc/kubernetes/
[root@k8s-master1 k8s-hxu]# cp kube-apiserver.service /usr/lib/systemd/system

Now copy the generated files to the corresponding directories on the other two master nodes (do not miss any):

rsync -vaz token.csv kube-apiserver.conf k8s-node1:/etc/kubernetes/
rsync -vaz token.csv kube-apiserver.conf k8s-node2:/etc/kubernetes/
rsync -vaz kube-apiserver*.pem ca*.pem k8s-node1:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver*.pem ca*.pem k8s-node2:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver.service k8s-node1:/usr/lib/systemd/system/
rsync -vaz kube-apiserver.service k8s-node2:/usr/lib/systemd/system/

Note: in kube-apiserver.conf on k8s-node1 and k8s-node2, change the IP addresses to each node's own IP; only two fields need editing, e.g. on k8s-node1: --bind-address=10.10.22.121 --advertise-address=10.10.22.121
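
A sketch of making those edits remotely from master1 (it assumes the conf was synced over unmodified):

for pair in k8s-node1:10.10.22.121 k8s-node2:10.10.22.211; do
  host=${pair%%:*}; ip=${pair##*:}
  # rewrite only the two address flags, leaving the etcd endpoints untouched
  ssh $host "sed -i 's#--bind-address=10.10.22.20#--bind-address=$ip#;s#--advertise-address=10.10.22.20#--advertise-address=$ip#' /etc/kubernetes/kube-apiserver.conf"
done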

Reload the unit files and start the service:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Test access with curl:

[root@k8s-node1 system]# curl --insecure https://10.10.22.20:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

The 401 above is the expected state; we have not authenticated yet.

There is an open issue here that has not been resolved:

The apiservers on node1 and node2 came up fine, but on the master it refuses to "start" cleanly, even though it is in fact running. systemctl status kube-apiserver reports it as not started, yet netstat -anpt | grep 6443 shows the port is listening. The logs reveal nothing wrong either, and curl --insecure https://10.10.22.20:6443/ returns the output above, which also looks fine. To rule out a problem with the service unit, I extracted the kube-apiserver command line and ran it by hand; it also ran normally. The command was:

/usr/local/bin/kube-apiserver --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --anonymous-auth=false --bind-address=10.10.22.20 --secure-port=6443 --advertise-address=10.10.22.20 --insecure-port=0 --authorization-mode=Node,RBAC --runtime-config=api/all=true --enable-bootstrap-token-auth --service-cluster-ip-range=10.255.0.0/16 --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-issuer=https://kubernetes.default.svc.cluster.local --etcd-cafile=/etc/etcd/ssl/ca.pem --etcd-certfile=/etc/etcd/ssl/etcd.pem --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem --etcd-servers=https://10.10.22.20:2379,https://10.10.22.121:2379,https://10.10.22.211:2379 --enable-swagger-ui=true --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/kube-apiserver-audit.log --event-ttl=1h --alsologtostderr=true --logtostderr=false --log-dir=/var/log/kubernetes --v=4

Partial screenshot of the apiserver's running state on the master (image not reproduced here).

Looks fine, doesn't it? It may be a mistake in the service unit, or an environment-variable problem. Unknown for now; setting it aside.

3. Deploy the kubectl component

kubectl is the client tool for operating on k8s resources: create, delete, update, query, and so on.
How does kubectl know which cluster to connect to? It needs a kubeconfig file such as /etc/kubernetes/admin.conf; kubectl accesses k8s resources according to that file, which records the target k8s cluster and the certificates to use.

One option is to set the KUBECONFIG environment variable:
[root@k8s-master1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl will then load KUBECONFIG automatically to decide which cluster's resources to manage.

Another option, the one kubeadm tells us to use after initializing a cluster, is:
[root@k8s-master1 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config
kubectl will then load /root/.kube/config when operating on k8s resources.

If KUBECONFIG is set it takes precedence; if the variable is not set, kubectl falls back to /root/.kube/config to decide which k8s cluster's resources to manage.
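
A quick way to confirm which kubeconfig kubectl is actually using (a sketch; run it after the kubeconfig below is in place):

echo $KUBECONFIG                 # if non-empty, this file takes precedence
kubectl config current-context   # shows the context from the effective config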

Create the CSR request file:

[root@k8s-master1 k8s-hxu]# cat admin-csr.json 
{
  "CN": "admin",
  "hosts": [
    "10.10.22.20",
    "10.10.22.121",
    "10.10.22.211",
    "10.10.22.212",
    "10.10.22.213"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "bj",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}

Notes: kube-apiserver later uses RBAC to authorize requests from clients such as kubelet, kube-proxy, and Pods. kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants permission to call all kube-apiserver APIs. Here O sets this certificate's group to system:masters; because the certificate is signed by the CA, authentication passes, and because the certificate's group is the pre-authorized system:masters, the holder is granted access to all APIs. This admin certificate is what will later generate the administrator's kubeconfig file. RBAC is now the generally recommended way to do role and permission control in Kubernetes; Kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding step will fail.
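
Once kubectl is configured (below), that built-in binding can be inspected directly as a read-only check; its subject is the group system:masters, bound to the ClusterRole cluster-admin:

kubectl get clusterrolebinding cluster-admin -o wide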

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2022/05/03 22:56:25 [INFO] generate received request
2022/05/03 22:56:25 [INFO] received CSR
2022/05/03 22:56:25 [INFO] generating key: rsa-2048
2022/05/03 22:56:35 [INFO] encoded CSR
2022/05/03 22:56:35 [INFO] signed certificate with serial number 275271424541391890619684190042564344216629561367
[root@k8s-master1 k8s-hxu]# ll
total 57288
-rw-r--r--. 1 root      root          1082 May  3 22:56 admin.csr
-rw-r--r--. 1 root      root           325 May  3 22:55 admin-csr.json
-rw-------. 1 root      root          1679 May  3 22:56 admin-key.pem
-rw-r--r--. 1 root      root          1444 May  3 22:56 admin.pem

 

[root@k8s-master1 k8s-hxu]# cp admin*.pem /etc/kubernetes/ssl/

Create the kubeconfig file

kubeconfig is kubectl's configuration file. It contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate. (If a later step reports that the kubeconfig path cannot be found, copy the file to the expected path by hand; if there is no error, ignore this.)

1. Set the cluster parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.22.20:6443 --kubeconfig=kube.config
Cluster "kubernetes" set.
[root@k8s-master1 k8s-hxu]# cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrRENDQW5pZ0F3SUJBZ0lVQ0VkOUtzWmhDMmI4d04rZHpXSHU0aVVhTDNjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4Q3pBSkJnTlZCQWNUQW1KcQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJuTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1ClpYUmxjekFlRncweU1qQXpNak15TWpBMU1EQmFGdzB5TXpBek1qTXlNakExTURCYU1HQXhDekFKQmdOVkJBWVQKQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xxYVc1bk1Rc3dDUVlEVlFRSEV3SmlhakVNTUFvR0ExVUVDaE1EYXpoegpNUTh3RFFZRFZRUUxFd1p6ZVhOMFpXMHhFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM3KzNvN0QwME40K3JJRGxDMTlaeDdUOUs2SjFwaGsyZnQKUmthZW1DQ2RVUnR5b1NsRERsMnlPWEhkWE91UVZ2QkhYZGF4VHF0S3M4emF6SE8ranprWHN5dThLZ0VBeTlWVAowN0ZsOTF3eHo5clRSTXU2dmtKRW9BcytIRUVDUWdyaU04S3V0ZDFuQjh4ODNRa1d0bTNxM0J4RmprRnJBVHozCjB0MlliV29wajBybkdoamV1L3pRV2hKL2tFcXBsVG5pS0Q4RFM4WUh2eVJDYUc2RUpGR3lQYk5wNytNbWp4RloKZE1HM0YzcXNVRlFZYVBQcDU3aTZ5T2tsYUZOZEFKVmZoTnFXK2oxVEJGckNoZWhrUUNLYnJuVFBVRllZTUowawpncVViV09qOVd0dW1JR21RdVU4N1Rtc014V3duWU9qQ21QQ0NnUHoyMW1lcitOSitoNFczQWdNQkFBR2pRakJBCk1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTeEI3SUwKRjdaRlUwTHA4Wm9nUHRPN1VML0hiakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBS2JYcWV1VzlLdlhMTlUyRApPajQra21vdFprd1dtVXkxeVh1ZHpZRFlPNXBlejdFdWNpazhqRDB1bGI5dXRRZjF4RXY5VzB3V0ZHelVuSXVrCnJmNzgwVFI4ZENFZlpROVN2ZUw2eTNMNlVMa3dEbjFtVnZOb2phT25MZzE3amZWVC92ZENZbGwzbGM2SHAzUnEKNmJBb0ZVK09EQnd5Z05xbC9VV1NLWnNnL3pMZlZhc25UVTBqY2hOekxDV1ZxWTdxZ2RyVUg2RUwrWXlFNzdxYgpCSkxEVkVxN0U5cUpEc1pHTCt1MUJvWElUU3ZITjRJZFRSdzBpVnoxNmZPVWs2Mkx3UjBFRzFhamprSnUzcnRZCnp1VEVKSnBjcFFVdjMwR1lEanZuanVac0kyTTZEc2tSeUdIRTBpRDR6RWttaFUwVy9BUjVRYmVWT2hlakdOOSsKQU91TnJBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.22.20:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

2. Set the client authentication parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
User "admin" set.
[root@k8s-master1 k8s-hxu]# cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrRENDQW5pZ0F3SUJBZ0lVQ0VkOUtzWmhDMmI4d04rZHpXSHU0aVVhTDNjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4Q3pBSkJnTlZCQWNUQW1KcQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJuTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1ClpYUmxjekFlRncweU1qQXpNak15TWpBMU1EQmFGdzB5TXpBek1qTXlNakExTURCYU1HQXhDekFKQmdOVkJBWVQKQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xxYVc1bk1Rc3dDUVlEVlFRSEV3SmlhakVNTUFvR0ExVUVDaE1EYXpoegpNUTh3RFFZRFZRUUxFd1p6ZVhOMFpXMHhFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHClNJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUM3KzNvN0QwME40K3JJRGxDMTlaeDdUOUs2SjFwaGsyZnQKUmthZW1DQ2RVUnR5b1NsRERsMnlPWEhkWE91UVZ2QkhYZGF4VHF0S3M4emF6SE8ranprWHN5dThLZ0VBeTlWVAowN0ZsOTF3eHo5clRSTXU2dmtKRW9BcytIRUVDUWdyaU04S3V0ZDFuQjh4ODNRa1d0bTNxM0J4RmprRnJBVHozCjB0MlliV29wajBybkdoamV1L3pRV2hKL2tFcXBsVG5pS0Q4RFM4WUh2eVJDYUc2RUpGR3lQYk5wNytNbWp4RloKZE1HM0YzcXNVRlFZYVBQcDU3aTZ5T2tsYUZOZEFKVmZoTnFXK2oxVEJGckNoZWhrUUNLYnJuVFBVRllZTUowawpncVViV09qOVd0dW1JR21RdVU4N1Rtc014V3duWU9qQ21QQ0NnUHoyMW1lcitOSitoNFczQWdNQkFBR2pRakJBCk1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTeEI3SUwKRjdaRlUwTHA4Wm9nUHRPN1VML0hiakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBS2JYcWV1VzlLdlhMTlUyRApPajQra21vdFprd1dtVXkxeVh1ZHpZRFlPNXBlejdFdWNpazhqRDB1bGI5dXRRZjF4RXY5VzB3V0ZHelVuSXVrCnJmNzgwVFI4ZENFZlpROVN2ZUw2eTNMNlVMa3dEbjFtVnZOb2phT25MZzE3amZWVC92ZENZbGwzbGM2SHAzUnEKNmJBb0ZVK09EQnd5Z05xbC9VV1NLWnNnL3pMZlZhc25UVTBqY2hOekxDV1ZxWTdxZ2RyVUg2RUwrWXlFNzdxYgpCSkxEVkVxN0U5cUpEc1pHTCt1MUJvWElUU3ZITjRJZFRSdzBpVnoxNmZPVWs2Mkx3UjBFRzFhamprSnUzcnRZCnp1VEVKSnBjcFFVdjMwR1lEanZuanVac0kyTTZEc2tSeUdIRTBpRDR6RWttaFUwVy9BUjVRYmVWT2hlakdOOSsKQU91TnJBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.10.22.20:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQvakNDQXVhZ0F3SUJBZ0lVTURlWSt6cGZ6clQxL2RCRzVtbFdENGhvU0Jjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4Q3pBSkJnTlZCQWNUQW1KcQpNUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJuTjVjM1JsYlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1ClpYUmxjekFlRncweU1qQTFNRE14TkRVeU1EQmFGdzB5TXpBMU1ETXhORFV5TURCYU1HWXhDekFKQmdOVkJBWVQKQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xxYVc1bk1Rc3dDUVlEVlFRSEV3SmlhakVYTUJVR0ExVUVDaE1PYzNsegpkR1Z0T20xaGMzUmxjbk14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVPTUF3R0ExVUVBeE1GWVdSdGFXNHdnZ0VpCk1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRRHVRRjFwZStVNUR2Z0YycitSblZnU2RDS3UKelUzSUNrdm9TTlQyWW03RnFrczhLd09ON296MTZzaXQ4WFhNcEUvYmRRbzZnS1dIODB0S0t3eGVLMS9NQ0cvaQpSK2VWNDJac0dUUTFVTm1MZHF4M0IyR3ZrTjJUbEFsRFc1RVFYSXRBRWZQeXBmNis2NTJENllObkNQZ2ZYYUE4CmJPaWJBWkJMMmEreEhCMlB1bGNjTWZTQWZUWElsL1RkVmRrZW9kK1lYQXlZRi9vb3dCSHRjdHcyU0YzbElZcU8KR094cXFCL2E5RzBZYVRJalZaSkdNSGlIeExORkpqcytBQkRYMW42WlJ3T001UmRYdXhsMEZhNFNEbTFpSTFYNQpKanZIUWlZd1hQZjI0MFRJeTA1eHNNczlNUFBrRUNXS0JFWndHVWhrRXJDRHIwMDk3TXlISGovdWRzMXBBZ01CCkFBR2pnYWt3Z2FZd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCUTRBTXpFQkNKWDBUQnJyQUJjc0tiTApDYkNVL2pBZkJnTlZIU01FR0RBV2dCU3hCN0lMRjdaRlUwTHA4Wm9nUHRPN1VML0hiakFuQmdOVkhSRUVJREFlCmh3UUtDaFlVaHdRS0NoWjVod1FLQ2hiVGh3UUtDaGJVaHdRS0NoYlZNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUIKQVFDMi96dVNLVFN1NTE3T1FFNkoycjhZWWYvRVVHUlpMMmJWdXBwcGRtRTlScVVlbS85a0lFSUx3MjFtZEkzawpUY0RWTGxUNE5VaGY2YThma0pTRi93QXNRZ1A3RkV3d3IyU2lzdytiZFlVaTdvMFBWMmxCMUtGMVlEaE9LRk5pCmVHdVBTYUorcGRCWVdSVUNUcCt3ZFdYeWtWNWVkbFlXYm1KM1FMK2FZREc3cXFRMXBWSnVtYTB0VkxxQnpLUXAKekt0UHozeDBIVndlaDEvNEdySDVSaGlzR1EvU0VWcWxvbFJseUpQZElBVm9EK211VDNQT1F6TDhDZkhxVXJiOAptRERYc0NXK1Z1UDZTaFhocmY2bGZCcnljK01RQlh5alFwQ0JNL2E1L2U1RC9neWpmcW5STGVXNXZCY29qNGpSCkhMaVNleTVTN3N2Nml1cGRReUpPdmVrRQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBN2tCZGFYdmxPUTc0QmRxL2taMVlFblFpcnMxTnlBcEw2RWpVOW1KdXhhcExQQ3NECmplNk05ZXJJcmZGMXpLUlAyM1VLT29DbGgvTkxTaXNNWGl0ZnpBaHY0a2ZubGVObWJCazBOVkRaaTNhc2R3ZGgKcjVEZGs1UUpRMXVSRUZ5TFFCSHo4cVgrdnV1ZGcrbURad2o0SDEyZ1BHem9td0dRUzltdnNSd2RqN3BYSERIMApnSDAxeUpmMDNWWFpIcUhmbUZ3TW1CZjZLTUFSN1hMY05raGQ1U0dLamhqc2FxZ2YydlJ0R0dreUkxV1NSakI0Cmg4U3pSU1k3UGdBUTE5WittVWNEak9VWFY3c1pkQld1RWc1dFlpTlYrU1k3eDBJbU1GejM5dU5FeU10T2NiREwKUFREejVCQWxpZ1JHY0JsSVpCS3dnNjlOUGV6TWh4NC83bmJOYVFJREFRQUJBb0lCQVFEakVwbDFOYzVNeVlWKwpIdlRDVmhKZzFDdFNLdjVkRCtNMDZtVit4bVlKSXJzK0Iwa0Y5enlHRFZWaTQyV0F1NElaQ2IzTDhGelQ2Ly93CkdvTlpKVUhTZHFBY0xLZitaWk55cDdyb3Jid0pmZnYySGlUdWJjV2hLRkNEMER1OE9sZkZvdGE4aDVUNlpobmsKWmFVRmlMampQQnJDUEpLZFdhb3JnTGhBdHlrOWwyYzI0UjY0Vk5NUzA1SmZtQll5T0JIMWxxVHZhNDArY0tMVAovaHJYTkJjZUQ4SjNmZ05sQ1U1c1RBK2x2Q0xkb044OVRHS29FckhiaW01N1daUXdEemRtQ2JYN2ZzN0RIbm9VCjhPa1dNRHdycXB3ZitkaDVDZlU3TCtUT01aZGJpSmhCTjFNOHY5SG1yQmpIRG9qbzNuY2pRbEN6UEk2YjNBNzAKNXdhK3ZoemhBb0dCQVBkcnBlYzdXMUNGQmtjTUVNNHM1RWRmRU9nSzYrWC9BSFhVQys0aUk2YmM4NGRBSG1NcApERXJRUnlRb2FubDFIb25zSjR0Y2tUbU9pdFNPaXArSjFXMVZmWS91RytQSFFzUTB0aHN0ZnE0TVZBdjUwb3RVClcrYjdtcUl3aXFrRDV5ZGVmOFdMRFhxTnJyQlUwa2tuejdpMjhaTWI4RGFubnV4ckM5N05rR2R0QW9HQkFQYUQKVXF5eUIvekNCNHZ6Y09mYUROdlZmQTFTS1B4YjdEWnB2cjJwZExNbmRzcXc3Q2V1NjdyOE1OcmNhK2NiOVE5TApPWFlCMytJRTJyQ2ZnZGFjMm5oV00zUjN3Y0hyOFh6WmZTeEpZeDhlQ3QyWEFRZHVmTmM5TW9rUUhncjVGT1NNCmpLL3dUdkNlTDdBaGxTeExaWVZvRnFHUEVnYWtiTVE2VDM5Z1YxUnRBb0dCQU12NGh0RFY4ald6TkxXbGtNVW4KNVJtaG1jSnlIbitCZGRPdGVCaGROSjcvVUJTVUcza01BZ0k3S2lyNDFxNUNpMmFRdFJrQ3V3YUVLSmVLMjJVaQpzRHh1V2hFcDd2d2M3VUhyWXFXTkgvNUVVNVY3NHNMU1RPRmpVdHVhd1BVTkxxY2FGS082T3VacG56WG05MlV2CjJPTWlqbzBFWDBmdmIramZadTNLOGQwUkFvR0FHakhTTXkrbjBaLzhsVTZGRE40S3g4RmpzVGF0ekVNb1VvL2kKQ04xYzNUeXdUdEdHQnFGN3d4N1JRakJ4OXRqdHJYWmM0TUZLUFFZdkJ0MnNPbFhva1NqM3hzU0Mva3hJR1BBegpjT1ZMZHg4R0lJM1BPaTd2YlIrL292am5lRnNIY1ZIT0VWUUR6MlcvdzRPT0Ntcm9tc2g0dnlvb3pEUGtxdVZYClZUMnppZkVDZ1lFQTZ3NmhWYjN0TGR6TlNQRHhEV01FQ2JLNCs1WllVOWtRdnFzZVoyZ3NMeVlXcHYvWmcyeXEKWmxKaDhsS1ZxZVUra1I3R1ZScGhhbUJEaDNqVkVyS3lIRk1DWGJ5OG5idEZQNkJERERaWDQ5cElTWi9paUc2bAo2azVMdmhodENraC9uTnY3UmZJcmNuU1FnVkxxZEVnV1dHSW9JUDVCY1VqYkNnSWJjUzIxbGFBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

 

3. Set the context parameters
In the output above the contexts field is null. Now create a context so that the admin user is tied to the kubernetes cluster; once a context exists, setting the current context (the current-context field) tells kubectl which user-to-cluster pairing to use.

[root@k8s-master1 k8s-hxu]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.

After this, the contexts field in the file is populated:

[root@k8s-master1 k8s-hxu]# cat kube.config
...
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
...

4. Set the default context; the admin user can now access the cluster:

[root@k8s-master1 k8s-hxu]# kubectl config use-context kubernetes --kubeconfig=kube.config
Switched to context "kubernetes".

[root@k8s-master1 k8s-hxu]# cat kube.config
...
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
...

kubectl needs to load this kube.config file when it runs. How do we make it do that? Run the following:

[root@k8s-master1 k8s-hxu]# mkdir ~/.kube
[root@k8s-master1 k8s-hxu]# cp kube.config ~/.kube/config
[root@k8s-master1 k8s-hxu]# kubectl get node
No resources found

Although we can now view resources, we do not yet have full permissions; grant the needed authorization below.

5. Authorize the kubernetes certificate user to access the kubelet API

[root@k8s-master1 k8s-hxu]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Notes: a clusterrolebinding is not namespaced and applies to every namespace. The clusterrolebinding is named kube-apiserver:kubelet-apis, and it binds the user kubernetes to the cluster's built-in clusterrole system:kubelet-api-admin.
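
To read the binding back and confirm it, something like the following; the subjects should list the user kubernetes and the roleRef should point at system:kubelet-api-admin:

kubectl get clusterrolebinding kube-apiserver:kubelet-apis -o yaml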

6. Copy the kubeconfig to the other two masters (k8s-node1 and k8s-node2) as well; create the .kube directory there first, then copy:

rsync -vaz /root/.kube/config k8s-node1:/root/.kube/
rsync -vaz /root/.kube/config k8s-node2:/root/.kube/

Cluster resources can now be viewed and operated on from the other two masters as well.

Check the cluster component status (scheduler and controller-manager report Unhealthy because they have not been deployed yet):

[root@k8s-master1 k8s-hxu]# kubectl cluster-info
Kubernetes master is running at https://10.10.22.20:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master1 k8s-hxu]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-1               Healthy     {"health":"true"}                                                                             
etcd-0               Healthy     {"health":"true"}                                                                             
etcd-2               Healthy     {"health":"true"}   
[root@k8s-master1 k8s-hxu]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   6h5m

4. Configure kubectl command completion

[root@k8s-master1 k8s-hxu]# yum install -y bash-completion
[root@k8s-master1 k8s-hxu]# source /usr/share/bash-completion/bash_completion 
[root@k8s-master1 k8s-hxu]# source <(kubectl completion bash)
[root@k8s-master1 k8s-hxu]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@k8s-master1 k8s-hxu]# source '/root/.kube/completion.bash.inc'
[root@k8s-master1 k8s-hxu]# source $HOME/.bash_profile
[root@k8s-master1 k8s-hxu]# echo 'source <(kubectl completion bash)' >>~/.bashrc   # source the completion script from ~/.bashrc
[root@k8s-master1 k8s-hxu]# kubectl completion bash >/etc/bash_completion.d/kubectl   # add the completion script to /etc/bash_completion.d
[root@k8s-master1 k8s-hxu]# echo 'alias k=kubectl' >>~/.bashrc   # if you alias kubectl, extend shell completion to cover the alias
[root@k8s-master1 k8s-hxu]# echo 'complete -F __start_kubectl k' >>~/.bashrc

5. Deploy the kube-controller-manager component

Create the CSR request file:

[root@k8s-master1 k8s-hxu]# cat kube-controller-manager-csr.json 
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "10.10.22.20",
      "10.10.22.121",
      "10.10.22.211",
      "10.10.22.212"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "bj",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

Note: the hosts list contains every kube-controller-manager node IP. CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in Kubernetes ClusterRoleBinding system:kube-controller-manager grants the permissions kube-controller-manager needs to work.

Generate the certificate:

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2022/05/04 17:05:44 [INFO] generate received request
2022/05/04 17:05:44 [INFO] received CSR
2022/05/04 17:05:44 [INFO] generating key: rsa-2048
2022/05/04 17:05:50 [INFO] encoded CSR
2022/05/04 17:05:50 [INFO] signed certificate with serial number 634405258506460654936064158656554432021296814038

Create the kube-controller-manager kubeconfig

1. Set the cluster parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.22.20:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.

2. Set the client authentication parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.

3. Set the context parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.
[root@k8s-master1 k8s-hxu]# cat kube-controller-manager.kubeconfig
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager

4. Set the current context

[root@k8s-master1 k8s-hxu]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".
[root@k8s-master1 k8s-hxu]# cat kube-controller-manager.kubeconfig
current-context: system:kube-controller-manager

Create the configuration file kube-controller-manager.conf:

[root@k8s-master1 k8s-hxu]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

Create the service unit file:

[root@k8s-master1 k8s-hxu]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Start the service

First copy the relevant certificate files into the target directories, then sync them to the other two master nodes:
[root@k8s-master1 k8s-hxu]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# cp kube-controller-manager.kubeconfig kube-controller-manager.conf /etc/kubernetes/
[root@k8s-master1 k8s-hxu]# cp kube-controller-manager.service /usr/lib/systemd/system

Copy the files to the other two master nodes:

[root@k8s-master1 k8s-hxu]# rsync -vaz kube-controller-manager*.pem k8s-node1:/etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-controller-manager*.pem k8s-node2:/etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-node1:/etc/kubernetes/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf k8s-node2:/etc/kubernetes/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-controller-manager.service k8s-node1:/usr/lib/systemd/system/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-controller-manager.service k8s-node2:/usr/lib/systemd/system/

Start:

[root@k8s-master1 k8s-hxu]# systemctl daemon-reload
[root@k8s-master1 k8s-hxu]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master1 k8s-hxu]# systemctl start kube-controller-manager
[root@k8s-master1 k8s-hxu]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-05-04 17:30:47 CST; 16s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 6795 (kube-controller)
    Tasks: 12
   Memory: 21.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─6795 /usr/local/bin/kube-controller-manager --port=0 --secure-port=10252 --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig ...

May 04 17:30:47 k8s-master1 systemd[1]: Started Kubernetes Controller Manager.

6. Deploy the kube-scheduler component

Create the CSR request:

[root@k8s-master1 k8s-hxu]# cat kube-scheduler-csr.json 
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "10.10.22.20",
      "10.10.22.121",
      "10.10.22.211",
      "10.10.22.212",
      "10.10.22.213"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Beijing",
        "L": "bj",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

Note: the hosts list contains every kube-scheduler node IP. CN is system:kube-scheduler and O is system:kube-scheduler; the built-in Kubernetes ClusterRoleBinding system:kube-scheduler grants the permissions kube-scheduler needs to work.

Generate the certificate:

[root@k8s-master1 k8s-hxu]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kube-scheduler kubeconfig
1. Set the cluster parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.22.20:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.

2. Set the client authentication parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.

3. Set the context parameters

[root@k8s-master1 k8s-hxu]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.

4. Set the default context

[root@k8s-master1 k8s-hxu]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".

5. Create the configuration file kube-scheduler.conf:

[root@k8s-master1 k8s-hxu]# cat kube-scheduler.conf 
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

6. Create the service unit file:

[root@k8s-master1 k8s-hxu]# cat kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Start the service:

[root@k8s-master1 k8s-hxu]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# cp kube-scheduler.kubeconfig kube-scheduler.conf /etc/kubernetes/
[root@k8s-master1 k8s-hxu]# cp kube-scheduler.service /usr/lib/systemd/system/

Sync the files to the other nodes:

[root@k8s-master1 k8s-hxu]# rsync -vaz kube-scheduler*.pem k8s-node1:/etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-scheduler*.pem k8s-node2:/etc/kubernetes/ssl/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf k8s-node1:/etc/kubernetes/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf k8s-node2:/etc/kubernetes/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-scheduler.service k8s-node1:/usr/lib/systemd/system/
[root@k8s-master1 k8s-hxu]# rsync -vaz kube-scheduler.service k8s-node2:/usr/lib/systemd/system/

Start:

[root@k8s-master1 k8s-hxu]# systemctl daemon-reload
[root@k8s-master1 k8s-hxu]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master1 k8s-hxu]# systemctl start kube-scheduler
[root@k8s-master1 k8s-hxu]# systemctl status kube-scheduler

 

[root@k8s-master1 k8s-hxu]# netstat -ntlp|grep kube-schedule
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      6991/kube-scheduler 
tcp6       0      0 :::10259                :::*                    LISTEN      6991/kube-scheduler

7. Deploy the kubelet component

kubelet: the kubelet on each Node periodically calls the API Server's REST interface to report its own status, and the API Server writes the node status into etcd. The kubelet also watches Pod information via the API Server and manages the Pods on its node accordingly: create, delete, update. Control-plane nodes do not need to schedule Pods, so kubelet is not required on them; to make certificate generation convenient, the steps below are still run on master1 and the results are synced to node1 at the end.
The following is performed on k8s-master1.
Create kubelet-bootstrap.kubeconfig:

[root@k8s-master1 ~]# cd k8s-hxu/
[root@k8s-master1 k8s-hxu]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
[root@k8s-master1 k8s-hxu]# echo $BOOTSTRAP_TOKEN
bcfcffee295879be73c4f29bfa7bcbea
[root@k8s-master1 k8s-hxu]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.22.20:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master1 k8s-hxu]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
User "kubelet-bootstrap" set.
[root@k8s-master1 k8s-hxu]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Context "default" created.
[root@k8s-master1 k8s-hxu]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Switched to context "default".
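
As explained in the TLS bootstrapping notes above, the kubelet-bootstrap user has no permissions yet, not even to create CSR requests; before the kubelet's first start it must be bound to the built-in system:node-bootstrapper ClusterRole. A minimal sketch of that binding:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap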

Notes on preparing an additional node (k8s-node4):
[root@k8s-node4 yum.repos.d]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1  k8s-node4 node4
10.10.20.90  k8s-node4 node4
10.10.22.20 k8s-master1 master1
10.10.22.121 k8s-node1 node1
10.10.22.211 k8s-node2 node2
[root@k8s-node4 yum.repos.d]# yum install docker-ce-20.10.14 docker-ce-cli-20.10.14 containerd.io
[root@k8s-node4 yum.repos.d]# docker pull dyrnq/coredns:v1.8.6
v1.8.6: Pulling from dyrnq/coredns
d92bdee79785: Pull complete 
6e1b7c06e42d: Pull complete 
Digest: sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
Status: Downloaded newer image for dyrnq/coredns:v1.8.6
docker.io/dyrnq/coredns:v1.8.6
