Kubernetes Binary Installation (v1.23.5)

⏰ Written: 2022-04-12 17:51
🙋 Author: @mr.pan

😃 Background

There are currently two main ways to deploy a production Kubernetes cluster:

  • kubeadm

kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

  • Binary packages

Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.

Kubernetes releases: https://github.com/kubernetes/kubernetes/releases

 

kubeadm lowers the barrier to entry, but it hides many details, which makes troubleshooting harder. If you want more control, deploying from binary packages is recommended: it is more work up front, but along the way you learn how the pieces fit together, which also pays off in later maintenance.

 

📌 Requirements

  • One or more machines running CentOS 7.x x86_64
  • Hardware: 2 GB+ RAM, 2+ CPUs, 30 GB+ disk
  • Network connectivity between all machines in the cluster
  • Internet access, for pulling images
  • Swap disabled

 

📑 Cluster Plan

No.  IP               Role    Components
1    192.168.208.128  Master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler
2    192.168.208.129  Node    kubelet, kube-proxy, docker
3    192.168.208.130  Node    kubelet, kube-proxy, docker

 

📅 Software Versions

Versions used below: Kubernetes v1.23.5, etcd v3.3.27, Docker 20.10.12, cfssl 1.6.1, Calico v3.14, Dashboard v2.0.0.

📂 Deployment Preparation

#configure hosts on all machines
cat <<EOF>> /etc/hosts
192.168.208.128 k8s-master
192.168.208.129 k8s-nodes01
192.168.208.130 k8s-nodes02
EOF
#configure yum repos
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
#install required dependencies
yum install wget vim net-tools telnet lrzsz tree ntpdate ipvsadm ipset sysstat jq psmisc lvm2 git conntrack libseccomp -y
#disable firewalld/selinux
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0
#disable swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#sync time, as needed
ntpdate time2.aliyun.com
crontab -e
*/5 * * * * ntpdate time2.aliyun.com
====================================================================
#tune the Linux kernel
#configure ulimit (soft limits must not exceed hard limits)
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
#upgrade the kernel if needed; not upgraded here
uname -r
#load the ipvs modules
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl enable --now systemd-modules-load.service
#add k8s kernel parameters
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# avoid swap unless the system is out of memory
vm.swappiness=0
# don't check whether physical memory is sufficient
vm.overcommit_memory=1
# keep the OOM killer enabled
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
##once everything above is in place, reboot
reboot
#verify after the reboot
lsmod | grep -e ip_vs -e nf_conntrack
-------------------------------------------------------------------
#passwordless ssh login
ssh-keygen -t rsa
ssh-copy-id k8s-nodes01
ssh k8s-nodes01
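The sed in the swap-disable step above comments out every fstab line containing " swap ". A minimal offline sanity check of that pattern (the device path here is just an example, not from the original post):

```shell
# Run the same sed used against /etc/fstab on a sample swap entry;
# the line should come back commented out.
echo "/dev/mapper/centos-swap swap swap defaults 0 0" \
  | sed '/ swap / s/^\(.*\)$/#\1/'
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```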

 

📕 Master Node

References:
https://www.cnblogs.com/wdyjx/p/16004407.html
https://blog.csdn.net/jato333/article/details/123956783
https://zhuanlan.zhihu.com/p/472693562

🔒 Installing Etcd

Getting the packages

#upload the etcd package, extract and configure it

#download the cfssl binaries and make them executable

#extract etcd
tar -zxvf etcd-v3.3.27-linux-amd64.tar.gz -C /usr/local/
cd /usr/local/etcd-v3.3.27-linux-amd64/ && cp etcd* /usr/local/bin/
#download cfssl
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
#make executable and move into place (names match the files downloaded above)
chmod +x cfssl*
mv cfssl_1.6.1_linux_amd64 /usr/local/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64 /usr/local/bin/cfssl-certinfo
Edit the etcd configuration file
#create the directories first -- this matters
mkdir -p /etc/etcd/
mkdir -p /var/lib/etcd/ ;chmod 757 -R /var/lib/etcd/
#adjust to your environment
cat > /etc/etcd/etcd.conf << EOF
#[Member]
#etcd name, customizable
ETCD_NAME="etcd1"
#etcd data directory, must exist beforehand
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#peer (cluster-internal) listen address
#ETCD_LISTEN_PEER_URLS="https://192.168.208.128:2380"
#client listen IP and port; the default is 127.0.0.1 only -- add the host IP (or 0.0.0.0) to listen externally
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,https://192.168.208.128:2379"
#[Clustering]
#advertised peer URL; uncomment when running multiple etcd servers
#ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.208.128:2380"
#advertised client URLs; use the real etcd server IP
ETCD_ADVERTISE_CLIENT_URLS="http://127.0.0.1:2379,https://192.168.208.128:2379"
#cluster member addresses; uncomment when running multiple etcd servers
#ETCD_INITIAL_CLUSTER="etcd1=https://192.168.208.128:2380,etcd2=https://192.168.208.129:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Create the etcd systemd unit file
#note the paths of the etcd config file, data directory, ssl certificate directory, and unit file
cat >/usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
--cert-file=/etc/etcd/ssl/etcd.pem \\
--key-file=/etc/etcd/ssl/etcd-key.pem \\
--trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-cert-file=/etc/etcd/ssl/etcd.pem \\
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Generate the certificates needed by the etcd cluster

#create the directories first

mkdir -p /data/cert/
mkdir -p /etc/etcd/ssl/
cd /data/cert/
Create the CA
#CA signing request
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF

 

Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

 

Create the CA signing config for the etcd certificates
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

 

Generate the etcd CSR file
#etcd certificate signing request
#the hosts field below must list the cluster-internal IPs of every etcd node, without exception; you can add a few spare IPs to make future expansion easier
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.208.128",
    "192.168.208.129"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "Beijing",
    "O": "k8s",
    "OU": "system"
  }]
}
EOF

 

Generate the etcd certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

 

Move the certificates into place
cp ca*.pem /etc/etcd/ssl/
cp etcd*.pem /etc/etcd/ssl/

 

Check that etcd is running
#start etcd
systemctl enable etcd && systemctl start etcd
#check health; separate multiple IPs with commas
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.208.128:2379 endpoint health

 

#common commands
./etcdctl --endpoints=ip,ip,ip endpoint status
./etcdctl --endpoints=ip,ip,ip endpoint health
./etcdctl --endpoints=ip,ip,ip endpoint hashkv
./etcdctl --endpoints=ip,ip,ip member list
./etcdctl --endpoints=ip,ip,ip check perf
./etcdctl --endpoints=ip,ip,ip check datascale

 

Note: if anything goes wrong, "tail -fn 50 /var/log/messages" helps inspect the system log.


 

🔒 Installing kube-apiserver

Getting the packages

#upload the kubernetes package, extract and configure it

tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin
# copy the kubelet and kube-proxy binaries to the node machines
scp kubelet kube-proxy k8s-nodes01:/usr/local/bin/
#create the working directories
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl/
mkdir -p /var/log/kubernetes/
Create the kube-apiserver certificate
Generate the CSR file
#the hosts field below must list the IPs of every Master/LB/VIP node (plus the service-range and pod-communication IPs), without exception; you can add spare IPs for future expansion
#this certificate is used by the whole kubernetes master cluster, so fill in every master IP, plus the first IP of the service network (usually the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.255.0.1)
cat > kube-apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.208.128",
    "192.168.208.129",
    "192.168.208.130",
    "10.255.0.1",
    "10.185.0.1",
    "10.186.0.1",
    "10.187.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF

 

Sign the apiserver certificate
#generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

 

Enable the TLS Bootstrapping mechanism

TLS Bootstrapping: once the Master apiserver has TLS authentication enabled, the kubelet and kube-proxy on each Node must present valid CA-signed certificates to talk to kube-apiserver. With many nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping, which issues client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the kubelet's certificate is signed dynamically by the apiserver. This approach is strongly recommended on Nodes. It is currently used mainly for the kubelet; kube-proxy still uses a certificate that we issue centrally.

#generate a token
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
##alternatively, generate it like this
# format: token,username,UID,group
# you can also generate and substitute your own token:
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cat > token.csv << EOF
62ce460c38be936045f25d99f8a5aa45,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
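The token produced by the pipeline above should be 32 lowercase hex characters. A small sketch that generates one and checks the format before it goes into token.csv:

```shell
# Generate a bootstrap token the same way as above and verify it is
# 32 hex characters, matching the token,user,uid,"group" format.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token ok: $TOKEN"
```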

 

Move the apiserver certificates into place
\cp ca*.pem /etc/kubernetes/ssl/
\cp kube-apiserver*.pem /etc/kubernetes/ssl/
\cp token.csv /etc/kubernetes/

 

Create the kube-apiserver configuration file
##adjust to your environment; mind the paths of the config files, data directories, and ssl certificates
cat > /etc/kubernetes/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--anonymous-auth=false \\
--bind-address=192.168.208.128 \\
--secure-port=6443 \\
--advertise-address=192.168.208.128 \\
--insecure-port=0 \\
--authorization-mode=RBAC,Node \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth=true \\
--service-cluster-ip-range=10.255.0.0/16 \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/ssl/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--etcd-cafile=/etc/etcd/ssl/ca.pem \\
--etcd-certfile=/etc/etcd/ssl/etcd.pem \\
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://192.168.208.128:2379,https://192.168.208.129:2379 \\
--enable-swagger-ui=true \\
--allow-privileged=true \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4"
EOF
##=== parameter notes
--logtostderr: log to stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization; enables RBAC plus Node self-management
--enable-bootstrap-token-auth: enables the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate for the apiserver to reach the kubelet
--tls-xxx-file: apiserver https certificate
--etcd-xxxfile: certificates for connecting to etcd
--audit-log-xxx: audit log settings

 

Create the kube-apiserver unit file
#note the paths of the apiserver config file, data directory, ssl certificate directory, and unit file
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Start kube-apiserver
systemctl daemon-reload
systemctl start kube-apiserver && systemctl enable kube-apiserver

 

Note: if anything goes wrong, "tail -fn 50 /var/log/messages" helps inspect the system log.


 

🔒 Installing kube-controller-manager

Create the controller-manager certificate
Generate the CSR file
#certificate signing request
# the hosts list contains the IPs of all kube-controller-manager nodes;
# CN is system:kube-controller-manager
# O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs
cat > kube-controller-manager-csr.json <<-EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.208.128",
    "192.168.208.129"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
EOF

 

Sign the controller-manager certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

 

Move the controller-manager certificate and kubeconfig into place (the kubeconfig is generated in the next step, so run that copy afterward)
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/

 

Create the kubeconfig file
#set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.208.128:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
#set client credentials
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
#set the context
kubectl config set-context system:kube-controller-manager \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
#set the default context
kubectl config use-context system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

 

 

Create the controller-manager configuration file
##adjust to your environment; mind the paths of the config files, data directories, and ssl certificates
cat > /etc/kubernetes/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS=" \\
--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--service-cluster-ip-range=10.255.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.186.0.0/16 \\
--experimental-cluster-signing-duration=87600h \\
--root-ca-file=/etc/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
--leader-elect=true \\
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4"
EOF

 

Create the controller-manager unit file
#note the paths of the controller-manager config file, data directory, ssl certificate directory, and unit file
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start kube-controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager && systemctl start kube-controller-manager

Note: if anything goes wrong, "tail -fn 50 /var/log/messages" helps inspect the system log.


 

 

🔒 Installing kube-scheduler

Create the kube-scheduler certificate
Generate the CSR file
# the hosts list contains the IPs of all kube-scheduler nodes;
# CN is system:kube-scheduler
# O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.208.128",
    "192.168.208.129"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
EOF

 

Sign the scheduler certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

 

Move the scheduler certificate and kubeconfig into place (the kubeconfig is generated in the next step, so run that copy afterward)
cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/

 

Create the kubeconfig file
#set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.208.128:6443 \
--kubeconfig=kube-scheduler.kubeconfig
#set client credentials
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
#set the context
kubectl config set-context system:kube-scheduler \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
#set the default context
kubectl config use-context system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

 

 

Create the scheduler configuration file
##adjust to your environment; mind the paths of the config files and data directories
cat > /etc/kubernetes/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4"
EOF

 

Create the scheduler unit file
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler && systemctl start kube-scheduler

 

Note: if anything goes wrong, "tail -fn 50 /var/log/messages" helps inspect the system log.


At this point the three Master components (apiserver, controller-manager, scheduler) are deployed and running. Next, check the status of all the components.

👀 Checking Cluster Component Status

Create the admin CSR file

## this step can also be thought of as deploying the kubectl component.

Generate the certificate config for connecting to the cluster
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF

 

Notes:

kube-apiserver later uses RBAC to authorize client requests (kubelet, kube-proxy, Pods);

kube-apiserver predefines some RoleBindings for RBAC use; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API;

O sets the certificate's Group to system:masters. When the kubelet accesses kube-apiserver with this certificate, authentication succeeds because the certificate is CA-signed, and because the certificate's group is the pre-authorized system:masters, it is granted access to all APIs;

Note:

this admin certificate will later be used to generate the administrator kubeconfig file. RBAC is the generally recommended way to control roles and permissions in kubernetes; kubernetes treats the certificate's CN field as the User and the O field as the Group;

"O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding will fail.
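Since kubernetes reads User/Group from the certificate subject, it is worth confirming CN and O after signing. A minimal sketch using openssl -- the self-signed throwaway cert here is only a stand-in for the real admin.pem:

```shell
# Create a throwaway cert with the same subject fields as admin-csr.json,
# then print its subject to confirm CN=admin and O=system:masters.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=system:masters/OU=system/CN=admin" \
  -keyout demo-key.pem -out demo.pem 2>/dev/null
openssl x509 -in demo.pem -noout -subject
# the subject should include system:masters and admin
```

On the real cluster you would run `openssl x509 -in admin.pem -noout -subject` (or `cfssl-certinfo -cert admin.pem`) instead.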

 

Sign the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

 

Move the certificates into place (kube.config is generated in the next step, so run that copy afterward)
cp admin*.pem /etc/kubernetes/ssl/
mkdir ~/.kube
cp kube.config ~/.kube/config

 

Create the kubeconfig file
# kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate
#set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://192.168.208.128:6443 \
--kubeconfig=kube.config
#set client credentials
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=kube.config
#set the context
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin \
--kubeconfig=kube.config
#set the default context
kubectl config use-context kubernetes \
--kubeconfig=kube.config
# mkdir ~/.kube
# cp kube.config ~/.kube/config
#grant the kubernetes user access to the kubelet api
#i.e. authorize the user to request certificates
kubectl create clusterrolebinding kube-apiserver:kubelet-apis \
--clusterrole=system:kubelet-api-admin \
--user kubernetes

 

 

💡

Once the steps above are done, kubectl can talk to kube-apiserver.

Check component status
kubectl get cs
kubectl cluster-info
kubectl get componentstatuses
kubectl get all --all-namespaces

 

 


 

📘 Node01

🔒 Installing Docker

Getting the packages

#upload the docker package, extract and configure it

tar -zxvf docker-20.10.12.tgz
cp docker/* /usr/local/bin/
## create the /etc/docker directory and configure daemon.json
mkdir -p /etc/docker/
tee /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": ["https://u7vs31xg.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Create the docker systemd unit file
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/local/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
Start docker
systemctl enable docker && systemctl restart docker

🔒 Installing kubelet

#create the working directories
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl/
mkdir -p /etc/kubernetes/manifests/
mkdir -p /var/log/kubernetes/
mkdir -p /var/lib/kubelet
Authorize the node to request certificates
🔔

This step must be run on the Master node!

#required for nodes -- without this the kubelet on the node cannot start; it creates a user that is allowed to request certificates
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
#set cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.208.128:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
#set client credentials
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
#set the context
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
#set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
#create the role binding
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
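The BOOTSTRAP_TOKEN line above takes the first comma-separated field of token.csv. A small offline sketch of that extraction, using a throwaway file with the sample token from earlier:

```shell
# Build a sample token.csv and extract the token field the same way
# the BOOTSTRAP_TOKEN assignment above does.
tmp=$(mktemp)
echo '62ce460c38be936045f25d99f8a5aa45,kubelet-bootstrap,10001,"system:kubelet-bootstrap"' > "$tmp"
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' "$tmp")
echo "$BOOTSTRAP_TOKEN"
# prints: 62ce460c38be936045f25d99f8a5aa45
rm -f "$tmp"
```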

 

Create the configuration file
💡

Run on the Node machines.

cat > /etc/kubernetes/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 10.255.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
# set [address] in kubelet-config.yml to each node's own IP, or 0.0.0.0
# if docker's driver is systemd, [cgroupDriver] must be systemd too -- this matters, otherwise the node cannot join the cluster
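A mismatch between docker's cgroup driver and the kubelet's is a common reason nodes fail to join. A small sketch of a consistency check, assuming the file layouts used above (demonstrated here on sample files):

```shell
# Compare the cgroup driver configured for docker and for the kubelet.
# On a real node, point these at /etc/docker/daemon.json and
# /etc/kubernetes/kubelet-config.yml; sample files are used here.
docker_cfg=$(mktemp); kubelet_cfg=$(mktemp)
echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > "$docker_cfg"
echo 'cgroupDriver: systemd' > "$kubelet_cfg"
d=$(grep -o 'native.cgroupdriver=[a-z]*' "$docker_cfg" | cut -d= -f2)
k=$(awk '/^cgroupDriver:/ {print $2}' "$kubelet_cfg")
[ "$d" = "$k" ] && echo "cgroup drivers match: $d"
rm -f "$docker_cfg" "$kubelet_cfg"
```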

 

Create the kubelet unit file
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
--hostname-override=k8s-nodes01 \\
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--config=/etc/kubernetes/kubelet-config.yml \\
--network-plugin=cni \\
--cert-dir=/etc/kubernetes/ssl \\
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \\
--v=4
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
📌

Note: --pod-infra-container-image is the image for the infrastructure container that manages the Pod network.

The k8s.gcr.io/pause:3.2 image cannot be pulled directly by default; download it through an Aliyun mirror registry, or point this flag at your own private registry.

 

#pull the coredns component (see the CoreDNS section below)

Copy the certificates
#run on the Node machines
#copy the certificates from the Master node to the node
cd /etc/kubernetes/
scp -rp k8s-master:/data/cert/kubelet-bootstrap.kubeconfig ./
scp -rp k8s-master:/data/cert/ca.pem ./ssl/
scp -rp k8s-master:/data/cert/ca-key.pem ./ssl/
Start the kubelet
systemctl daemon-reload && systemctl enable kubelet
systemctl start kubelet && systemctl status kubelet
Approve the kubelet certificate request and join the cluster

Once the kubelet service is confirmed running, go back to the Master node and approve the bootstrap request. The following command shows the CSR requests sent by the Nodes:

kubectl get csr
#approve the kubelet certificate request, joining the node to the cluster
kubectl certificate approve <request name from the query above>
#for example:
kubectl certificate approve node-csr-O73Wkk6YcpWMOb0Tmyt_AN2zxn1U5qqc6wlWufIL9Zo
kubectl get csr
#to delete a csr:
kubectl delete csr node-csr-O73Wkk6YcpWMOb0Tmyt_AN2zxn1U5qqc6wlWufIL9Zo
#the nodes are now visible
kubectl get nodes

 

Verify on the node

In the node's ssl directory you can now see two additional kubelet certificate files.

 


 

🔒 Installing kube-proxy

Create the CSR request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Beijing",
    "L": "Beijing",
    "O": "k8s",
    "OU": "system"
  }]
}
EOF

 

Sign the proxy certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

 

Generate the kubeconfig file
#set cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.208.128:6443 --kubeconfig=kube-proxy.kubeconfig
#set client credentials
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
#set the context
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
#set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

 

Copy the certificates
#run on the Node machines
#copy the certificates from the Master node to the node
mkdir -p /var/lib/kube-proxy
cd /etc/kubernetes/
scp -rp k8s-master:/data/cert/kube-proxy.kubeconfig ./
scp -rp k8s-master:/data/cert/kube-proxy.pem ./ssl/
scp -rp k8s-master:/data/cert/kube-proxy-key.pem ./ssl/
Create the configuration file
👋

Run on the Node machines.

#run on node01
#the clusterCIDR here must match the network plugin's CIDR, otherwise deploying the network plugin will fail
cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.208.129
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.186.0.0/16
healthzBindAddress: 192.168.208.129:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.208.129:10249
mode: "ipvs"
EOF
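As the comment above warns, kube-proxy's clusterCIDR must agree with the controller-manager's --cluster-cidr (10.186.0.0/16 in this guide). A small offline sketch of that cross-check, demonstrated on sample files:

```shell
# Extract the pod CIDR from both configs and compare.
# On real machines, point these at /etc/kubernetes/kube-proxy.yaml and
# /etc/kubernetes/kube-controller-manager.conf; sample files are used here.
proxy_cfg=$(mktemp); cm_cfg=$(mktemp)
echo 'clusterCIDR: 10.186.0.0/16' > "$proxy_cfg"
echo '--cluster-cidr=10.186.0.0/16 \' > "$cm_cfg"
p=$(awk '/^clusterCIDR:/ {print $2}' "$proxy_cfg")
c=$(grep -o 'cluster-cidr=[0-9./]*' "$cm_cfg" | cut -d= -f2)
[ "$p" = "$c" ] && echo "pod CIDRs match: $p"
rm -f "$proxy_cfg" "$cm_cfg"
```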

 

Create the kube-proxy unit file
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
--config=/etc/kubernetes/kube-proxy.yaml \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Start kube-proxy
systemctl daemon-reload && systemctl restart kube-proxy && systemctl enable kube-proxy

🔒 Installing Calico

🔔

Install from the Master node.

# fetch calico
wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# install it
kubectl apply -f calico.yaml
#or install directly from the web:
#kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# check the pods
kubectl get pods -A
# check the nodes
kubectl get nodes

 

🔒 Installing CoreDNS

👋

Install from the Master node.

#fetch the coredns manifest
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base
#rename it
cp coredns.yaml.base coredns.yaml
>>>
Edit the yaml file; there are 4 changes:
kubernetes cluster.local in-addr.arpa ip6.arpa
forward . /etc/resolv.conf
memory: 170Mi
clusterIP: 10.255.0.2 (the clusterDNS value from the kubelet config file)
>>>
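The four edits can also be scripted. In the upstream coredns.yaml.base these values are template placeholders (`__DNS__DOMAIN__`, `__DNS__MEMORY__LIMIT__`, `__DNS__SERVER__` -- verify the names against your downloaded copy). A sketch of the substitution, demonstrated on a sample snippet rather than the real file:

```shell
# Substitute the coredns template placeholders the same way you would
# on coredns.yaml (sample snippet used here for illustration).
f=$(mktemp)
cat > "$f" << 'EOT'
kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa
memory: __DNS__MEMORY__LIMIT__
clusterIP: __DNS__SERVER__
EOT
sed -i -e 's/__DNS__DOMAIN__/cluster.local/' \
       -e 's/__DNS__MEMORY__LIMIT__/170Mi/' \
       -e 's/__DNS__SERVER__/10.255.0.2/' "$f"
cat "$f"
rm -f "$f"
```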

 

#install
kubectl apply -f coredns.yaml

 

 

🔒 Installing the Dashboard

#fetch the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
>>>
# change the kubernetes-dashboard Service type
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30009 # added
  selector:
    k8s-app: kubernetes-dashboard
>>>
#create it
kubectl create -f recommended.yaml
#create the administrator user
kubectl apply -f admin.yaml -n kube-system
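The admin.yaml referenced above is not shown in the original post. A minimal sketch of what such a manifest typically contains -- a ServiceAccount bound to the built-in cluster-admin role (the account name "dashboard-admin" is an example, not from the post):

```shell
# Write a minimal admin.yaml: a ServiceAccount plus a ClusterRoleBinding
# to the built-in cluster-admin role.
cat > admin.yaml << 'EOT'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
EOT
echo "admin.yaml written"
```

After applying it, the login token for the dashboard can be read from the ServiceAccount's secret in the kube-system namespace (e.g. via `kubectl -n kube-system describe secret`).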

 

📙 Node02

Same as Node01!!!
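Repeat the Node01 steps on 192.168.208.130, changing only the per-node values: the kubelet's --hostname-override and the addresses in kube-proxy.yaml. A sketch of those substitutions, demonstrated on sample lines (on the real node, edit /usr/lib/systemd/system/kubelet.service and /etc/kubernetes/kube-proxy.yaml):

```shell
# Show the node02 variants of the two per-node settings used earlier.
echo '--hostname-override=k8s-nodes01 \' \
  | sed 's/k8s-nodes01/k8s-nodes02/'
echo 'bindAddress: 192.168.208.129' \
  | sed 's/192.168.208.129/192.168.208.130/'
```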

 

posted @ i潘小潘