Deploying a Highly Available Kubernetes Cluster with kubeadm

1 Base Environment

1.1 Resources

Node name    IP address
VIP          192.168.12.100
master01     192.168.12.48
master02     192.168.12.242
master03     192.168.12.246
node01       192.168.12.83
node02       192.168.12.130
node03       192.168.12.207
node04       192.168.12.182
node05       192.168.12.43
node06       192.168.12.198

1.2 Update /etc/hosts on All Nodes

cat >> /etc/hosts <<EOF
192.168.12.100 master
192.168.12.48 master01
192.168.12.242 master02
192.168.12.246 master03
192.168.12.83 node01
192.168.12.130 node02
192.168.12.207 node03
192.168.12.182 node04
192.168.12.43 node05
192.168.12.198 node06
10.0.7.141 harbor.xmkj.gz
EOF

1.3 Passwordless SSH

Run on master01.

# Generate an SSH key pair
ssh-keygen -t rsa

# Authorize the local key so master01 can also SSH to itself
cp -p .ssh/id_rsa.pub .ssh/authorized_keys

# Copy the public key to the other nodes
for H in master0{2..3}; do ssh-copy-id $H; done
for H in node0{1..6}; do ssh-copy-id $H; done
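
A quick optional check that key-based login now works from master01 to every node:

for H in master0{2..3} node0{1..6}; do ssh -o BatchMode=yes $H true && echo "$H ok"; done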

1.4 Set Each Node's hostname

Run on master01.

for H in master0{1..3}; do ssh $H "hostnamectl set-hostname $H"; done
for H in node0{1..6}; do ssh $H "hostnamectl set-hostname $H"; done

1.5 Disable the Firewall

Run on all nodes.

systemctl stop firewalld
systemctl disable firewalld
yum install iptables-services -y

1.6 Disable SELinux

Run on all nodes.

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

1.7 Disable Swap

Run on all nodes.

swapoff -a
cp -f /etc/fstab /etc/fstab_bak
grep -v swap /etc/fstab_bak > /etc/fstab
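
Verify that no swap remains active:

swapon -s                 # no output means swap is off
free -h | grep -i swap    # the Swap line should show 0B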

1.8 Update the System

yum install wget -y
rm -rf /etc/yum.repos.d/*

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum update -y

1.9 Allow iptables to See Bridged Traffic (Optional, All Nodes)

Load the br_netfilter and overlay kernel modules:

modprobe br_netfilter
modprobe overlay

Check that br_netfilter is loaded:

lsmod | grep br_netfilter

Load the modules automatically at boot:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Set the required sysctl parameters; they persist across reboots:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
echo "net.ipv4.ip_nonlocal_bind = 1" >>/etc/sysctl.conf

Apply the sysctl parameters without rebooting:

sysctl --system
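
Confirm the values are live:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.ipv4.ip_nonlocal_bind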

1.10 Load the IPVS Modules

Run on all nodes. On kernels 4.19 and newer, replace nf_conntrack_ipv4 with nf_conntrack in the script below.

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Run the script:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install the management tools:

yum install ipset ipvsadm -y
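
ipvsadm becomes useful once kube-proxy runs in ipvs mode (configured in section 3.3); the virtual servers it programs can then be listed:

ipvsadm -Ln    # lists IPVS virtual servers and their real-server backends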

1.11 Time Synchronization

Run on all nodes.

yum install chrony -y

Edit /etc/chrony.conf; with comments and blank lines filtered out (the egrep below), it should read:

egrep -v "^$|#" /etc/chrony.conf 
server ntp1.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.0.0/16
local stratum 10
logdir /var/log/chrony
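
Restart chronyd and verify that the time source is reachable:

systemctl enable chronyd
systemctl restart chronyd
chronyc sources -v    # ntp1.aliyun.com should appear with state ^* or ^+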

2 High Availability Setup

2.1 Deploy Pacemaker

Run on all master nodes.

Install pacemaker and related packages:

yum install pacemaker pcs corosync fence-agents resource-agents -y

Start the pcsd service:

systemctl enable pcsd
systemctl start pcsd
systemctl status pcsd

Set the password for the cluster administrator hacluster (created automatically by the packages):

echo pacemaker_pass | passwd --stdin hacluster

2.2 Configure the Pacemaker Cluster

Run on one master node.

Authenticate the nodes and assemble the cluster, using the password set in the previous step:

pcs cluster auth master01 master02 master03 -u hacluster -p pacemaker_pass --force

Create and name the cluster; this generates /etc/corosync/corosync.conf:

pcs cluster setup --force --name k8s_cluster_ha master01 master02 master03

2.3 Start the Cluster

Run on one master node.

Start the cluster:

pcs cluster start --all
pcs status

Enable the cluster at boot:

pcs cluster enable --all

Set cluster properties:

pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5

Disable STONITH. Corosync enables it by default, but no STONITH device is configured here, and Pacemaker refuses to start any resources in that state:

pcs property set stonith-enabled=false

Check the properties after the change:

pcs property list

2.4 Install the haproxy Load Balancer

Run on all master nodes.

yum install haproxy -y

Write the haproxy configuration. haproxy listens on the VIP at port 8443 so it does not collide with kube-apiserver, which binds port 6443 on every master:

cat >/etc/haproxy/haproxy.cfg<<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
listen stats
  bind 0.0.0.0:1080
  mode http
  stats enable
  stats uri /
  stats realm kubernetes\ Haproxy
  stats auth admin:admin
  stats  refresh 30s
  stats  show-node
  stats  show-legends
  stats  hide-version
listen  k8s-api
   bind 192.168.12.100:8443
   mode tcp
   option tcplog
   log global
   server master01  192.168.12.48:6443   check inter 3000 fall 2 rise 5
   server master02  192.168.12.242:6443  check inter 3000 fall 2 rise 5
   server master03  192.168.12.246:6443  check inter 3000 fall 2 rise 5
EOF
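
Optionally validate the configuration syntax:

haproxy -c -f /etc/haproxy/haproxy.cfg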

Enable haproxy at boot:

systemctl enable haproxy

2.5 Create Cluster Resources

Run on any master node.

Configure the virtual IP:

pcs resource create kube-api-vip ocf:heartbeat:IPaddr2 ip=192.168.12.100 cidr_netmask=24 op monitor interval=30s

Add the haproxy resource:

pcs resource create k8s-haproxy systemd:haproxy

Set the start order (VIP first, then haproxy):

pcs constraint order kube-api-vip then k8s-haproxy

Make sure both resources are enabled:

pcs resource enable kube-api-vip k8s-haproxy

Colocate the two resources so they always run on the same node:

pcs constraint colocation add k8s-haproxy with kube-api-vip

Check the cluster state:

pcs status
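
On the node that pcs status reports as running kube-api-vip, the VIP should be visible:

ip addr show | grep 192.168.12.100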


3 Kubernetes Cluster Installation

3.1 Install Docker

Run on all nodes.

Install the dependencies:

yum install -y yum-utils device-mapper-persistent-data lvm2 nfs-utils vim

Configure the Docker yum repository:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

List the available Docker versions:

yum list docker-ce --showduplicates | sort -r

Install a specific version:

yum install docker-ce-20.10.9-3.el7 docker-ce-cli-20.10.9-3.el7 containerd.io-1.4.11-3 -y

Or install the latest version:

yum install docker-ce docker-ce-cli containerd.io -y

Configure a registry mirror:

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

Start Docker:

systemctl daemon-reload
systemctl start docker
systemctl enable docker

3.2 Install Kubernetes

Configure the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

List the available Kubernetes versions:

yum list kubeadm --showduplicates | sort -r

Install a specific version (this guide uses 1.22.2):

yum install kubeadm-1.22.2-0 kubelet-1.22.2-0 kubectl-1.22.2-0 lrzsz -y

Or install the latest version:

yum install kubeadm kubelet kubectl lrzsz -y

Change Docker's cgroup driver to systemd

In /usr/lib/systemd/system/docker.service, change the line
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
to
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
Without this change, joining worker nodes may hit the following warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
Please follow the guide at https://kubernetes.io/docs/setup/cri/

The sed below makes the edit in place:

sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service

Reload systemd and restart Docker so the unit-file change takes effect:

systemctl daemon-reload && systemctl enable docker && systemctl restart docker

Enable kubelet (it will restart in a loop until kubeadm init runs; that is expected):

systemctl enable kubelet && systemctl start kubelet

3.3 Initialize the Cluster

Run on master01.

Generate the default init configuration:

kubeadm config print init-defaults > kubeadm-init.yaml

Edit the configuration:

vim kubeadm-init.yaml

The main changes: set localAPIEndpoint.advertiseAddress to master01's IP, set controlPlaneEndpoint, set imageRepository, and append a KubeProxyConfiguration block (apiVersion: kubeproxy.config.k8s.io/v1alpha1) that switches kube-proxy to ipvs mode.

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.12.48
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "master:8443"
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.22.2
networking:
  dnsDomain: cluster.local
  #serviceSubnet: 10.96.0.0/12
  podSubnet: "10.100.0.0/16"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

Initialize the cluster from the configuration file:

kubeadm init --config kubeadm-init.yaml

(The kubeadm init output ends with the join commands used in section 3.5.)

3.4 Deploy the Calico Network

Run on master01.

Download the Calico manifest:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

Edit the Calico manifest:

vim calico.yaml

Uncomment CALICO_IPV4POOL_CIDR and set it to the pod subnet from kubeadm-init.yaml (10.100.0.0/16).

Configure kubectl credentials:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy Calico:

kubectl apply -f calico.yaml
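
Watch the Calico and CoreDNS pods start; master01 turns Ready once the CNI is up:

kubectl get pods -n kube-system
kubectl get nodes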

3.5 Join the Remaining Nodes

Distribute the Kubernetes certificates to the other master nodes (run on master01):

USER=root
CONTROL_PLANE_IPS="master02 master03"
for host in ${CONTROL_PLANE_IPS}; do
  ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done

Join the other master nodes (the command is printed by kubeadm init; note the --control-plane flag):

kubeadm join master:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:9cf6eb4106afb43e6487a1975563bfacff9e85ad96886fef6109bf8aa6fc6f5b --control-plane

Join the worker nodes:

kubeadm join master:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:9cf6eb4106afb43e6487a1975563bfacff9e85ad96886fef6109bf8aa6fc6f5b

If the token has expired, generate a fresh join command:

kubeadm token create --print-join-command
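
After all nodes have joined, confirm from master01:

kubectl get nodes -o wide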

3.6 Kuboard Management UI

Run on master01.

Deploy online:

kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.7/metrics-server.yaml

Check that Kuboard is running:

kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system

Expected output:

NAME                       READY   STATUS        RESTARTS   AGE
kuboard-54c9c4f6cb-6lf88   1/1     Running       0          45s

Obtain the admin and read-only user tokens.

The token below carries ClusterAdmin privileges and can perform any operation:

echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep ^kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)

Take the token field from the output:

eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWc4aHhiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5NDhiYjVlNi04Y2RjLTExZTktYjY3ZS1mYTE2M2U1ZjdhMGYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.DZ6dMTr8GExo5IH_vCWdB_MDfQaNognjfZKl0E5VW8vUFMVvALwo0BS-6Qsqpfxrlz87oE9yGVCpBYV0D00811bLhHIg-IR_MiBneadcqdQ_TGm_a0Pz0RbIzqJlRPiyMSxk1eXhmayfPn01upPdVCQj6D3vAY77dpcGplu3p5wE6vsNWAvrQ2d_V1KhR03IB1jJZkYwrI8FHCq_5YuzkPfHsgZ9MBQgH-jqqNXs6r8aoUZIbLsYcMHkin2vzRsMy_tjMCI9yXGiOqI-E5efTb-_KbDVwV5cbdqEIegdtYZ2J3mlrFQlmPGYTwFI8Ba9LleSYbCi4o0k74568KcN_w

Access via NodePort

The Kuboard Service is exposed as a NodePort on 32567:

http://<any worker node IP>:32567/