K8s high-availability deployment

High availability is provided by 3 master nodes fronted by nginx + keepalived.

1. System initialization (run on all 3 master nodes)

1) Disable swap
swapoff -a
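Note that swapoff -a only lasts until reboot; to keep swap off permanently, also comment the swap entry out of /etc/fstab:
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comments out every fstab line that mentions swap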
2) Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
3) Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
4) Set kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf 
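The net.bridge.* keys above only exist while the br_netfilter kernel module is loaded; if sysctl reports them as unknown, load the module (and persist it across reboots), then re-run sysctl:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf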

2. Configure the yum repo (run on all 3 master nodes)

cat >/etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3. Clock synchronization (run on all 3 master nodes)

yum install chrony -y -q
systemctl start chronyd 
systemctl enable chronyd 
timedatectl set-timezone Asia/Shanghai 
timedatectl set-ntp yes
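Confirm that the clock is actually syncing:

chronyc sources -v
timedatectl status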

4. Set hostnames and add hosts entries (run on all 3 master nodes)

Set each master's hostname (run the matching command on its host):
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03

Then run on all 3 hosts (append, so the existing localhost entries are kept):
cat >> /etc/hosts << EOF
192.168.0.1    k8s-master01
192.168.0.2    k8s-master02
192.168.0.3    k8s-master03
EOF

5. Install Docker (run on all 3 master nodes)

yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine


yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    
yum install -y docker-ce-20.10.8 docker-ce-cli-20.10.8 containerd.io

# Configure Docker (replace /data/docker/lib with your data-disk directory;
# JSON does not allow inline comments, so the file must contain only the keys)
cat > /etc/docker/daemon.json << EOF
{
    "data-root": "/data/docker/lib"
}
EOF

mkdir -p /data/docker/lib
systemctl enable docker
systemctl start docker
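kubeadm's documentation recommends the systemd cgroup driver for the container runtime; if you adopt that (a suggested addition, not part of the original steps, and one to carry into the later registry edit too), daemon.json gains one more key:

{
    "data-root": "/data/docker/lib",
    "exec-opts": ["native.cgroupdriver=systemd"]
}

Either way, confirm the settings took effect:

docker info | grep -E 'Docker Root Dir|Cgroup Driver'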

6. Deploy a private Docker image registry (deploying on one of the masters is enough)

mkdir -p /data/docker/registry

docker run -d \
  --restart=always \
  -p 5000:5000 \
  --name registry \
  -v /data/docker/registry:/var/lib/registry \
  registry:2
  
Add the registry to /etc/docker/daemon.json on all 3 masters (the entry must include the :5000 port, matching how clients address the registry):

{
    "data-root": "/data/docker/lib",
    "insecure-registries" : ["k8s-registry.com:5000"]
}

Add a hosts entry on all 3 masters pointing k8s-registry.com at the registry server:
<registry-server-ip>    k8s-registry.com

systemctl restart docker

Verify:
docker pull busybox
docker tag busybox k8s-registry.com:5000/busybox
docker push k8s-registry.com:5000/busybox
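The registry's HTTP API gives a quick confirmation that the push landed (assuming the hosts entry above resolves k8s-registry.com):

curl http://k8s-registry.com:5000/v2/_catalog
# expected to list the pushed image, e.g. {"repositories":["busybox"]}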
  

7. keepalived + nginx

Plan a VIP and a domain name for the k8s master API.

nginx proxies TCP traffic, load-balancing across the 3 master nodes.

A standard keepalived configuration is sufficient.

k8s.conf (nginx TCP proxy; the upstream/server below must sit inside a top-level stream {} block, which requires the nginx stream module):

stream {
    upstream k8s_master {
        server 192.168.0.1:6443;
        server 192.168.0.2:6443;
        server 192.168.0.3:6443;
    }

    server {
        listen 16443;
        proxy_connect_timeout 20s;
        proxy_timeout 5m;
        proxy_pass k8s_master;
    }
}

keepalived.conf:
! Configuration File for keepalived

global_defs {

    router_id k8s-001
    script_user root
    enable_script_security
}


## Health-check definition; the name must match the entry in track_script below
vrrp_script ha_switch {
    ## check script to execute
    script "/data/HA/sh/ha_switch.sh"
    interval 3
    weight -5
    fall 2
    rise 1
}


vrrp_instance VI_1 {
    ## initial keepalived role: MASTER = primary, BACKUP = standby
    state BACKUP
    ## non-preemptive mode: set state to BACKUP on both nodes to avoid needless failback
    nopreempt
    interface bond0
    virtual_router_id 210
    priority 100
    advert_int 1

    authentication {
        auth_type PASS
        ## suggestion: use the last two octets of the VIP, each zero-padded to 3 digits
        auth_pass 00100
    }

    ## VRRP virtual address; add further VIPs on separate lines if needed
    virtual_ipaddress {
        192.168.0.100
    }

    # notification script triggered when this node becomes master
    notify_master "/data/HA/sh/notify.py master"
    # notification script triggered when this node becomes backup
    notify_backup "/data/HA/sh/notify.py backup"
    # notification script triggered when this node enters the fault state
    notify_fault  "/data/HA/sh/notify.py fault"

    ## health checks to run
    track_script {
        # reference the vrrp_script defined above
        ha_switch
    }
}
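The check script /data/HA/sh/ha_switch.sh referenced above is not included in the original; a minimal sketch, assuming nginx is the local proxy and 16443 its listen port (adjust to your environment):

#!/bin/bash
# ha_switch.sh -- health check invoked by keepalived's vrrp_script.
# Exit non-zero so keepalived applies the weight -5 penalty when the
# local nginx stream proxy is no longer listening on 16443.
if ! ss -lnt | grep -q ':16443 '; then
    systemctl restart nginx     # attempt one restart before failing the check
    sleep 2
    ss -lnt | grep -q ':16443 ' || exit 1
fi
exit 0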

8. Install Kubernetes

1) Install the packages
yum install -y kubelet-1.21.0-0 kubeadm-1.21.0-0 kubectl-1.21.0-0

systemctl enable kubelet

2) Pull the images k8s depends on
image.sh:

#!/bin/bash
 
# Adjust the image tags below to match the output of `kubeadm config images list`.
# e.g. if it reports k8s.gcr.io/kube-apiserver:v1.21.7, change kube-apiserver:v1.21.1 to kube-apiserver:v1.21.7
images=(
    kube-scheduler:v1.21.1
    kube-proxy:v1.21.1
    kube-controller-manager:v1.21.1
    etcd:3.4.13-0
    kube-apiserver:v1.21.1
    pause:3.4.1)
 
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
 
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi coredns/coredns:1.8.0
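After the script finishes, the retagged images should all be local:

docker images | grep k8s.gcr.io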

3) Create the init config file
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  #master node ip
  advertiseAddress: 10.6.122.5
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  #master node hostname
  name: k8s-master01

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  #master vip and domain
  certSANs:
  - 10.6.122.210
  - "k8s-master.com"
clusterName: kubernetes
controlPlaneEndpoint: "k8s-master.com:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /data/k8s-etcd
#imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.21.1
scheduler: {}
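Before initializing, you can check that the file parses and list the exact images it will use:

kubeadm config images list --config kubeadm-config.yaml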


4) Run the initialization
kubeadm init --config ./kubeadm-config.yaml  --upload-certs
Then follow the printed instructions and record the join commands. The output looks like:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-master.com:16443 --token 3qctog.36nbjvci60ib9vfm \
        --discovery-token-ca-cert-hash sha256:b1038318d5f36e663d29239dd92c5d829fa21db332e0ea84d6349b41641dbaee \
        --control-plane --certificate-key 18e365cba6d888e5ec064bf3cc86e347859b2c5e47c4afd144db0dcd99387eea

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master.com:16443 --token 3qctog.36nbjvci60ib9vfm \
        --discovery-token-ca-cert-hash sha256:b1038318d5f36e663d29239dd92c5d829fa21db332e0ea84d6349b41641dbaee
        
 
5) Install the Calico network plugin
curl https://docs.projectcalico.org/archive/v3.20/manifests/calico.yaml -O
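Calico's manifest defaults the pod CIDR to 192.168.0.0/16; if that overlaps your node network (as the 192.168.0.x addresses used earlier would), uncomment and change CALICO_IPV4POOL_CIDR in calico.yaml before applying, e.g.:

            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"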
kubectl apply -f calico.yaml

curl -L -o calicoctl "https://github.com/projectcalico/calicoctl/releases/download/v3.19.1/calicoctl"
chmod +x calicoctl
mv calicoctl /usr/local/bin
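calicoctl needs to be told how to reach the datastore; with the Kubernetes API datastore this can be done via environment variables. A quick sanity check (assuming admin credentials in ~/.kube/config):

DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes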

Once all of the system pods are running normally, execute the join on k8s-master02 and k8s-master03:

kubeadm join k8s-master.com:16443 ...   (the full command, including --control-plane and --certificate-key, is shown in the init output above)
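Finally, confirm that all three control-plane nodes register and go Ready:

kubectl get nodes
kubectl get pods -n kube-system -o wide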