Deploying a Dual-Master Highly Available K8s Cluster


How the HA works: keepalived's VRRP protocol creates a VIP, kubeadm init uses this VIP's port 6443 as the control-plane endpoint, and the second master then joins the cluster directly in the master (control-plane) role.

You can also put haproxy or nginx in front as a load balancer; a rough sketch of the haproxy variant follows.
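This post sticks with plain keepalived, but if you do want haproxy, a minimal sketch could look like the one below. Everything in it is an assumption rather than part of this deployment: haproxy runs on both masters, listens on 8443, and kubeadm's controlPlaneEndpoint would then point at the VIP's 8443 instead of 6443.

# hypothetical haproxy front end for the two apiservers (not used in the rest of this guide)
yum install haproxy -y
cat <<'EOF' >>/etc/haproxy/haproxy.cfg

frontend k8s-apiserver
    mode tcp
    bind *:8443
    default_backend k8s-masters

backend k8s-masters
    mode tcp
    balance roundrobin
    server master01 192.168.197.130:6443 check
    server master02 192.168.197.131:6443 check
EOF
systemctl enable --now haproxy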

Environment preparation

Software and OS versions:

k8s: 1.23.3
docker: 20.10.21
OS: CentOS 7.8

Four servers:

master01  192.168.197.130
master02  192.168.197.131
node01    192.168.197.132
node02    192.168.197.133

Basic OS tuning: omitted.

Installation

1. keepalived

Run on both master nodes:

yum install keepalived -y

master01 configuration

The VIP is defined as 192.168.197.200.

[root@master01 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
   router_id k-master
   notification_email {                       # mail notification, optional
       wangsiyu@123.com
   }
   notification_email_from wangsiyu@123.com   # mail notification, optional
   smtp_server smtp.pinuc.com                 # mail notification, optional
   smtp_connect_timeout 30                    # mail notification, optional
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"   # health-check script
    interval 3                                    # run the check every 3 seconds
    weight -51                                    # subtract 51 from the priority when the script returns non-zero
}

vrrp_instance VI-k-master {          # instance name
    state MASTER                     # this node is MASTER, must be uppercase
    interface ens33                  # your NIC name
    virtual_router_id 51             # custom id; master and backup must use the same value or you get split-brain
    priority 100                     # priority, higher wins
    advert_int 3                     # advertisement interval
    authentication {
        auth_type PASS               # auth type, PASS or AH
        auth_pass 1234               # auth password
    }
    virtual_ipaddress {
        192.168.197.200              # the VIP; with a single NIC, use an address from its subnet
    }
    track_script {
        check_apiserver              # run the script defined above
    }
}

master02 configuration

[root@master02 keepalived]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id k-backup
   notification_email {
       wangsiyu@pinuc.com
   }
   notification_email_from wangsiyu@pinuc.com
   smtp_server smtp.pinuc.com
   smtp_connect_timeout 30
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -51
}

vrrp_instance VI-k-master {
    state BACKUP                     # this node is the BACKUP
    interface ens33
    virtual_router_id 51             # same id as on master01
    priority 50                      # lower priority than master01
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.197.200
    }
    track_script {
        check_apiserver
    }
}

Health-check script

[root@master01 keepalived]# vim check-apiserver.sh
#!/bin/bash
errorExit() {
    echo "$*" 1>&2
    exit 1
}

# probe the apiserver on this node
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"
# if this node currently holds the VIP, probe the apiserver through the VIP as well
if ip addr | grep -q 192.168.197.200; then
    curl --silent --max-time 2 --insecure https://192.168.197.200:6443/ -o /dev/null || errorExit "Error GET https://192.168.197.200:6443/"
fi
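The post does not show it, but keepalived can only run this check if the script is executable and present on both masters. The following finishes the step and gives the script a quick manual run; it is expected to fail until the apiserver is actually listening on 6443.

chmod +x /etc/keepalived/check-apiserver.sh
scp /etc/keepalived/check-apiserver.sh 192.168.197.131:/etc/keepalived/   # copy to master02, assumes root SSH
bash /etc/keepalived/check-apiserver.sh && echo OK                        # manual test; prints OK only once the apiserver answers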

Start the service:

systemctl start keepalived
systemctl enable keepalived

Check that the service and the VIP are working:

[root@master01 keepalived]# ip a | grep 197.200
    inet 192.168.197.200/32 scope global ens33
[root@master01 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-11-17 15:20:43 CST; 9min ago
  Process: 52236 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 52237 (keepalived)
    Tasks: 3
   Memory: 2.7M
   CGroup: /system.slice/keepalived.service
           ├─52237 /usr/sbin/keepalived -D
           ├─52238 /usr/sbin/keepalived -D
           └─52239 /usr/sbin/keepalived -D
Nov 17 15:20:51 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:51 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:51 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:51 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: VRRP_Instance(VI-k-master) Sending/queueing gratuitous ARPs on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200
Nov 17 15:20:56 master01 Keepalived_vrrp[52239]: Sending gratuitous ARP on ens33 for 192.168.197.200

The output above shows the service is working.
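A quick failover check, not in the original post but following directly from the config above: stop keepalived on master01 and the VIP should move to master02; when master01's keepalived comes back, the VIP should return to it because of its higher priority.

# on master01
systemctl stop keepalived
# on master02: the VIP should now be bound here
ip a | grep 197.200
# on master01: start it again and the VIP moves back
systemctl start keepalived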

2. k8s

The hosts file on all four machines:

/etc/hosts
192.168.197.130 master01
192.168.197.131 master02
192.168.197.132 node01
192.168.197.133 node02
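One way to push the same file from master01 to the other three machines instead of editing each by hand; this is just a convenience and assumes root SSH by IP.

# run on master01 after editing its own /etc/hosts
for ip in 192.168.197.131 192.168.197.132 192.168.197.133; do
    scp /etc/hosts $ip:/etc/hosts
done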

Run the following script on both master nodes:

[root@master01 ~]# cat install_docker_k8s.sh
#!/bin/bash
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install kubeadm-1.23.3 kubectl-1.23.3 kubelet-1.23.3 docker-ce-20.10.12 -y

cp /etc/docker/daemon.json{,.bak$(date +%F)}
cat <<EOF >/etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
# docker version

# "Trick" install of the images: pull them from the Aliyun mirror, then re-tag them
# as k8s.gcr.io so kubeadm finds them locally.
# * To see which component versions kubeadm expects, run: kubeadm config images list
images=(`kubeadm config images list | awk -F '/' '{print $2}' | head -6`)
for img in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$img
done

# coredns lives under an extra path segment (k8s.gcr.io/coredns/coredns:TAG), handle it separately
corednstag=`kubeadm config images list | awk -F 'io' '{print $2}' | tail -1`
coredns=`kubeadm config images list | awk -F '/' '{print $3}' | tail -1`
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns k8s.gcr.io$corednstag
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
systemctl enable kubelet

echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
sysctl -p
swapoff -a
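Two small gaps in the script are worth closing by hand on every machine: the net.bridge.* sysctls only take effect once the br_netfilter module is loaded, and swapoff -a does not survive a reboot. A minimal sketch:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load the module on boot as well
sysctl -p                                                   # re-apply the sysctls now that the module is present
sed -ri 's/^([^#].*\bswap\b.*)/#\1/' /etc/fstab             # comment out the swap entry so swap stays off after reboot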

Run this on both worker nodes (the node script is the same except that kubectl is not installed; alternatively, skip the image-pull step and reuse the master's images with docker save, scp, and docker load; a sketch of that follows the script below).

[root@node01 ~]# cat install_k8s_node.sh
#!/bin/bash
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install kubeadm-1.23.3 kubelet-1.23.3 -y
yum install docker-ce-20.10.12 -y

cp /etc/docker/daemon.json{,.bak_$(date +%F)}
cat <<EOF >/etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
# docker version

# "Trick" install of the images: pull them from the Aliyun mirror and re-tag them as k8s.gcr.io.
# * To see which component versions kubeadm expects, run: kubeadm config images list
images=(`kubeadm config images list | awk -F '/' '{print $2}' | head -6`)
for img in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$img
done

corednstag=`kubeadm config images list | awk -F 'io' '{print $2}' | tail -1`
coredns=`kubeadm config images list | awk -F '/' '{print $3}' | tail -1`
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns k8s.gcr.io$corednstag
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$coredns

cat <<EOF >/etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
systemctl enable kubelet

echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
sysctl -p
swapoff -a
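For the alternative mentioned above (reuse master01's images instead of pulling on every node), a rough sketch could look like this. It assumes root SSH from master01 to the nodes and that you skip the pull loop in the node script.

# on master01: bundle all k8s.gcr.io images and load them on each node
docker save $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^k8s.gcr.io/') -o k8s-images.tar
for n in 192.168.197.132 192.168.197.133; do
    scp k8s-images.tar $n:/root/
    ssh $n 'docker load -i /root/k8s-images.tar'
done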

3. Initialize the cluster

kubeadm-init.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.197.200              # VIP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:                                       # add the two certSANs lines below
  certSANs:
  - "192.168.197.200"                            # VIP address; not needed when there is only one master
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "192.168.197.200:6443"     # VIP address and port
kind: ClusterConfiguration
kubernetesVersion: v1.23.3                       # kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12                    # keep the default
  podSubnet: 10.244.0.0/16                       # add the pod subnet
scheduler: {}

Change the VIP above to your own.
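Optionally (not part of the original steps), you can run just kubeadm's preflight phase against the config first to catch problems such as swap, blocked ports, or missing images before the real init:

kubeadm init phase preflight --config kubeadm-init.yaml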

Run on the primary master:

kubeadm init --config kubeadm-init.yaml --upload-certs
Save the important information from the output:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb15c00d4 \
    --control-plane --certificate-key ef0f358d41e5ead3cc74e183aa2201b1773b605926932170141bd60605c44735
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb16c00d4

There are two join commands above: the first (with --control-plane) is for nodes joining as masters, the second is for worker nodes.
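The full kubeadm init output (trimmed above) also tells you to set up the admin kubeconfig; do that on master01 before running any kubectl commands:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config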

Run on the other master:

kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb15c00d4 \
    --control-plane --certificate-key ef0f358d41e5ead3cc74e183aa2201b1773b605926932170141bd60605c44735

Run on both worker nodes:

kubeadm join 192.168.197.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7db5b9c52573ad36507784bd1fc2e5efe975f9c9dfdffa972310b4ecb16c00d4

4. Deploy the network

It only needs to be applied on master01.

The network plugin is flannel; I took the manifest from GitHub, so you can just copy it.

[root@master01 ~]# cat flannel.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: rancher/mirrored-flannelcni-flannel:v0.16.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
kubectl apply -f flannel.yaml
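Before moving on you can watch the flannel DaemonSet roll out; the names come from the manifest above, and pod names will differ in your cluster:

kubectl -n kube-system get daemonset kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide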

If pulling the images is too slow, don't worry, I have packaged them up and shared them:

Link: https://pan.baidu.com/s/1Hb-DU5gAKHfkVDbTOde0nQ   Extraction code: 1212

How to use: after downloading, upload the tarballs to the server with rz, then:

docker load <cni.tar

docker load <cni-flannel.tar

PS: every node needs the network plugin images, so run this on all of them!

Then check on master01:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   163m   v1.23.3
master02   Ready    control-plane,master   161m   v1.23.3
node01     Ready    <none>                 156m   v1.23.3
node02     Ready    <none>                 159m   v1.23.3
[root@master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-65c54cc984-6cwrx            1/1     Running   0          171m
kube-system   coredns-65c54cc984-bb5fn            1/1     Running   0          171m
kube-system   etcd-master01                       1/1     Running   0          171m
kube-system   etcd-master02                       1/1     Running   0          170m
kube-system   kube-apiserver-master01             1/1     Running   0          171m
kube-system   kube-apiserver-master02             1/1     Running   0          170m
kube-system   kube-controller-manager-master01    1/1     Running   0          171m
kube-system   kube-controller-manager-master02    1/1     Running   0          170m
kube-system   kube-proxy-7xkc2                    1/1     Running   0          171m
kube-system   kube-proxy-bk82v                    1/1     Running   0          165m
kube-system   kube-proxy-kvf2p                    1/1     Running   0          167m
kube-system   kube-proxy-wwlk5                    1/1     Running   0          170m
kube-system   kube-scheduler-master01             1/1     Running   0          171m
kube-system   kube-scheduler-master02             1/1     Running   0          170m

 

For a web UI afterwards you can use Rancher, KubeSphere, or kubernetes-dashboard; pick whichever you like.
