
1. Environment preparation

K8s cluster role | IP | Hostname | Installed components | Kubernetes version
Control node | 192.168.10.20 | master | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico | 1.28.2
Control node | 192.168.10.21 | master2 | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico | 1.28.2
Control node | 192.168.10.22 | master3 | apiserver, controller-manager, scheduler, kubelet, etcd, docker, kube-proxy, keepalived, nginx, calico | 1.28.2
Worker node | 192.168.10.24 | node1 | kubelet, kube-proxy, docker, calico, coredns | 1.28.2
VIP | 192.168.10.19 | master, master2, master3 | nginx, keepalived | -

Kubernetes documentation: https://kubernetes.io/zh-cn/docs

GitHub releases: https://github.com/kubernetes/kubernetes/releases

1.1 Server environment initialization

# Required on both control-plane and worker nodes
# 1. Set the hostname (use the matching name on each host)
hostnamectl set-hostname master && bash

# 2. Add hosts entries
vim /etc/hosts
192.168.10.20 master
192.168.10.21 master2
192.168.10.22 master3
192.168.10.24 node1
192.168.10.25 node2

# 3. Set up SSH trust between the master nodes (repeat for each master pair)
ssh-keygen -t rsa
ssh-copy-id master2

# 4. Disable swap
swapoff -a  # temporary, until reboot
# To disable it permanently, comment out the swap line in /etc/fstab
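A one-liner that comments out the swap entry (a sketch; check /etc/fstab afterwards, since the exact swap entry varies by system):

sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
grep swap /etc/fstab    # the swap line should now start with '#'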

# 5. Adjust kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf
Reference: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/implementation-details/
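A common alternative for persisting the module load across reboots (a sketch) is a modules-load.d entry instead of /etc/profile:

cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF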

# 6. Disable the firewall
systemctl stop firewalld ; systemctl disable firewalld

# 7. Disable SELinux; reboot after changing the config file
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
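To avoid waiting for a reboot, SELinux can also be switched to permissive for the current session (a small sketch; the config change above still applies at the next boot):

setenforce 0   # permissive until reboot
getenforce     # should now report Permissive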


# 8. Configure Aliyun yum repositories
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast

# 9. Configure the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Installing the latest version: https://kubernetes.io/zh-cn/docs/tasks/tools/install-kubectl-linux/#install-using-native-package-management
The version used here: https://v1-28.docs.kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

# 10. Time synchronization, plus a periodic sync job
yum install ntpdate -y
ntpdate time1.aliyun.com
# crontab entry for the periodic sync:
* */1 * * * /usr/sbin/ntpdate time1.aliyun.com
systemctl restart crond

2. Installing base software packages

# 1. Install base software packages
yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

# 2. Stop the iptables service and disable it at boot
service iptables stop && systemctl disable iptables
# 3. Flush existing rules
iptables -F

2.1 Installing and configuring containerd

# 1. Install containerd
yum -y install containerd

# 2. Generate the default containerd configuration
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# 3. Edit the configuration
vim /etc/containerd/config.toml
SystemdCgroup = true   # change false to true
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"   # if the tag is unclear, check it later with kubeadm config images list --config=kubeadm.yaml and adjust

# 4. Enable at boot and start now
systemctl enable containerd --now

# 5. Create /etc/crictl.yaml
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl restart containerd

# 6. Configure registry mirrors
# Edit /etc/containerd/config.toml and set
config_path = "/etc/containerd/certs.d"

mkdir /etc/containerd/certs.d/docker.io/ -p
vim /etc/containerd/certs.d/docker.io/hosts.toml
[host."https://pft7f97f.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull"]

systemctl restart containerd
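A quick sanity check that crictl can reach the runtime and that pulls go through the configured mirrors (a sketch; busybox is just an example image):

crictl info > /dev/null && echo "containerd endpoint OK"
crictl pull docker.io/library/busybox:1.28
crictl images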

3. Installing and configuring Kubernetes

3.1 Installing the required k8s packages

# 1. Install the k8s packages (required on both master and node)
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2
systemctl enable kubelet

Note: what each package does
kubeadm: the tool used to bootstrap (initialize) the k8s cluster
kubelet: installed on every node in the cluster; it starts Pods. With a kubeadm install, both control-plane and worker components run as Pods, so any node that runs Pods needs the kubelet
kubectl: used to deploy and manage applications, inspect resources, and create, delete and update components

3.2 kube-apiserver high availability with keepalived + nginx

# 1. Install nginx and its stream module
yum install nginx nginx-mod-stream -y

# 2. Edit the nginx configuration
vim /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream { 
 
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent'; 
 
    access_log /var/log/nginx/k8s-access.log main; 
 
    upstream k8s-apiserver { 
            server 192.168.10.20:6443 weight=5 max_fails=3 fail_timeout=30s;   
            server 192.168.10.21:6443 weight=5 max_fails=3 fail_timeout=30s;
            server 192.168.10.22:6443 weight=5 max_fails=3 fail_timeout=30s;   
 
    }
    server { 
        listen 16443; # nginx shares these hosts with the apiserver, so this port must not be 6443 or it would conflict
        proxy_pass k8s-apiserver; 
    }

}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }


}
# 1. Install keepalived on all three masters
yum install -y keepalived

# 2. Configure keepalived.conf
# master
[root@master nginx]# cat /etc/keepalived/keepalived.conf 
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33        # actual NIC name
    virtual_router_id 51   # VRRP router ID; unique per VRRP instance
    priority 100   # priority; set to 90 on the backup server
    advert_int 1   # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }

# Virtual IP (VIP)
    virtual_ipaddress {
        192.168.10.19/24
    }
    track_script {
        check_nginx
    }
}

# master2
[root@master2 k8s]# cat /etc/keepalived/keepalived.conf 
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33        # actual NIC name
    virtual_router_id 51   # VRRP router ID; unique per VRRP instance
    priority 90   # priority; lower than master (100)
    advert_int 1   # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }

# Virtual IP (VIP)
    virtual_ipaddress {
        192.168.10.19/24
    }
    track_script {
        check_nginx
    }
}

# master3
[root@master3 k8s]# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP2
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33        # actual NIC name
    virtual_router_id 51   # VRRP router ID; unique per VRRP instance
    priority 80   # priority; lower than master (100) and master2 (90)
    advert_int 1   # VRRP advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }

# Virtual IP (VIP)
    virtual_ipaddress {
        192.168.10.19/24
    }
    track_script {
        check_nginx
    }
}

# /etc/keepalived/check_nginx.sh -- the health-check script
[root@master3 k8s]# cat /etc/keepalived/check_nginx.sh 
#!/bin/bash
# Count running nginx processes; if nginx is down, stop keepalived so the VIP fails over
count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
  systemctl stop keepalived
fi
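A slightly more defensive variant (a sketch, not part of the original setup) first tries to restart nginx and only releases the VIP if that fails:

#!/bin/bash
# alternative check_nginx.sh sketch
count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ "$count" -eq 0 ]; then
  systemctl restart nginx
  sleep 2
  count=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
  if [ "$count" -eq 0 ]; then
    systemctl stop keepalived   # give up the VIP so a healthy node takes over
  fi
fi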
# Start the services
systemctl daemon-reload
systemctl start nginx && systemctl enable nginx && systemctl status nginx
systemctl start keepalived && systemctl enable keepalived && systemctl status keepalived

# Check the VIP on master
[root@master nginx]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e7:2d:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.20/24 brd 192.168.10.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.10.19/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c94e:2729:9c6d:7fee/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
Test: stop nginx on master and the VIP 192.168.10.19 fails over to master2; after nginx and keepalived are restarted on master, the VIP moves back to master.
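The failover can be exercised roughly like this (a sketch; run each command on the host indicated):

# on master: simulate an nginx failure
systemctl stop nginx                         # check_nginx.sh then stops keepalived, releasing the VIP

# on master2: the VIP should show up within a few seconds
ip addr show ens33 | grep 192.168.10.19

# on master: recover; the higher priority (100) takes the VIP back
systemctl start nginx && systemctl start keepalived
ip addr show ens33 | grep 192.168.10.19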

3.3 Generating the kubeadm init configuration file

Reference: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/

# 1. Set the container runtime endpoint (master and node)
crictl config runtime-endpoint unix:///run/containerd/containerd.sock

# 2. Generate a default init configuration on master; the cluster will be initialized from this file
kubeadm config print init-defaults > kubeadm.yaml

Reference: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-print-init-defaults  # search the official docs for "kubeadm config"

3.4 Editing the init configuration file kubeadm.yaml

[root@master k8s]# cat kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
#localAPIEndpoint:
#  advertiseAddress: 1.2.3.4
#  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
#  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# Use the Aliyun image repository and pin the k8s version
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.2
controlPlaneEndpoint: 192.168.10.19:16443  # added: the VIP load-balancer endpoint
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  # pod network CIDR
scheduler: {}
# Appended:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

# References
Configuring the cgroup driver:
https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
Configuring ipvs mode: https://kubernetes.io/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConfiguration
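Before initializing, the merged configuration can be sanity-checked (a sketch; the validate subcommand is available in recent kubeadm releases):

kubeadm config validate --config kubeadm.yaml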

3.5 Pulling the images required by the cluster

# List the images that will be pulled: kubeadm config images list
Reference: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-images-list

# List the required images
[root@master k8s]# kubeadm config images list --config=kubeadm.yaml 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.2
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1


# Pull the images
[root@master k8s]# kubeadm config images pull --config=kubeadm.yaml 
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.28.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1

[root@master k8s]# crictl images
IMAGE                                                                         TAG                 IMAGE ID            SIZE
registry.aliyuncs.com/google_containers/pause                                 3.7                 221177c6082a8       311kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.10.1             ead0a4a53df89       16.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.9-0             73deb9a3f7025       103MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.28.2             cdcab12b2dd16       34.7MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.28.2             55f13c92defb1       33.4MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.28.2             c120fed2beb84       24.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.28.2             7a5d9d67a13f6       18.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.9                 e6f1816883972       322kB

3.6 Initializing the cluster

Reference: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/

[root@master k8s]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

Output:
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0110 13:17:41.401915  125012 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.7" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.10.20 192.168.10.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.10.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.10.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0110 13:17:42.862748  125012 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0110 13:17:43.024918  125012 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
W0110 13:17:43.397812  125012 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0110 13:17:43.594228  125012 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.531263 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
W0110 13:18:19.253381  125012 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.10.19:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3d2052ebcdc58cce07aeb55f9e5987d8d406e3b0d0370299283cdb4fdc216eeb \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.19:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3d2052ebcdc58cce07aeb55f9e5987d8d406e3b0d0370299283cdb4fdc216eeb
# Configure kubectl on master, as instructed by the output above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


[root@master k8s]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   2m12s   v1.28.2

3.7 Scaling the cluster: adding more control-plane nodes

# 1. Pull the images on the other masters
# Copy kubeadm.yaml to master2 and master3, then pre-pull the required images
kubeadm config images pull --config=kubeadm.yaml

# 2. Copy the certificates from master to the other control-plane nodes
mkdir -p /etc/kubernetes/pki/etcd/    # run this on master2 and master3 first so the scp targets exist

scp /etc/kubernetes/pki/ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.* master3:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* master3:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.* master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* master3:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.* master3:/etc/kubernetes/pki/etcd/
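The copies above can also be done in one small loop (a sketch, assuming the SSH trust set up in section 1.1 is in place):

for host in master2 master3; do
  ssh $host "mkdir -p /etc/kubernetes/pki/etcd"
  scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* $host:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* $host:/etc/kubernetes/pki/etcd/
done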

# 3. Generate a join token on the primary master
Reference: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-token/

[root@master etcd]# kubeadm token create --print-join-command
kubeadm join 192.168.10.19:16443 --token fnt20r.1a2vs4f82dvy2lgr --discovery-token-ca-cert-hash sha256:3d2052ebcdc58cce07aeb55f9e5987d8d406e3b0d0370299283cdb4fdc216eeb

# 4. Join master2 and master3 to the cluster as control-plane nodes
kubeadm join 192.168.10.19:16443 --token fnt20r.1a2vs4f82dvy2lgr --discovery-token-ca-cert-hash sha256:3d2052ebcdc58cce07aeb55f9e5987d8d406e3b0d0370299283cdb4fdc216eeb --control-plane

Success message: Run 'kubectl get nodes' to see this node join the cluster.
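As an alternative to copying certificates by hand, kubeadm can distribute them itself; a sketch of the standard upload-certs flow (not the method used above; token, hash and key are placeholders):

# on the first master: upload the control-plane certificates and print the decryption key
kubeadm init phase upload-certs --upload-certs

# on the joining master: pass the printed key so the certificates are fetched automatically
kubeadm join 192.168.10.19:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key-from-upload-certs>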

# 5. Run on master2 and master3
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config


# 6. Verify
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE   VERSION
master    NotReady   control-plane   97m   v1.28.2
master2   NotReady   control-plane   85m   v1.28.2
master3   NotReady   control-plane   84m   v1.28.2

3.8 Adding worker nodes to the cluster

# 1. Join node1 to the cluster as a worker node

[root@node1 containerd]# kubeadm join 192.168.10.19:16443 --token a8103q.ynglyjrjruhbzzzh --discovery-token-ca-cert-hash sha256:3d2052ebcdc58cce07aeb55f9e5987d8d406e3b0d0370299283cdb4fdc216eeb

Success message: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check from any master node
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE    VERSION
master    NotReady   control-plane   109m   v1.28.2
master2   NotReady   control-plane   97m    v1.28.2
master3   NotReady   control-plane   96m    v1.28.2
node1     NotReady   <none>          67s    v1.28.2

# 2. Set the ROLES label on the worker node
[root@master k8s]# kubectl label node node1 node-role.kubernetes.io/worker=worker
node/node1 labeled
[root@master k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
master    NotReady   control-plane   110m    v1.28.2
master2   NotReady   control-plane   98m     v1.28.2
master3   NotReady   control-plane   97m     v1.28.2
node1     NotReady   worker          2m48s   v1.28.2

4. Installing the Calico network plugin

Check which Kubernetes versions Calico supports: https://docs.tigera.io/calico/3.26/getting-started/kubernetes/requirements

Download the calico.yaml manifest: https://docs.tigera.io/calico/3.26/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico

Direct manifest URL: https://docs.projectcalico.org/manifests/calico.yaml  # sized for clusters of up to 50 nodes by default

# Add the IP_AUTODETECTION_METHOD variable to calico.yaml to pin the node interface
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Pin IP autodetection to the host NIC
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"
[root@master2 k8s]# kubectl apply -f calico.yaml 
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
# Startup time depends on server resources; on low-spec machines it takes a while. Pods whose READY column is still 0/1 are still initializing.
[root@master k8s]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS              RESTARTS       AGE    IP              NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-7ddc4f45bc-76zdb   0/1     ContainerCreating   0              15m    <none>          master3   <none>           <none>
calico-node-c56kn                          1/1     Running             0              15m    192.168.10.22   master3   <none>           <none>
calico-node-ljx2h                          0/1     Init:2/3            0              15m    192.168.10.21   master2   <none>           <none>
calico-node-nw8hw                          0/1     Init:0/3            0              15m    192.168.10.24   node1     <none>           <none>
calico-node-s6shp                          0/1     Init:0/3            0              15m    192.168.10.20   master    <none>           <none>
coredns-6554b8b87f-ccvtm                   1/1     Running             0              146m   10.244.136.1    master3   <none>           <none>
coredns-6554b8b87f-cjtsk                   1/1     Running             0              146m   10.244.136.3    master3   <none>           <none>
etcd-master                                1/1     Running             3              146m   192.168.10.20   master    <none>           <none>
etcd-master2                               1/1     Running             0              135m   192.168.10.21   master2   <none>           <none>
etcd-master3                               1/1     Running             0              134m   192.168.10.22   master3   <none>           <none>
kube-apiserver-master                      1/1     Running             3              146m   192.168.10.20   master    <none>           <none>
kube-apiserver-master2                     1/1     Running             0              134m   192.168.10.21   master2   <none>           <none>
kube-apiserver-master3                     1/1     Running             0              134m   192.168.10.22   master3   <none>           <none>
kube-controller-manager-master             1/1     Running             4 (134m ago)   146m   192.168.10.20   master    <none>           <none>
kube-controller-manager-master2            1/1     Running             0              134m   192.168.10.21   master2   <none>           <none>
kube-controller-manager-master3            1/1     Running             0              134m   192.168.10.22   master3   <none>           <none>
kube-proxy-5pn87                           1/1     Running             0              135m   192.168.10.21   master2   <none>           <none>
kube-proxy-mwtxw                           1/1     Running             0              146m   192.168.10.20   master    <none>           <none>
kube-proxy-phdlz                           1/1     Running             0              134m   192.168.10.22   master3   <none>           <none>
kube-proxy-xb2z6                           1/1     Running             0              39m    192.168.10.24   node1     <none>           <none>
kube-scheduler-master                      1/1     Running             4 (134m ago)   146m   192.168.10.20   master    <none>           <none>
kube-scheduler-master2                     1/1     Running             0              134m   192.168.10.21   master2   <none>           <none>
kube-scheduler-master3                     1/1     Running             0              134m   192.168.10.22   master3   <none>           <none>

[root@master k8s]# kubectl get nodes
NAME      STATUS   ROLES           AGE    VERSION
master    Ready    control-plane   160m   v1.28.2
master2   Ready    control-plane   148m   v1.28.2
master3   Ready    control-plane   147m   v1.28.2
node1     Ready    worker          52m    v1.28.2

4.1 Testing pod networking and CoreDNS resolution

# Test network connectivity and DNS resolution
# Import the busybox image on the worker node
[root@node1 ~]# ctr -n=k8s.io images import busybox-1-28.tar.gz 
unpacking docker.io/library/busybox:1.28 (sha256:585093da3a716161ec2b2595011051a90d2f089bc2a25b4a34a18e2cf542527c)...done


# On a master node:
[root@master ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (39.156.66.10): 56 data bytes
64 bytes from 39.156.66.10: seq=0 ttl=127 time=31.370 ms
64 bytes from 39.156.66.10: seq=1 ttl=127 time=31.079 ms
64 bytes from 39.156.66.10: seq=2 ttl=127 time=31.162 ms
64 bytes from 39.156.66.10: seq=3 ttl=127 time=29.614 ms
^C
--- baidu.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 29.614/30.806/31.370 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # exit
pod "busybox" deleted

5. Configuring etcd for high availability

# Edit etcd.yaml on master, master2 and master3
vim /etc/kubernetes/manifests/etcd.yaml
Change
- --initial-cluster=master=https://192.168.10.20:2380
to
- --initial-cluster=master=https://192.168.10.20:2380,master2=https://192.168.10.21:2380,master3=https://192.168.10.22:2380
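etcd runs as a static pod, so the kubelet restarts it automatically when the manifest changes; a quick check that every control-plane node now lists all three members (a sketch):

grep initial-cluster /etc/kubernetes/manifests/etcd.yaml
# expect master, master2 and master3 to all appear in the --initial-cluster line on each node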

5.1 Verifying the etcd cluster

# etcdctl download: https://github.com/etcd-io/etcd/releases

cd etcd-v3.5.9-linux-amd64
cp etcd* /usr/local/bin

[root@master ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list
a2f7e7fa1563203c, started, master3, https://192.168.10.22:2380, https://192.168.10.22:2379, false
b35a9a1be9d15d2b, started, master2, https://192.168.10.21:2380, https://192.168.10.21:2379, false
be3fc3d5e1dfe2ce, started, master, https://192.168.10.20:2380, https://192.168.10.20:2379, false

Or:
[root@master ~]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list
a2f7e7fa1563203c, started, master3, https://192.168.10.22:2380, https://192.168.10.22:2379, false
b35a9a1be9d15d2b, started, master2, https://192.168.10.21:2380, https://192.168.10.21:2379, false
be3fc3d5e1dfe2ce, started, master, https://192.168.10.20:2380, https://192.168.10.20:2379, false

[root@master ~]# etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://192.168.10.20:2379,https://192.168.10.21:2379,https://192.168.10.22:2379 endpoint status --cluster
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.10.22:2379 | a2f7e7fa1563203c |   3.5.9 |  3.3 MB |      true |      false |         5 |      38255 |              38255 |        |
| https://192.168.10.21:2379 | b35a9a1be9d15d2b |   3.5.9 |  3.3 MB |     false |      false |         5 |      38255 |              38255 |        |
| https://192.168.10.20:2379 | be3fc3d5e1dfe2ce |   3.5.9 |  3.3 MB |     false |      false |         5 |      38255 |              38255 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
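Member health can be checked the same way (a sketch using the same certificates):

etcdctl -w table --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key \
  --endpoints=https://192.168.10.20:2379,https://192.168.10.21:2379,https://192.168.10.22:2379 \
  endpoint health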

6. Simulating a control-plane node failure and recovering quickly

Problem: the k8s cluster has 3 control-plane nodes and 1 worker node. One control-plane node (master) failed and was shut down; it could not be repaired, so it was removed with kubectl delete nodes master. The machine has since been fixed and re-racked, and we want to add it back to the cluster as a control-plane node. How?
Solution: https://www.cnblogs.com/yangmeichong/p/16464574.html
# The commands are the same regardless of version

[root@master ~]# etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list

[root@master ~]# ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove a2f7e7fa1563203c
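The overall recovery flow is roughly as follows (an outline only; the details are in the linked post). The member ID comes from the member list output above, and the angle-bracket values are placeholders:

# 1. On a healthy master: remove the stale etcd member and delete the node object
etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/peer.crt \
        --key /etc/kubernetes/pki/etcd/peer.key member remove <member-id>
kubectl delete node master

# 2. On the repaired machine: wipe the old kubeadm/etcd state
kubeadm reset -f

# 3. Copy the CA/sa/front-proxy/etcd certificates over again (as in section 3.7),
#    generate a fresh token on a healthy master, then re-join as a control-plane node
kubeadm token create --print-join-command
kubeadm join 192.168.10.19:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --control-plane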

7. Extending certificate validity

https://www.cnblogs.com/yangmeichong/p/16463112.html
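Current expiry dates can be checked with kubeadm before renewing (a sketch; the renew command below is the stock kubeadm one, not necessarily the procedure in the linked post):

kubeadm certs check-expiration
# kubeadm certs renew all   # renews all certificates signed by the cluster CA; restart the control-plane static pods afterwards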
