I. System environment preparation
1. System environment description

OS          Role              IP                Components                           K8s version
CentOS 7.9  kubeadm-master1   192.168.100.41    docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-master2   192.168.100.42    docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-master3   192.168.100.43    docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-node1     192.168.100.44    docker, kubeadm, kubelet, kubectl    v1.20.0
CentOS 7.9  kubeadm-node2     192.168.100.45    docker, kubeadm, kubelet, kubectl    v1.20.0
            VIP               192.168.100.46    the VIP is used as the endpoint for kubeadm master initialization
2. Initial environment configuration
2.1 Disable the firewall
systemctl stop firewalld
systemctl disable --now firewalld
2.2 Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
2.3 Disable the swap partition
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab
2.4 Time synchronization
echo "5 * * * * ntpdate ntp1.aliyun.com" > /var/spool/cron/root
/usr/sbin/ntpdate ntp.aliyun.com
hwclock --systohc
2.5 Pass bridged IPv4 traffic to the iptables chains
lsmod | grep br_netfilter
modprobe br_netfilter
# Then run the following command
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system
2.6 Replace the CentOS 7 yum repositories
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo https:
wget -O /etc/yum.repos.d/epel.repo https:
yum clean all
yum makecache
2.7 Configure the Docker and Kubernetes repositories
wget https:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https:
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https:
EOF
yum makecache fast
2.8 Set the hostname on all nodes and configure hosts resolution
hostnamectl set-hostname <hostname>
cat > /etc/hosts << EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.41 kubeadm-master1
192.168.100.42 kubeadm-master2
192.168.100.43 kubeadm-master3
192.168.100.44 kubeadm-node1
192.168.100.45 kubeadm-node2
EOF
2.9 Configure passwordless SSH login
yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.100.41 192.168.100.42 192.168.100.43 192.168.100.44 192.168.100.45"
export SSHPASS=086530
for HOST in $IP;do
sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
# This script installs the sshpass tool on one machine and uses it to copy the local SSH public key to several remote hosts, so that SSH logins no longer require typing a password.
# Details:
# 1. `yum install -y sshpass` (or `apt install -y sshpass` on Debian-based systems): installs sshpass via the package manager so the sshpass command is available.
# 2. `ssh-keygen -f /root/.ssh/id_rsa -P ''`: generates an SSH key pair. It creates the private key id_rsa and the public key id_rsa.pub under /root/.ssh without a passphrase (-P is empty), so ssh-copy-id can later distribute the public key non-interactively.
# 3. `export IP="192.168.100.41 192.168.100.42 192.168.100.43 192.168.100.44 192.168.100.45"`: sets the environment variable IP to the space-separated addresses of the remote hosts that should receive the public key.
# 4. `export SSHPASS=086530`: sets the SSHPASS environment variable to the SSH password that sshpass should use (086530 in this example), so sshpass can log in automatically.
# 5. `for HOST in $IP;do`: iterates over every address in IP, assigning the current address to HOST.
# 6. `sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST`: copies the local public key to the remote host. -e tells sshpass to read the password from the SSHPASS environment variable, and -o StrictHostKeyChecking=no skips the interactive host key confirmation.
# With this script, the local SSH public key is distributed to multiple remote hosts, enabling passwordless SSH logins.
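A quick way to confirm that the key distribution worked is to run a command over SSH on each host; if no password prompt appears, passwordless login is in place (this check is an optional addition, reusing the IP variable from above):
for HOST in $IP; do
    ssh -o BatchMode=yes $HOST hostname    # BatchMode fails instead of prompting if key auth is broken
done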
II. Install docker, kubeadm, kubelet and kubectl on all nodes
Kubernetes is deployed by installing the packages with yum.
Kubernetes needs a container runtime behind the CRI; this example uses the Docker container runtime.
Container runtime installation reference: https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/
Related downloads:
Link: https://pan.baidu.com/s/1DB95Izwn54u8Za4tjNBdWA  Extraction code: 4yxs
yum list kubeadm --showduplicates | sort -r
Install the specified versions:
yum install -y containerd.io-1.2.13 docker-ce-19.03.11 docker-ce-cli-19.03.11 kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0 kubernetes-cni-0.8.6-0.x86_64
# Restart and enable Docker
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
# Configure the registry mirror and set the Cgroup Driver to systemd
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart docker
systemctl restart docker
# Verify that the Cgroup driver is systemd
docker info | grep "Cgroup Driver"
systemctl enable --now kubelet
docker --version
Docker version 19.03.11, build 42e35e61f3
kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:57:36Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
III. Set up high availability on all master nodes
High availability on the masters essentially means putting a reverse proxy in front of every kube-apiserver. You can use an SLB or a dedicated virtual server as the proxy; in this example nginx (stream upstream) + keepalived is deployed on every master node to reverse-proxy kube-apiserver.
3.1 Enable IPVS for kube-proxy
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
[root@kubeadm-master1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 19149 7
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 143411 9 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
3.2 Deploy nginx and keepalived
yum -y install nginx keepalived nginx-all-modules.noarch
systemctl start keepalived && systemctl enable keepalived
systemctl start nginx && systemctl enable nginx
3.3 Configure the nginx upstream reverse proxy
cat /etc/nginx/nginx.conf | grep -vE "(^[ \t]*#|^[ \t]*$)"
# Write the nginx configuration file
cat > /etc/nginx/nginx.conf <<EOF
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the kube-apiserver instances
stream {
    log_format proxy '\$remote_addr \$remote_port - [\$time_local] \$status \$protocol '
                     '"\$upstream_addr" "\$upstream_bytes_sent" "\$upstream_connect_time"';
    access_log /var/log/nginx/nginx-proxy.log proxy;

    upstream k8s-apiserver {
        server 192.168.100.41:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.100.42:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.100.43:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 7443;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '\$remote_addr - \$remote_user [\$time_local] "\$request" '
                    '\$status \$body_bytes_sent "\$http_referer" '
                    '"\$http_user_agent" "\$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
EOF
# Notes:
# Layer-4 load balancing, providing load balancing for the master kube-apiserver components
stream {
......
}
# Listening port
listen 7443;
## Master APISERVER IP:PORT
upstream k8s-apiserver {
    server 192.168.100.41:6443 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.100.42:6443 weight=5 max_fails=3 fail_timeout=30s;
    server 192.168.100.43:6443 weight=5 max_fails=3 fail_timeout=30s;
}
# Copy the nginx configuration file to the master2 and master3 nodes
[root@kubeadm-master1 ~]# scp /etc/nginx/nginx.conf 192.168.100.42:/etc/nginx/
nginx.conf 100% 1725 2.1MB/s 00:00
[root@kubeadm-master1 ~]# scp /etc/nginx/nginx.conf 192.168.100.43:/etc/nginx/
nginx.conf
# Check that the nginx configuration syntax is valid
nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
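As an optional sanity check (not part of the original steps), reload nginx and confirm that the stream proxy is listening on port 7443 on each master:
systemctl reload nginx
ss -lntp | grep 7443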
3.4 Configure keepalived
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   notification_email {
     root
   }
   notification_email_from root
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kubeadm_master1
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 88
    advert_int 1
    priority 110
    authentication {
        auth_type PASS
        auth_pass 1234abcd
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.100.46/24
    }
}
EOF
cat > /etc/keepalived/nginx_check.sh <<EOF
#!/bin/bash
export LANG="en_US.UTF-8"
# If nginx is not running, try to restart it; if it is still down, kill keepalived so the VIP fails over
if [ ! -f "/run/nginx.pid" ]; then
    /usr/bin/systemctl restart nginx
    sleep 2
    if [ ! -f "/run/nginx.pid" ]; then
        /bin/kill -9 \$(head -n 1 /var/run/keepalived.pid)
    fi
fi
EOF
chmod a+x /etc/keepalived/nginx_check.sh
# Copy the keepalived configuration and check script to master2 and master3
[root@kubeadm-master1 ~]# scp /etc/keepalived/keepalived.conf 192.168.100.42:/etc/keepalived/
keepalived.conf                              100%  472   398.4KB/s   00:00
[root@kubeadm-master1 ~]# scp /etc/keepalived/nginx_check.sh 192.168.100.42:/etc/keepalived/
nginx_check.sh                               100%  228   206.4KB/s   00:00
[root@kubeadm-master1 ~]# scp /etc/keepalived/keepalived.conf 192.168.100.43:/etc/keepalived/
keepalived.conf                              100%  474   628.9KB/s   00:00
[root@kubeadm-master1 ~]# scp /etc/keepalived/nginx_check.sh 192.168.100.43:/etc/keepalived/
nginx_check.sh                               100%  228   241.5KB/s   00:00
router_id kubeadm_master1                   # router_id: set a different value on each machine
script "/etc/keepalived/nginx_check.sh"     ## path of the script that checks nginx status
interval 2                                  ## check interval in seconds
weight -20                                  ## if the check fails, lower the priority by 20
state MASTER                                # set the other nodes to BACKUP
interface ens33                             # NIC name; change it to match your own interface
virtual_router_id 88                        # VRRP router ID; it must be unique per instance
priority 110                                # priority; set the backup servers to 100 and 90
advert_int 1                                # VRRP advertisement (heartbeat) interval, default 1 second
chk_nginx                                   # run the nginx health check
192.168.100.46/24                           # this is the virtual IP (VIP)
systemctl restart nginx && systemctl restart keepalived
journalctl -f -u keepalived
ping 192.168.100.46
ssh -v -p 7443 192.168.100.46
debug1: Connection established.
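As an optional failover test (an addition for illustration; the interface name ens33 and the node choice are taken from the configuration above), stop nginx and keepalived on the node that currently holds the VIP and watch the address move to another master:
# On the current MASTER (e.g. kubeadm-master1): the VIP should be visible here
ip addr show ens33 | grep 192.168.100.46
systemctl stop nginx && systemctl stop keepalived
# On another master (e.g. kubeadm-master2): the VIP should appear here after failover
ip addr show ens33 | grep 192.168.100.46
# Restore the services on the first node afterwards
systemctl start nginx && systemctl start keepalived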
IV. Run kubeadm initialization on the master1 node
4.1 Generate the kubeadm-init.yaml file
kubeadm config print init-defaults > kubeadm-init.yaml
cat > kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubeadm-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "192.168.100.46:7443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
advertiseAddress: 192.168.100.41                            # the local IP address of this node
name: kubeadm-master1                                       # the local hostname
controlPlaneEndpoint: "192.168.100.46:7443"                 # add the cluster apiserver address and port, i.e. the VIP
imageRepository: registry.aliyuncs.com/google_containers    # k8s.gcr.io is not reachable domestically, so use a domestic mirror
kubernetesVersion: v1.20.0                                  # set to the actual Kubernetes version
podSubnet: 10.244.0.0/16                                    # add the pod network CIDR
---                                                         # add the kube-proxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
4.2 Run the kubeadm initialization
kubeadm init --config kubeadm-init.yaml
[root@kubeadm-master1 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.41 192.168.100.46]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-master1 localhost] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-master1 localhost] and IPs [192.168.100.41 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.517700 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubeadm-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node kubeadm-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https:
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e
# (Optional) the required images can also be pulled in advance:
kubeadm config images pull --config kubeadm-init.yaml
Follow the instructions from the output:
After the kubeadm initialization finishes, first run these commands locally:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
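At this point kubectl should reach the apiserver through the VIP; a quick optional check (the node will show NotReady until the CNI plugin is installed in section VI):
kubectl cluster-info
kubectl get nodes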
V. Join the master2 and master3 nodes to the cluster
5.1 Copy the required files to the other two master nodes
# Create the target directory on master2 and master3 first, so scp below has somewhere to copy to
mkdir -p /etc/kubernetes/pki/etcd
master="192.168.100.42 192.168.100.43"
for host in ${master}; do
scp /etc/kubernetes/pki/ca.* $host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* $host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* $host:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* $host:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf $host:/etc/kubernetes/
done
[root@kubeadm-master2 ~]# tree /etc/kubernetes
/etc/kubernetes
├── admin.conf
├── manifests
└── pki
    ├── ca.crt
    ├── ca.key
    ├── etcd
    │   ├── ca.crt
    │   └── ca.key
    ├── front-proxy-ca.crt
    ├── front-proxy-ca.key
    ├── sa.key
    └── sa.pub

3 directories, 9 files
5.2 Run the join command on the other two master nodes
kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e \
    --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5.3 Verify the cluster
kubectl get pod,svc --all-namespaces -o wide
[root@kubeadm-master1 ~]# kubectl get pod,svc --all-namespaces -o wide
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
kube-system   pod/coredns-7f89b7bc75-8lk9k                  0/1     Pending   0          24m     <none>           <none>            <none>           <none>
kube-system   pod/coredns-7f89b7bc75-j4f9g                  0/1     Pending   0          24m     <none>           <none>            <none>           <none>
kube-system   pod/etcd-kubeadm-master1                      1/1     Running   0          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/etcd-kubeadm-master2                      1/1     Running   0          5m      192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/etcd-kubeadm-master3                      1/1     Running   0          4m4s    192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-apiserver-kubeadm-master1            1/1     Running   0          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-apiserver-kubeadm-master2            1/1     Running   0          5m4s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-apiserver-kubeadm-master3            1/1     Running   0          4m18s   192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-controller-manager-kubeadm-master1   1/1     Running   1          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-controller-manager-kubeadm-master2   1/1     Running   0          5m3s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-controller-manager-kubeadm-master3   1/1     Running   0          4m18s   192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-proxy-g5cxd                          1/1     Running   0          4m19s   192.168.100.43   kubeadm-master3   <none>           <none>
kube-system   pod/kube-proxy-gdckm                          1/1     Running   0          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-proxy-qdgkh                          1/1     Running   0          5m5s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-scheduler-kubeadm-master1            1/1     Running   1          24m     192.168.100.41   kubeadm-master1   <none>           <none>
kube-system   pod/kube-scheduler-kubeadm-master2            1/1     Running   0          5m4s    192.168.100.42   kubeadm-master2   <none>           <none>
kube-system   pod/kube-scheduler-kubeadm-master3            1/1     Running   0          4m18s   192.168.100.43   kubeadm-master3   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  24m   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   24m   k8s-app=kube-dns

[root@kubeadm-master1 ~]# kubectl get nodes
NAME              STATUS     ROLES                  AGE     VERSION
kubeadm-master1   NotReady   control-plane,master   24m     v1.20.0
kubeadm-master2   NotReady   control-plane,master   5m34s   v1.20.0
kubeadm-master3   NotReady   control-plane,master   4m48s   v1.20.0
VI. Install the CNI network plugin
wget https:
kubectl apply -f kube-flannel.yml
# Test DNS resolution and network connectivity from inside the cluster
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/ # nslookup kubernetes
/ # ping kubernetes
/ # nslookup 163.com
/ # ping 163.com
[root@kubeadm-master1 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
kubeadm-master1   Ready    control-plane,master   50m   v1.20.0
kubeadm-master2   Ready    control-plane,master   31m   v1.20.0
kubeadm-master3   Ready    control-plane,master   30m   v1.20.0
[root@kubeadm-master1 ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-fcwf4                      1/1     Running   0          5m25s
kube-flannel   kube-flannel-ds-k5pl5                      1/1     Running   0          5m25s
kube-flannel   kube-flannel-ds-v6tkp                      1/1     Running   0          5m25s
kube-system    coredns-7f89b7bc75-8lk9k                   1/1     Running   0          52m
kube-system    coredns-7f89b7bc75-j4f9g                   1/1     Running   0          52m
kube-system    etcd-kubeadm-master1                       1/1     Running   0          52m
kube-system    etcd-kubeadm-master2                       1/1     Running   0          32m
kube-system    etcd-kubeadm-master3                       1/1     Running   0          32m
kube-system    kube-apiserver-kubeadm-master1             1/1     Running   0          52m
kube-system    kube-apiserver-kubeadm-master2             1/1     Running   0          33m
kube-system    kube-apiserver-kubeadm-master3             1/1     Running   0          32m
kube-system    kube-controller-manager-kubeadm-master1    1/1     Running   2          52m
kube-system    kube-controller-manager-kubeadm-master2    1/1     Running   2          33m
kube-system    kube-controller-manager-kubeadm-master3    1/1     Running   0          32m
kube-system    kube-proxy-g5cxd                           1/1     Running   0          32m
kube-system    kube-proxy-gdckm                           1/1     Running   0          52m
kube-system    kube-proxy-qdgkh                           1/1     Running   0          33m
kube-system    kube-scheduler-kubeadm-master1             1/1     Running   2          52m
kube-system    kube-scheduler-kubeadm-master2             1/1     Running   2          33m
kube-system    kube-scheduler-kubeadm-master3             1/1     Running   0          32m
VII. Join the worker nodes
7.1 Join the worker nodes
kubeadm join 192.168.100.46:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:ff0b34df599a5d9dc637df64f056db4f31b3d3eedd0ad0b2bedd17414d146a4e
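If the bootstrap token from the original kubeadm init output has expired (tokens are valid for 24 hours by default), a fresh worker join command can be printed on any master; this is a standard kubeadm command added here as an optional aid:
kubeadm token create --print-join-command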
7.2 Verification
journalctl -f -u kubelet
kubectl get nodes | grep node
# The output looks like this
kubeadm-node1 NotReady <none> 2m46s v1.20.0
kubeadm-node2 NotReady <none> 80s v1.20.0
# If the worker nodes are in NotReady state, the kube-flannel and kube-proxy pods have not finished deploying; check with:
kubectl -n kube-system get pods
[root@kubeadm-master1 ~]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-7f89b7bc75-8lk9k 1/1 Running 0 60m
coredns-7f89b7bc75-j4f9g 1/1 Running 0 60m
etcd-kubeadm-master1 1/1 Running 0 60m
etcd-kubeadm-master2 1/1 Running 0 40m
etcd-kubeadm-master3 1/1 Running 0 39m
kube-apiserver-kubeadm-master1 1/1 Running 0 60m
kube-apiserver-kubeadm-master2 1/1 Running 0 40m
kube-apiserver-kubeadm-master3 1/1 Running 0 40m
kube-controller-manager-kubeadm-master1 1/1 Running 2 60m
kube-controller-manager-kubeadm-master2 1/1 Running 2 40m
kube-controller-manager-kubeadm-master3 1/1 Running 0 40m
kube-proxy-6lv24 1/1 Running 0 2m28s
kube-proxy-g5cxd 1/1 Running 0 40m
kube-proxy-gdckm 1/1 Running 0 60m
kube-proxy-gdcth 1/1 Running 0 3m54s
kube-proxy-qdgkh 1/1 Running 0 40m
kube-scheduler-kubeadm-master1 1/1 Running 2 60m
kube-scheduler-kubeadm-master2 1/1 Running 2 40m
kube-scheduler-kubeadm-master3 1/1 Running 0 40m
# When READY shows 1/1 for every pod, the deployment is complete
# Verify again: all nodes should now be in the Ready state
[root@kubeadm-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubeadm-master1 Ready control-plane,master 60m v1.20.0
kubeadm-master2 Ready control-plane,master 41m v1.20.0
kubeadm-master3 Ready control-plane,master 40m v1.20.0
kubeadm-node1 Ready <none> 4m3s v1.20.0
kubeadm-node2 Ready <none> 2m37s v1.20.0
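Optionally, the worker nodes can be labeled so the ROLES column shows "worker" instead of <none>; this is purely cosmetic and not part of the original procedure:
kubectl label node kubeadm-node1 node-role.kubernetes.io/worker=
kubectl label node kubeadm-node2 node-role.kubernetes.io/worker=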
VIII. Deploy the Dashboard and verify the k8s cluster
wget https:
vim recommended.yaml
...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added
  selector:
    k8s-app: kubernetes-dashboard
---
...
kubectl apply -f recommended.yaml
kubectl -n kubernetes-dashboard get pod,svc
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
# Use the token from the output to log in to the Dashboard
https://NodeIP:30001
# The display language can be changed in the Dashboard settings
IX. Deploy etcdctl
wget https:
tar -xzf etcd-v3.4.13-linux-amd64.tar.gz
cp etcd-v3.4.13-linux-amd64/etcdctl /usr/bin/
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints="https://192.168.100.41:2379,https://192.168.100.42:2379,https://192.168.100.43:2379" endpoint health
cat <<EOF | sudo tee -a ~/.bashrc
export ETCDCTL_API=3
export ETCDCTL_DIAL_TIMEOUT=3s
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
EOF
source ~/.bashrc
# View the cluster status in table form
etcdctl --endpoints="https://192.168.100.41:2379" endpoint status --cluster -w table
# List all keys
etcdctl --endpoints="https://192.168.100.41:2379" --keys-only=true get --from-key ''
# or
etcdctl --endpoints="https://192.168.100.41:2379" --prefix --keys-only=true get /
# List keys under a given prefix
etcdctl --endpoints="https://192.168.100.41:2379" --prefix --keys-only=true get /registry/pods/
# View the value of a specific key, output in JSON format
etcdctl --endpoints="https://192.168.100.41:2379" --prefix --keys-only=false -w json get /registry/pods/kube-system/etcd-k8s-master1
# More etcdctl commands: https://github.com/etcd-io/etcd/tree/master/etcdctl
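A common follow-up once etcdctl works is taking a snapshot backup of etcd; a minimal sketch using the certificate environment variables exported above (the target file path is just an example):
# snapshot save accepts a single endpoint
etcdctl --endpoints="https://192.168.100.41:2379" snapshot save /tmp/etcd-snapshot-$(date +%F).db
# inspect the snapshot
etcdctl snapshot status /tmp/etcd-snapshot-$(date +%F).db -w table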
X. Enable kubectl command auto-completion
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
Reference: https://www.jianshu.com/p/351b61a87c17