Deploying a Kubernetes Cluster with kubeadm (Three Masters, Two Workers)
Reposted from: https://www.jianshu.com/p/351b61a87c17 (heavily modified from the original, e.g. the nginx active/standby pair is split out onto two separate VMs, and the keepalived configuration differs)
Reference video: https://www.bilibili.com/video/BV1pS4y1A7Qe?p=16&vd_source=58c49e6fe48fe9d7249815c8ef44e143
============
Deploying a Kubernetes cluster with kubeadm
1. Introduction to kubeadm
kubeadm is a tool published by the official community for quickly deploying Kubernetes clusters. It bootstraps a cluster with two commands:
#1. Create the master node
kubeadm init
#2. Join nodes to the cluster
kubeadm join [master IP and port]
1.1. Installation requirements
- One or more machines running CentOS 7.x x86_64
- Hardware: at least 2 CPU cores, 4 GB RAM and 30 GB of disk
- Network connectivity between all nodes
- Outbound internet access
- Swap disabled
- Preflight failures can be skipped with the "--ignore-preflight-errors=..." flag, but at least 2 CPU cores are recommended for a Kubernetes deployment
1.2. Installation goals
- Install Docker, kubeadm, kubelet and kubectl on all nodes
- Deploy the Kubernetes masters
- Deploy the container network plugin
- Deploy the Kubernetes worker nodes and join them to the cluster
- Deploy the Dashboard web UI to inspect Kubernetes resources visually
1.3. Cluster plan
Role | IP | Components |
---|---|---|
VIP | 172.30.2.100 | virtual IP used for the kubeadm master initialization |
k8s-master1 | 172.30.2.101 | docker,kubeadm,kubelet,kubectl |
k8s-master2 | 172.30.2.102 | docker,kubeadm,kubelet,kubectl |
k8s-master3 | 172.30.2.103 | docker,kubeadm,kubelet,kubectl |
k8s-worker1 | 172.30.2.201 | docker,kubeadm,kubelet,kubectl |
k8s-worker2 | 172.30.2.202 | docker,kubeadm,kubelet,kubectl |
2. Kubernetes cluster installation
2.1. OS preparation on all nodes
#1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

#2. Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # permanent
setenforce 0                                                         # temporary

#3. Disable swap
swapoff -a                             # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
# or
sed -ri '/.*swap.*/d' /etc/fstab

#4. DNS settings (adjust to your environment)
cat >> /etc/resolv.conf << EOF
nameserver 114.114.114.114
nameserver 8.8.8.8
EOF

#5. Pass bridged IPv4 traffic to the iptables chains
# Before bridging, confirm the br_netfilter module is loaded
lsmod | grep br_netfilter
modprobe br_netfilter
# Then run:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply

#6. Time synchronization
yum install -y ntpdate wget
ntpdate time.windows.com
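A quick sanity check of the settings above (an optional sketch; expected values shown in the comments):
# Swap should be empty
swapon -s
free -m | grep -i swap
# SELinux and firewalld state
getenforce                      # expect Permissive (or Disabled after a reboot)
systemctl is-active firewalld   # expect inactive
# Bridge/netfilter sysctls
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward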
2.2. Steps on the master and worker nodes
#1. Set the hostname on every node
hostnamectl set-hostname <hostname>

#2. Add hosts entries on the master nodes
cat >> /etc/hosts << EOF
172.30.2.101 k8s-master1
172.30.2.102 k8s-master2
172.30.2.103 k8s-master3
172.30.2.201 k8s-worker1
172.30.2.202 k8s-worker2
EOF

#3. Create an LVM volume on the worker nodes
pvcreate /dev/sdb
vgcreate vg_node /dev/sdb
lvcreate -n lv_node -l 100%FREE vg_node
mkfs.xfs /dev/vg_node/lv_node
mount /dev/mapper/vg_node-lv_node /opt
sed -i '$a /dev/mapper/vg_node-lv_node /opt xfs defaults 0 0' /etc/fstab
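Optionally, confirm the new volume is mounted on /opt and recorded in fstab (a small verification sketch):
df -h /opt
lsblk /dev/sdb
grep vg_node /etc/fstab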
2.3. Install docker/kubeadm/kubelet/kubectl on all nodes
Kubernetes is installed from yum using the default repository version.
Kubernetes needs a container runtime behind the Container Runtime Interface; this example uses the Docker runtime.
Container runtime installation reference: https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/
#1. Configure the Docker and Kubernetes repositories
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast   # refresh the yum cache

#2. Install docker-ce, kubeadm, kubelet and kubectl
# kubeadm installs the latest version by default; the current Kubernetes release is validated against Docker up to 19.03
yum list docker-ce --showduplicates | sort -r   # look up the docker 19.03 packages
yum install -y containerd.io-1.2.13 docker-ce-19.03.11 docker-ce-cli-19.03.11 kubelet kubeadm kubectl kubernetes-cni

# Create the /etc/docker directory
sudo mkdir /etc/docker

#3. Configure the registry mirror and set the cgroup driver to systemd
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries": ["172.30.2.254"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# "insecure-registries": ["172.30.2.254"] allows access to a non-HTTPS registry

# Create /etc/systemd/system/docker.service.d
sudo mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
sudo systemctl daemon-reload && sudo systemctl restart docker && sudo systemctl enable docker

# Confirm the cgroup driver is systemd
docker info | grep "Cgroup Driver"
# Cgroup Driver: systemd

#4. On the worker nodes, move Docker's local image and container storage
# /opt/data/ sits on the large-capacity disk prepared earlier
docker info | grep "Docker Root Dir"
systemctl stop docker
mkdir -p /opt/data
mv /var/lib/docker /opt/data/
ln -s /opt/data/docker /var/lib/docker

#5. Restart Docker and enable kubelet at boot
systemctl restart docker && systemctl enable --now kubelet

#6. Check versions
docker --version
# Docker version 19.03.11, build 42e35e61f3
kubeadm version
# kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
2.4. Set up high availability for kube-apiserver
Master high availability is really just a reverse proxy in front of all the kube-apiservers; an SLB or a dedicated proxy VM would also work. In this example, nginx (stream upstream) + keepalived run on two dedicated VMs and reverse-proxy kube-apiserver.
2.4.1. Enable IPVS for kube-proxy
# IPVS stands for IP Virtual Server
#1. Run the following on all master nodes
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules

#2. Check that the IPVS modules loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack_ipv4 should all be listed
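Once the cluster is up (after sections 2.5-2.8), you can optionally confirm that kube-proxy is really programming IPVS rules. This is a hedged sketch; the ipvsadm tool is not installed by any of the steps above:
# Install the IPVS admin tool and list the virtual servers created by kube-proxy
yum install -y ipvsadm
ipvsadm -Ln
# Or check the kube-proxy logs for the ipvs proxier
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20 | grep -i ipvs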
Build the active/standby nginx + keepalived pair on two new virtual machines.
Prerequisites: firewall disabled, SELinux disabled (as in 2.1), and the IPVS modules enabled as in 2.4.1.
2.4.2. Install nginx and keepalived
#1. Install nginx and keepalived on both nginx nodes
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum -y install nginx
systemctl start nginx && systemctl enable nginx
yum -y install keepalived
systemctl start keepalived && systemctl enable keepalived
2.4.3. Configure the nginx upstream reverse proxy
#1. Configure nginx.conf on both nginx nodes
cat > /etc/nginx/nginx.conf <<EOF
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    log_format proxy '\$remote_addr \$remote_port - [\$time_local] \$status \$protocol '
                     '"\$upstream_addr" "\$upstream_bytes_sent" "\$upstream_connect_time"' ;
    access_log /var/log/nginx/nginx-proxy.log proxy;

    # change to the master node IP addresses
    upstream kubernetes_lb {
        server 172.30.2.101:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 172.30.2.102:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 172.30.2.103:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 7443;
        proxy_connect_timeout 30s;
        proxy_timeout 30s;
        proxy_pass kubernetes_lb;
    }
}
EOF
# On the other nginx node, copy the config from the first node
scp -r 172.30.2.101:/etc/nginx/nginx.conf /etc/nginx/

#2. Check the nginx configuration syntax, then reload nginx
nginx -t
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful
nginx -s reload
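A quick check that the stream proxy is actually listening (a hedged aside; the kube-apiservers behind it are not running yet, so only the listener itself can be verified at this point):
# Confirm nginx has bound port 7443
ss -lntp | grep 7443
# expect a LISTEN line owned by an nginx process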
2.4.4. Configure keepalived
#1. Configure keepalived.conf on the nginx1 node
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from root@k8s.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_1    # router_id must differ on each node
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   ## path of the nginx health-check script
    interval 1
    weight -20
}
vrrp_instance VI_1 {
    state MASTER          # BACKUP on the other node
    interface ens33       # NIC name; change to match your interface
    virtual_router_id 88
    advert_int 1
    priority 100          # 90 on the other node
    authentication {
        auth_type PASS
        auth_pass 1234abcd
    }
    virtual_ipaddress {
        172.30.2.100/24   # the virtual IP; must match the VIP planned in 1.3
    }
    track_script {
        chk_nginx         # run the nginx health check
    }
}
EOF
#1. Configure keepalived.conf on the nginx2 node
cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from root@k8s.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_2    # router_id must differ on each node
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   ## path of the nginx health-check script
    interval 1
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP          # this node is the standby
    interface ens33       # NIC name; change to match your interface
    virtual_router_id 88
    advert_int 1
    priority 90           # lower than the MASTER's 100
    authentication {
        auth_type PASS
        auth_pass 1234abcd
    }
    virtual_ipaddress {
        172.30.2.100/24   # the virtual IP; must match the VIP planned in 1.3
    }
    track_script {
        chk_nginx         # run the nginx health check
    }
}
EOF
#Notes:
#1> Change ens33 in "interface ens33" to the node's actual NIC name
#2> router_id must differ between the two nodes: LVS_1 on nginx1, LVS_2 on nginx2
#3> state is MASTER on nginx1 and BACKUP on nginx2
#4> priority is 100 on nginx1 and 90 on nginx2
#2. On both nginx1 and nginx2, create the nginx_check.sh script
# (the quoted 'EOF' keeps the shell from expanding $(...), $$ and $count while writing the file)
cat > /etc/keepalived/nginx_check.sh <<'EOF'
#!/bin/bash
# count running nginx processes, excluding the grep itself and this script
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    # nginx is down: stop keepalived so the VIP fails over to the standby node
    systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF
chmod a+x /etc/keepalived/nginx_check.sh
#Alternatively, on the other nginx node, copy the files from the first node (adjust the source IP to nginx1) and then edit router_id, state and priority as noted above
scp 172.30.2.101:/etc/keepalived/keepalived.conf /etc/keepalived/
scp 172.30.2.101:/etc/keepalived/nginx_check.sh /etc/keepalived/
#3. Restart keepalived on both nginx nodes
systemctl restart keepalived
# Follow the logs
journalctl -f -u keepalived
#4. From any node on the same network, verify the VIP responds
ping 172.30.2.100
#5. From any node on the same network, verify that port 7443 on the VIP is reachable through nginx
ssh -v -p 7443 172.30.2.100
# This output means the port is reachable:
# debug1: Connection established.
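To also exercise the failover path (a hedged sketch; run it on the current MASTER node, nginx1, and watch the VIP move to nginx2):
# With nginx running, the check script should exit 0
bash /etc/keepalived/nginx_check.sh; echo $?
# Stop nginx; the check script stops keepalived and the VIP should move to nginx2
systemctl stop nginx
ip addr show ens33 | grep 172.30.2.100   # should disappear here and appear on nginx2
# Restore the original state
systemctl start nginx && systemctl start keepalived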
The HA VIP is now in place; the master initialization can begin.
2.5. On the master1 node, run kubeadm to generate the kubeadm-init.yaml file. This step only produces the initialization file; it does not deploy the cluster yet.
#1. Generate the default configuration on master1
kubeadm config print init-defaults > kubeadm-init.yaml

#2. Edit kubeadm-init.yaml
cat > kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.30.2.101    # local node IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1                 # local hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
controlPlaneEndpoint: "172.30.2.100:7443"   # added: the kube-apiserver cluster endpoint, i.e. the VIP and nginx port
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # k8s.gcr.io is unreachable, use the domestic mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0          # change to the actual Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16          # added: the pod network
scheduler: {}
---
# added: kube-proxy configuration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
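Before initializing, it can be worth confirming that the file parses and previewing the images kubeadm will pull (a small optional check, not part of the original steps):
# List the images kubeadm will use for this configuration
kubeadm config images list --config kubeadm-init.yaml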
Prepare the kubeadm initialization on master1, master2 and master3:
1. Copy the kubeadm-init.yaml file from master1 to master2 and master3; any path works, e.g. /etc/myk8s/.
2. On master1, master2 and master3, pull the images locally first:
kubeadm config images pull --config kubeadm-init.yaml
3. On the master1 node only, run the kubeadm initialization:
# kubeadm initialization
kubeadm init --config kubeadm-init.yaml
# Initialization output:
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
On the master1 node only, run the following commands:
#1. After kubeadm init completes, set up kubectl locally
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

#2. Command for joining additional master nodes (used in section 2.6)
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane

#3. Command for joining worker nodes (used in section 2.8)
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
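If the bootstrap token (24h TTL in the config above) has expired by the time you join a node, a fresh worker join command can be generated; this is a standard kubeadm command, added here as an aside:
# Print a new worker join command with a fresh token
kubeadm token create --print-join-command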
2.6. Join the other two master nodes
#1. On master2 and master3, copy the required certificates from master1
mkdir -p /etc/kubernetes/pki/etcd
scp -r 172.30.2.101:/etc/kubernetes/pki/ca.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/sa.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/front-proxy-ca.* /etc/kubernetes/pki/
scp -r 172.30.2.101:/etc/kubernetes/pki/etcd/ca.* /etc/kubernetes/pki/etcd/
scp -r 172.30.2.101:/etc/kubernetes/admin.conf /etc/kubernetes/

#2. On master2 and master3, run:
kubeadm join 172.30.2.100:7443 --v=5 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane \
    --ignore-preflight-errors=all

#3. On master1, check the pod and svc status; all pods should be Running
kubectl get pod,svc --all-namespaces -o wide

#4. On any master node, verify the control plane
kubectl get node
# Expected output:
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   28m     v1.20.2
k8s-master2   NotReady   control-plane,master   4m13s   v1.20.2
k8s-master3   NotReady   control-plane,master   30s     v1.20.2
# NotReady clears after the CNI network plugin is installed
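As an alternative to copying the pki files by hand (not part of the original write-up, just a commonly used kubeadm feature), the control-plane certificates can be uploaded into the cluster and fetched automatically when joining:
# On master1: upload the control-plane certificates and print a certificate key
kubeadm init phase upload-certs --upload-certs
# On master2/master3: join using that key instead of scp-ing the certificates
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350 \
    --control-plane --certificate-key <key-printed-above>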
2.7. Install the CNI network plugin
#1. Download the manifest on one of the master nodes
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If the download fails:
yum provides dig            # find the package that ships the dig command
yum install -y bind-utils
dig @DNSIP raw.githubusercontent.com   # resolve the IP and add it to /etc/hosts

#2. Apply the manifest
kubectl apply -f kube-flannel.yml

#3. Test CoreDNS
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
/ # nslookup kubernetes
/ # ping kubernetes
/ # nslookup 163.com
/ # ping 163.com

#4. Verify the nodes again
kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   37m     v1.20.2
k8s-master2   Ready    control-plane,master   13m     v1.20.2
k8s-master3   Ready    control-plane,master   9m59s   v1.20.2
2.8. Join the worker nodes
#1. Run the following on both worker nodes
kubeadm join 172.30.2.100:7443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dec79a611778ffabb70219c391901f3244e0204c1d2bd88e63e6efc4e9434350
# Follow the kubelet logs with journalctl
journalctl -f -u kubelet

#2. On any master node, verify that the worker nodes joined
kubectl get node | grep worker
# Expected output:
k8s-worker1   NotReady   <none>   16s   v1.20.2
k8s-worker2   NotReady   <none>   10s   v1.20.2
# NotReady means the kube-flannel and kube-proxy pods on the node are not finished deploying yet; check them with:
kubectl -n kube-system get pods

# Verify again; eventually every node should be Ready
kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   44m     v1.20.2
k8s-master2   Ready    control-plane,master   20m     v1.20.2
k8s-master3   Ready    control-plane,master   16m     v1.20.2
k8s-worker1   Ready    <none>                 3m29s   v1.20.2
k8s-worker2   Ready    <none>                 2m56s   v1.20.2
The kubeadm cluster deployment is now complete.
Test whether the installation succeeded:
ip addr   # the flannel.1 and cni0 interfaces should now be present on the nodes
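For a more end-to-end smoke test (a hedged sketch, not part of the original article), deploy a test nginx pod and reach it through a NodePort:
# Create a test deployment and expose it
kubectl create deployment web-test --image=nginx
kubectl expose deployment web-test --port=80 --type=NodePort
kubectl get pod,svc -l app=web-test
# Access it from outside the cluster using any node IP and the NodePort shown above
curl http://172.30.2.201:<nodePort>
# Clean up
kubectl delete svc,deployment web-test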
===========================
2.9. Troubleshooting
# Problem 1:
# error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
# To see the stack trace of this error execute with --v=5 or higher
# Fix:
kubeadm reset -f
docker rm -f $(docker ps -a -q)
rm -rf /var/lib/cni/
systemctl daemon-reload
systemctl restart kubelet
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
2.10. Removing a worker node
# Remove a worker node
#1. On a master node:
# evict the pods currently running on the node
kubectl drain k8s-worker1 --ignore-daemonsets
# mark the node as unschedulable
kubectl cordon k8s-worker1
# delete the worker node
kubectl delete node k8s-worker1
#2. On the k8s-worker1 node:
kubeadm reset -f
docker rm -f $(docker ps -a -q)
rm -rf /var/lib/cni/
systemctl daemon-reload
systemctl restart kubelet
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
ip link delete cni0
ip link delete flannel.1
3. Kubernetes application deployment
3.1. Deploy the Dashboard and verify the cluster
#1. Download the Dashboard yaml
# Project page: https://github.com/kubernetes/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml

#2. By default the Dashboard is reachable only inside the cluster; change the Service to NodePort to expose it:
vim recommended.yaml
...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added
  selector:
    k8s-app: kubernetes-dashboard
---
...
kubectl apply -f recommended.yaml

#3. Verify; the pods should be Running
kubectl -n kubernetes-dashboard get pod,svc

#4. Access the Dashboard in a browser via any worker node IP
https://NodeIP:30001

#5. Create a service account and bind it to the built-in cluster-admin role:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

#6. Log in to the Dashboard with the token printed above
https://NodeIP:30001
# The UI language can be changed in the settings
3.2. Using etcd v3.4.13
#1. On master1, download the etcd release
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

#2. Unpack etcd-v3.4.13-linux-amd64.tar.gz
tar -xzf etcd-v3.4.13-linux-amd64.tar.gz
cp etcd-v3.4.13-linux-amd64/etcdctl /usr/bin/

#3. Using etcdctl
#--- check cluster health
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints="https://172.30.2.101:2379,https://172.30.2.102:2379,https://172.30.2.103:2379" endpoint health

# Export the etcdctl settings as environment variables (append to ~/.bashrc)
cat <<EOF | sudo tee -a ~/.bashrc
export ETCDCTL_API=3
export ETCDCTL_DIAL_TIMEOUT=3s
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/server.key
EOF
source ~/.bashrc

#1> Show the cluster status as a table
etcdctl --endpoints="https://172.30.2.101:2379" -w table endpoint --cluster status
#2> List all keys
etcdctl --endpoints="https://172.30.2.101:2379" --keys-only=true get --from-key ''
# or
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=true get /
#3> List keys with a given prefix
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=true get /registry/pods/
#4> Show a specific key's value as JSON
etcdctl --endpoints="https://172.30.2.101:2379" --prefix --keys-only=false -w json get /registry/pods/kube-system/etcd-k8s-master1
# More etcdctl commands: https://github.com/etcd-io/etcd/tree/master/etcdctl
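A common follow-on use of etcdctl (a hedged sketch, not in the original article) is taking a snapshot backup of the cluster state:
# Save a snapshot of etcd (relies on the ETCDCTL_* variables exported above)
etcdctl --endpoints="https://172.30.2.101:2379" snapshot save /opt/etcd-backup-$(date +%F).db
# Inspect the snapshot
etcdctl snapshot status /opt/etcd-backup-$(date +%F).db -w table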
4. Kubernetes add-on deployment
4.1. Set up kubectl on Windows
#1. Download the Windows build of kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/windows/amd64/kubectl.exe
# Put kubectl.exe into the d:\kubectlv1.20.2 directory on the Windows machine
# The latest stable version is listed at https://storage.googleapis.com/kubernetes-release/release/stable.txt

#2. Create the .kube directory
# On Windows, open cmd, go to the current user's home directory and run:
cd C:\Users\<current user>
md .kube
# Copy $HOME/.kube/config from a master node into the Windows .kube directory

#3. Add d:\kubectlv1.20.2 to the PATH environment variable on Windows

#4. Make sure the Windows machine is on the same network as the cluster, then open cmd and run:
kubectl get pod,svc --all-namespaces
References:
https://luyanan.com/article/info/19821386744192 (adding new master and worker nodes)
https://blog.csdn.net/liuyunshengsir/article/details/105149866 (scaling worker nodes in and out)
https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/ (container runtimes)
Author: 轻雪飘扬_5be4
Link: https://www.jianshu.com/p/351b61a87c17
Source: Jianshu (简书)
Copyright belongs to the author. For commercial reuse, contact the author for authorization; for non-commercial reuse, credit the source.