kubernetes学习笔记

 


kubernetes

部署k8s集群踩过的坑 vpn

  • 基于docker部署k8s集群时,需配置vpn

  • 离线部署时需要注意,以下两行镜像配置需要和离线导入的镜像版本相对应,否则有坑!!!!

cat kube-flannel.yml
...

        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
...
        image: docker.io/flannel/flannel:v0.22.2

...

配置本地harbor镜像库+k8s基于kubeadm集群部署

0. 使用域名证书https

- 基于权威证书配置harbor的https
1.1 下载自己的证书
略,此处我使用的是自己的域名证书;如果你用的是自己的域名,后面的hostname和证书路径需要做相应修改。

1.2 修改harbor的配置文件
[root@centos102 harbor]# vim harbor.yml
...
hostname: www.supershy.com
...
https:
  ...
  certificate: /supershy/softwares/harbor/www.supershy.com_nginx/www.supershy.com_bundle.crt
  private_key: /supershy/softwares/harbor/www.supershy.com_nginx/www.supershy.com.key

1.3 让服务生效
[root@centos102 harbor]# ./install.sh 


1.4 windows做域名解析
10.0.0.102  www.supershy.com
 
1.5 访问harbor服务
https://www.supershy.com/


1.6 linux访问
[root@ubuntu201 ~]# vim /etc/docker/daemon.json 
{
  ...
  # 配置https后,需要删除如下的insecure-registries配置
   "insecure-registries": ["10.0.0.102"]
}
[root@ubuntu201 ~]# 
[root@ubuntu201 ~]# vim /etc/hosts
...
10.0.0.102 phpshe.supershy.com  www.supershy.com

1. 自建证书配置harbor的https

- 基于自建证书配置harbor的https:
- 配置CA证书:
	1.搭建harbor环境
[root@harbor250 ~]# tar xf supershy-docker-compose-binary-install.tar.gz 
[root@harbor250 ~]# 
[root@harbor250 ~]# ./install-docker.sh install 
[root@harbor250 ~]# 
[root@harbor250 ~]# tar xf harbor-offline-installer-v2.8.4.tgz -C /supershy/softwares/


	2.创建工作目录
[root@harbor250 ~]# mkdir -pv /supershy/softwares/harbor/certs/{ca,harbor-server,docker-client}
mkdir: created directory ‘/supershy/softwares/harbor/certs’
mkdir: created directory ‘/supershy/softwares/harbor/certs/ca’
mkdir: created directory ‘/supershy/softwares/harbor/certs/harbor-server’
mkdir: created directory ‘/supershy/softwares/harbor/certs/docker-client’
[root@harbor250 ~]# 


	3.进入到harbor证书存放目录
[root@harbor250 ~]# cd /supershy/softwares/harbor/certs/
[root@harbor250 certs]# 
[root@harbor250 certs]# ll
total 0
drwxr-xr-x 2 root root 6 Feb 27 09:06 ca
drwxr-xr-x 2 root root 6 Feb 27 09:06 docker-client
drwxr-xr-x 2 root root 6 Feb 27 09:06 harbor-server
[root@harbor250 certs]# 
	
	
	4.生成自建CA证书
		4.1 创建CA的私钥
[root@harbor250 certs]# openssl genrsa -out ca/ca.key 4096

		4.2 基于自建的CA私钥创建CA证书(注意,证书签发的域名范围)
[root@harbor250 certs]# openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=supershy.com" \
 -key ca/ca.key \
 -out ca/ca.crt
 
 
		4.3 查看自建证书信息
[root@harbor250 certs]# openssl  x509 -in ca/ca.crt -noout -text


- 配置harbor证书
	1.生成harbor服务器的私钥
[root@harbor250 certs]# openssl genrsa -out harbor-server/harbor.supershy.com.key 4096


	2.harbor服务器基于私钥生成证书签名请求(csr文件),交由自建CA签发
[root@harbor250 certs]# openssl req -sha512 -new \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.supershy.com" \
    -key harbor-server/harbor.supershy.com.key \
    -out harbor-server/harbor.supershy.com.csr


	3.生成 x509 v3 的扩展文件用于认证
cat > harbor-server/v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.supershy.com
EOF


	4. 基于 x509 v3 的扩展文件认证签发harbor server证书
[root@harbor250 certs]# openssl x509 -req -sha512 -days 3650 \
    -extfile harbor-server/v3.ext \
    -CA ca/ca.crt -CAkey ca/ca.key -CAcreateserial \
    -in harbor-server/harbor.supershy.com.csr \
    -out harbor-server/harbor.supershy.com.crt
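
	温馨提示:
	签发完成后,可以先用自建CA校验一下服务端证书,并确认SAN里包含harbor的域名(以下仅为参考的检查方式):
[root@harbor250 certs]# openssl verify -CAfile ca/ca.crt harbor-server/harbor.supershy.com.crt
[root@harbor250 certs]# openssl x509 -in harbor-server/harbor.supershy.com.crt -noout -text | grep -A 1 "Subject Alternative Name"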


	5.修改harbor的配置文件使用自建证书
[root@harbor250 certs]# cp ../harbor.yml.tmpl ../harbor.yml
[root@harbor250 certs]# 
[root@harbor250 certs]# vim ../harbor.yml
...
hostname: harbor.supershy.com
https:
  ...
  certificate: /supershy/softwares/harbor/certs/harbor-server/harbor.supershy.com.crt
  private_key: /supershy/softwares/harbor/certs/harbor-server/harbor.supershy.com.key
...
harbor_admin_password: 1
...
[root@harbor250 certs]# 


	6.安装harbor
[root@harbor250 certs]# ../install.sh 

	7.客户端登录测试,windows一定要配置hosts解析哟~
https://harbor.supershy.com/

	
	
- 配置docker客户端证书
	1.生成docker客户端证书
[root@harbor250 certs]# openssl x509 -inform PEM -in harbor-server/harbor.supershy.com.crt -out docker-client/harbor.supershy.com.cert

[root@harbor250 certs]# 
[root@harbor250 certs]# 
[root@harbor250 certs]# pwd
/supershy/softwares/harbor/certs
[root@harbor250 certs]# 
[root@harbor250 certs]# md5sum docker-client/harbor.supershy.com.cert harbor-server/harbor.supershy.com.crt 
f3d34a5c5d88a5fcacd8435ca9f4d944  docker-client/harbor.supershy.com.cert
f3d34a5c5d88a5fcacd8435ca9f4d944  harbor-server/harbor.supershy.com.crt
[root@harbor250 certs]# 


	2.拷贝docker client证书文件
[root@harbor250 certs]# cp harbor-server/harbor.supershy.com.key docker-client/
[root@harbor250 certs]# 
[root@harbor250 certs]# cp ca/ca.crt docker-client/
[root@harbor250 certs]# 
[root@harbor250 certs]# ll -R
.:
total 0
drwxr-xr-x 2 root root  48 Feb 27 09:20 ca
drwxr-xr-x 2 root root  85 Feb 27 09:33 docker-client
drwxr-xr-x 2 root root 116 Feb 27 09:20 harbor-server

./ca:
total 12
-rw-r--r-- 1 root root 2033 Feb 27 09:11 ca.crt
-rw-r--r-- 1 root root 3243 Feb 27 09:09 ca.key
-rw-r--r-- 1 root root   17 Feb 27 09:20 ca.srl

./docker-client:
total 12
-rw-r--r-- 1 root root 2033 Feb 27 09:33 ca.crt
-rw-r--r-- 1 root root 2086 Feb 27 09:30 harbor.supershy.com.cert
-rw-r--r-- 1 root root 3243 Feb 27 09:33 harbor.supershy.com.key

./harbor-server:
total 16
-rw-r--r-- 1 root root 2086 Feb 27 09:20 harbor.supershy.com.crt
-rw-r--r-- 1 root root 1716 Feb 27 09:15 harbor.supershy.com.csr
-rw-r--r-- 1 root root 3243 Feb 27 09:13 harbor.supershy.com.key
-rw-r--r-- 1 root root  239 Feb 27 09:19 v3.ext
[root@harbor250 certs]# 




- docker客户端使用证书
	1.docker客户端创建自建证书的目录结构(注意域名的名称和目录要一致哟~)
[root@master231 ~]# mkdir -pv /etc/docker/certs.d/harbor.supershy.com/


	2.配置名称解析
[root@master231 ~]# cat >> /etc/hosts <<EOF
10.0.0.250 harbor.supershy.com
10.0.0.231 master231
10.0.0.232 worker232
10.0.0.233 worker233
EOF
	
	3.将客户端证书文件进行拷贝
[root@master231 ~]# scp harbor.supershy.com:/supershy/softwares/harbor/certs/docker-client/* /etc/docker/certs.d/harbor.supershy.com/
...
Warning: Permanently added 'harbor.supershy.com,10.0.0.250' (ECDSA) to the list of known hosts.
root@harbor.supershy.com's password: 
ca.crt                                                                                               100% 2033     1.3MB/s   00:00    
harbor.supershy.com.cert                                                                            100% 2086     1.4MB/s   00:00    
harbor.supershy.com.key                                                                             100% 3243   621.3KB/s   00:00    
[root@master231 ~]# 


	4.docker客户端验证
[root@master231 ~]# docker login -u admin -p 1 harbor.supershy.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@master231 ~]# 
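
	温馨提示:
	登录成功后,可以再做一次推送测试(假设harbor上已创建好名为supershy-linux的项目,且本地已有alpine:latest镜像,此处仅为参考):
[root@master231 ~]# docker tag alpine:latest harbor.supershy.com/supershy-linux/alpine:latest
[root@master231 ~]# docker push harbor.supershy.com/supershy-linux/alpine:latest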



	5.其他客户端重复以上步骤
注意worker232和worker233一定要做,否则后期K8S无法拉取镜像!




参考链接:
	https://goharbor.io/docs/1.10/install-config/configure-https/#generate-a-certificate-authority-certificate
	https://www.cnblogs.com/supershy/p/17153673.html



	
错误提示:
Error response from daemon: Get "https://harbor.supershy.com/v2/": x509: certificate signed by unknown authority

 
解决方案:
	需要按上文步骤,将自建CA及客户端证书拷贝到docker客户端的/etc/docker/certs.d/harbor.supershy.com/目录。

2. k8s所有节点准备工作

	1 虚拟机操作系统环境准备
参考链接:
	https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

	2 关闭swap分区
		2.1临时关闭
swapoff -a && sysctl -w vm.swappiness=0

		2.2 基于配置文件关闭
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab


	3 确保各个节点MAC地址或product_uuid唯一
ifconfig  eth0  | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid 

    温馨提示:
        一般来讲,硬件设备会拥有唯一的地址,但是有些虚拟机的地址可能会重复。 
        Kubernetes使用这些值来唯一确定集群中的节点。 如果这些值在每个节点上不唯一,可能会导致安装失败。


	4 检查网络节点是否互通
简而言之,就是检查你的k8s集群各节点是否互通,可以使用ping命令来测试。


	5 允许iptable检查桥接流量
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
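
	温馨提示:
	上述参数依赖br_netfilter内核模块,如果不想重启,可以先手动加载模块,再确认参数已生效(参考检查方式):
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward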


	6 检查端口是否被占用
参考链接: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/

	7 禁用防火墙
systemctl disable --now firewalld


	8 禁用selinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config 
grep ^SELINUX= /etc/selinux/config


	9 所有节点修改cgroup的管理进程为systemd
[root@master231 ~]# docker info  | grep cgroup
 Cgroup Driver: cgroupfs
[root@master231 ~]# 
[root@master231 ~]# cat >/etc/docker/daemon.json<<EOF
{
  "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn/","https://hub-mirror.c.163.com/","https://reg-mirror.qiniu.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master231 ~]# 
[root@master231 ~]# systemctl restart docker
[root@master231 ~]# 
[root@master231 ~]# docker info | grep "Cgroup Driver"
 Cgroup Driver: systemd
[root@master231 ~]# 


温馨提示:
	如果不修改cgroup的管理驱动为systemd,则默认值为cgroupfs,在初始化master节点时会失败哟!




- 所有节点安装kubeadm,kubelet,kubectl
	1 软件包说明
你需要在每台机器上安装以下的软件包:
	kubeadm:
		用来初始化集群的指令。
	kubelet:
		在集群中的每个节点上用来启动Pod和容器等。
	kubectl:
		用来与集群通信的命令行工具。

kubeadm不能帮你安装或者管理kubelet或kubectl,所以你需要确保它们与通过kubeadm安装的控制平面(master)的版本相匹配。 如果不这样做,则存在发生版本偏差的风险,可能会导致一些预料之外的错误和问题。 

不过,控制平面与kubelet之间相差一个次要版本是被支持的,但kubelet的版本不可以超过"API SERVER"的版本。例如,1.7.0版本的kubelet可以完全兼容1.8.0版本的"API SERVER",反之则不可以。


	2 配置软件源
cat  > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF


	3 查看kubeadm的版本(将来安装K8S时请保持所有组件版本一致!)
yum -y list kubeadm --showduplicates | sort -r


	4 安装kubeadm,kubelet,kubectl软件包
yum -y install kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0 


	当然,你也可以使用我下载好的软件包
tar xf supershy-kubeadm-kubelet-kubectl.tar.gz && yum -y localinstall kubeadm-kubelet-kubectl/*.rpm
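
	温馨提示:
	安装完成后建议确认三个组件版本一致(参考):
kubeadm version -o short
kubelet --version
kubectl version --client --short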


	5 启动kubelet服务(若此时服务启动失败属于正常现象,因为缺失配置文件其会不断自动重启,初始化集群后即可恢复!)
systemctl enable --now kubelet
systemctl status kubelet


	6  配置docker的vpn
[root@worker233 ~]# mkdir -p /etc/systemd/system/docker.service.d/

[root@worker233 ~]# cat >/etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="ALL_PROXY=socks:////192.168.1.10:10808"
Environment="HTTP_PROXY=http://192.168.1.10:10809"
Environment="HTTPS_PROXY=http://192.168.1.10:10809"
Environment="NO_PROXY=your-registry.com,10.10.10.10,*.example.com"
EOF


	7  重启docker
systemctl daemon-reload
systemctl restart docker

	8  重启后确认代理的drop-in配置是否生效
[root@worker233 ~]# systemctl cat docker


参考链接:
	https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/



3. master节点初始化

- 初始化master节点
	0.彩蛋:
	批量导出master镜像:
[root@master231 ~]# docker save `docker images | awk 'NR>1{print $1":"$2}'` -o supershy-master-v1.23.17.tar.gz


	导入master镜像
[root@master231 ~]# docker load -i supershy-master-v1.23.17.tar.gz 



	1 使用kubeadm初始化master节点
[root@master231 ~]# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers  --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16  --service-dns-domain=supershy.com


相关参数说明:
	--kubernetes-version:
		指定K8S master组件的版本号。
		
	--image-repository:
		指定下载k8s master组件的镜像仓库地址。
		
	--pod-network-cidr:
		指定Pod的网段地址。
		
	--service-cidr:
		指定SVC的网段

	--service-dns-domain:
		指定service的域名。若不指定,默认为"cluster.local"。
		

使用kubeadm初始化集群时,可能会出现如下的输出信息:
[init] 
	使用初始化的K8S版本。
	
[preflight] 
	主要是做安装K8S集群的前置工作,比如下载镜像,这个时间取决于你的网速。

[certs] 
	生成证书文件,默认存储在"/etc/kubernetes/pki"目录哟。

[kubeconfig]
	生成K8S集群的默认配置文件,默认存储在"/etc/kubernetes"目录哟。

[kubelet-start] 
	启动kubelet,
    环境变量默认写入:"/var/lib/kubelet/kubeadm-flags.env"
    配置文件默认写入:"/var/lib/kubelet/config.yaml"

[control-plane]
	使用静态的目录,默认的资源清单存放在:"/etc/kubernetes/manifests"。
	此过程会创建静态Pod,包括"kube-apiserver"、"kube-controller-manager"、"kube-scheduler"。

[etcd] 
	创建etcd的静态Pod,默认的资源清单存放在:"/etc/kubernetes/manifests"
	
[wait-control-plane] 
	等待kubelet从资源清单目录"/etc/kubernetes/manifests"启动静态Pod。

[apiclient]
	等待所有的master组件正常运行。
	
[upload-config] 
	创建名为"kubeadm-config"的ConfigMap在"kube-system"名称空间中。
	
[kubelet] 
	创建名为"kubelet-config-1.22"的ConfigMap在"kube-system"名称空间中,其中包含集群中kubelet的配置

[upload-certs] 
	跳过此阶段,详情请参考"--upload-certs"
	
[mark-control-plane]
	标记控制平面节点,包括打标签和打污点,目的是为了标记master节点。
	
[bootstrap-token] 
	创建token口令,例如:"kbkgsa.fc97518diw8bdqid"。
	这个口令将来在加入集群节点时很有用,而且对于RBAC控制也很有用处哟。

[kubelet-finalize] 
	更新kubelet的证书文件信息

[addons] 
	添加附加组件,例如:"CoreDNS"、"kube-proxy"
	
...


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.231:6443 --token juanog.fbl440fv8rzp4d2q \
	--discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113 

	
	2 拷贝授权文件,用于管理K8S集群
[root@master231 ~]# mkdir -p $HOME/.kube
[root@master231 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master231 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config



	3 查看集群节点
[root@master231 ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE    VERSION
master231   NotReady   control-plane,master   6m7s   v1.23.17
[root@master231 ~]# 
[root@master231 ~]# 
[root@master231 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
[root@master231 ~]# 

4. worker节点加入集群

	0.彩蛋:
	批量导出worker节点所需镜像:
[root@worker232 ~]# docker save `docker images | awk 'NR>1{print $1":"$2}'` -o supershy-worker-v1.23.17.tar.gz
[root@worker232 ~]# 



	导入worker节点所需镜像
[root@worker233 ~]# docker load -i supershy-worker-v1.23.17.tar.gz 

	
	1.将worker节点加入集群,此处复制出你自己的节点的token,在初始化master节点时会有提示哟。
[root@worker232 ~]# kubeadm join 10.0.0.231:6443 --token juanog.fbl440fv8rzp4d2q \
	--discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113 
	
[root@worker233 ~]# kubeadm join 10.0.0.231:6443 --token juanog.fbl440fv8rzp4d2q \
	--discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113 
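
	温馨提示:
	kubeadm生成的token默认24小时过期,如果token过期或忘记保存,可以在master节点重新生成加入命令(参考):
[root@master231 ~]# kubeadm token create --print-join-command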
	
	
	
	2.master节点查看集群节点 
[root@master231 ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
master231   NotReady   control-plane,master   13m     v1.23.17
worker232   NotReady   <none>                 4m58s   v1.23.17
worker233   NotReady   <none>                 26s     v1.23.17
[root@master231 ~]# 



5. 初始化网络组件flannel

  • k8s本身不提供跨主机互联,而是声明了CNI(Container Network Interface)规范,凡是遵循CNI规范的插件都可以被k8s用作网络基础组件

  • flannel.1:实现跨主机互联

  • cni0:实现同主机内Pod间互联

- 初始化网络组件
	1 查看现有的网络插件
 

推荐阅读:
	https://kubernetes.io/docs/concepts/cluster-administration/addons/


	2 下载flannel资源清单文件
[root@master231 ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml


	3 修改flannel的配置文件
[root@master231 ~]# grep 16 kube-flannel.yml 
      "Network": "10.244.0.0/16",
[root@master231 ~]# 
[root@master231 ~]# sed -i 's#10.244#10.100#' kube-flannel.yml 
[root@master231 ~]# 
[root@master231 ~]# grep 16 kube-flannel.yml 
      "Network": "10.100.0.0/16",
[root@master231 ~]# 



因为我们在初始化K8S集群的时候,修改了Pod的网段。因此,这里也需要做相应修改哟。

	4 部署flannel组件
[root@master231 ~]# kubectl apply -f kube-flannel.yml 


	5 验证flannel组件是否正常工作
[root@master231 ~]# kubectl get pods -A -o wide | grep flannel
kube-flannel   kube-flannel-ds-44b2l               0/1     Init:0/2   0          7s    10.0.0.232   worker232   <none>           <none>
kube-flannel   kube-flannel-ds-l2vbm               0/1     Init:0/2   0          7s    10.0.0.233   worker233   <none>           <none>
kube-flannel   kube-flannel-ds-rv487               0/1     Init:0/2   0          7s    10.0.0.231   master231   <none>           <none>
[root@master231 ~]# 

[root@master231 ~]# kubectl get pods -A -o wide | grep flannel
kube-flannel   kube-flannel-ds-44b2l               1/1     Running   0          5m50s   10.0.0.232   worker232   <none>           <none>
kube-flannel   kube-flannel-ds-l2vbm               1/1     Running   0          5m50s   10.0.0.233   worker233   <none>           <none>
kube-flannel   kube-flannel-ds-rv487               1/1     Running   0          5m50s   10.0.0.231   master231   <none>           <none>


	6.各节点验证是否有flannel.1设备
[root@master231 ~]# ifconfig  flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::3853:4eff:fe31:1ccb  prefixlen 64  scopeid 0x20<link>
        ether 3a:53:4e:31:1c:cb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@master231 ~]# 


[root@worker232 ~]# ifconfig  flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::e886:7ff:fe7b:6813  prefixlen 64  scopeid 0x20<link>
        ether ea:86:07:7b:68:13  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@worker232 ~]# 


[root@worker233 ~]# ifconfig  flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.2.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::60c3:70ff:fee0:30b0  prefixlen 64  scopeid 0x20<link>
        ether 62:c3:70:e0:30:b0  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

...

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.2.1  netmask 255.255.255.0  broadcast 10.100.2.255
        inet6 fe80::e45a:bbff:fe7f:b7ef  prefixlen 64  scopeid 0x20<link>
        ether e6:5a:bb:7f:b7:ef  txqueuelen 1000  (Ethernet)
        RX packets 777  bytes 64788 (63.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 776  bytes 96116 (93.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[root@worker233 ~]# 



- 如果集群初始化失败可以重置
kubeadm reset -f



6. 自动补全工具

- 自动补全功能-新手必备
 
yum -y install bash-completion

kubectl completion bash > ~/.kube/completion.bash.inc
echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile
source $HOME/.bash_profile

cni0网卡未自动创建解决

  • flannel安装后,会自动创建两块网卡:flannel.1,cni0
- 手动创建cni0网卡
---> 假设 master231的flannel.1是10.100.0.0网段。
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.0.1/24 dev cni0


---> 假设 worker232的flannel.1是10.100.1.0网段。
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.1.1/24 dev cni0

Pod日志(面试题)

    1. 如何查看十分钟以内的日志
    2. 如何查看Pod的上一个容器的日志
kubectl logs 
	-c					#指定pod中的容器
	-f					#实时查看
	-p					#查看pod中容器上一次退出前(previous)的日志,已删除的Pod无法查看
	--since=5m			#查看时间段之内的日志,s 秒,m 分钟,h 小时
	--since-time		#查看某时间内(时间戳)的日志
	--timestamps=true	#查看日志时显示每条数据时间戳
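
	针对上面两道面试题,参考命令如下(pod与容器名仅为示例):
kubectl logs --since=10m supershy-linux89-labels-01
kubectl logs -p supershy-linux89-labels-01 -c c1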

cp-pod的数据拷贝

	#拷贝宿主机文件到容器
[root@master231 /tmp]# kubectl cp test.txt supershy-linux89-labels-01:/


	#拷贝容器内文件到宿主机,目标必须指定为不存在的文件名
[root@master231 /tmp]# kubectl cp supershy-linux89-labels-01:/etc/nginx/conf.d/default.conf ./default.conf
tar: removing leading '/' from member names

	#拷贝容器内目录到宿主机,目标必须指定为不存在的目录名
[root@master231 /tmp]# kubectl cp  supershy-linux89-labels-01:/etc/nginx/conf.d ./conf.d
tar: removing leading '/' from member names
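
	温馨提示:
	kubectl cp底层依赖容器内的tar命令,如果镜像里没有tar,拷贝会报错,此时可以改用kubectl exec配合重定向(参考写法):
[root@master231 /tmp]# kubectl exec supershy-linux89-labels-01 -- cat /etc/nginx/conf.d/default.conf > ./default.conf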

故障排查

  • 故障排查案例1-资源名称不能大写
The Pod "supershy-linux89-Troubleshooting" is invalid: metadata.name: Invalid value: "supershy-linux89-Troubleshooting": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
  • 故障排查案例2-nodeName指定的节点必须在etcd中有记录,否则会处于Pending状态。
[root@master231 /supershy/manifests/pods]# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP       NODE         NOMINATED NODE   READINESS GATES
supershy-linux89-troubleshooting   0/1     Pending   0          15s   <none>   10.0.0.232   <none>           <none>
[root@master231 /supershy/manifests/pods]# cat 04-pods-troubleshooting.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: supershy-linux89-troubleshooting
spec:
  nodeName: 10.0.0.232		#不能使用ip,未在etcd中声明,使用worker232
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-linux/alpine:lastest
    stdin: true

  • 故障排查案例3-容器内没有前台阻塞进程时,容器会退出并被自动重启:状态会变为"CrashLoopBackOff"或"Completed"

  • 故障排查案例4-字段写错了导致无法识别。

[root@master231 /supershy/manifests/pods]# kubectl apply -f 04-pods-troubleshooting.yaml 
error: error validating "04-pods-troubleshooting.yaml": error validating data: ValidationError(Pod.spec.containers[0]): unknown field "agrs" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
  • 故障排查案例5- 镜像不存在:ImagePullBackOff
[root@master231 /supershy/manifests/pods]# kubectl get pods -o wide
NAME                                READY   STATUS             RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
supershy-linux89-troubleshooting   0/1     ImagePullBackOff   0          2m31s   10.100.1.9   worker232   <none>           <none>


[root@master231 /supershy/manifests/pods]# kubectl describe pod supershy-linux89-troubleshooting 
...
Events:
  Type     Reason   Age                From     Message
  ----     ------   ----               ----     -------
  Normal   BackOff  21s (x2 over 47s)  kubelet  Back-off pulling image "harbor.supershy.com/supershy-linux/alpine:lastest"
  Warning  Failed   21s (x2 over 47s)  kubelet  Error: ImagePullBackOff
  Normal   Pulling  9s (x3 over 48s)   kubelet  Pulling image "harbor.supershy.com/supershy-linux/alpine:lastest"
  Warning  Failed   9s (x3 over 48s)   kubelet  Failed to pull image "harbor.supershy.com/supershy-linux/alpine:lastest": rpc error: code = Unknown desc = Error response from daemon: unknown: artifact supershy-linux/alpine:lastest not found
  Warning  Failed   9s (x3 over 48s)   kubelet  Error: ErrImagePull

  • 故障排查案例6- 使用describe查看容器的详细信息
[root@master231 /supershy/manifests/pods]# kubectl describe pod supershy-linux89-troubleshooting 
...
Events:
  Type     Reason   Age                From     Message
  ----     ------   ----               ----     -------
  Normal   BackOff  21s (x2 over 47s)  kubelet  Back-off pulling image "harbor.supershy.com/supershy-linux/alpine:lastest"
  Warning  Failed   21s (x2 over 47s)  kubelet  Error: ImagePullBackOff
  Normal   Pulling  9s (x3 over 48s)   kubelet  Pulling image "harbor.supershy.com/supershy-linux/alpine:lastest"
  Warning  Failed   9s (x3 over 48s)   kubelet  Failed to pull image "harbor.supershy.com/supershy-linux/alpine:lastest": rpc error: code = Unknown desc = Error response from daemon: unknown: artifact supershy-linux/alpine:lastest not found
  Warning  Failed   9s (x3 over 48s)   kubelet  Error: ErrImagePull
  • 故障排查案例7- 未配置镜像仓库密码认证
Warning  Failed     3s (x2 over 16s)  kubelet            Failed to pull image "harbor.supershy.com/web/nginx:1.20.1-alpine": rpc error: code = Unknown desc = Error response from daemon: unauthorized: unauthorized to access repository: web/nginx, action: pull: unauthorized to access repository: web/nginx, action: pull
[root@master231 /supershy/manifests/secrets]# 


[root@master231 /supershy/manifests/secrets]# kubectl create secret docker-registry supershy-harbor --docker-username=admin --docker-password=1 --docker-email=admin@123.com --docker-server=harbor.supershy.com
  secret/supershy-harbor created


[root@master231 /supershy/manifests/secrets]# cat 03-secrets-harbor.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-secrets-harbor
spec:
  imagePullSecrets:
  - name: supershy-harbor
  containers:
  - name: c1
    image: harbor.supershy.com/web/nginx:1.20.1-alpine
    imagePullPolicy: Always
    

[root@master231 /supershy/manifests/secrets]# kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
supershy-linux89-secrets-harbor   1/1     Running   0          16s   10.100.2.30   worker233   <none>           <none>

  • 故障排查案例8- didn't tolerate 节点设置了taint,资源清单未配置容忍
[root@master231 /supershy/manifests/diy]# kubectl get po -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
mysql-deployment-d65fc485f-5ljfd   0/1     Pending   0          2m23s   <none>         <none>      <none>           <none>

[root@master231 /supershy/manifests/diy]# kubectl describe po mysql-deployment-d65fc485f-5ljfd 
...
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  27s   default-scheduler  0/3 nodes are available: 3 node(s) had taint {www.supershy.com/class: linux89}, that the pod didn't tolerate.
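
	一种可能的处理方式:要么在资源清单中为Pod添加对应的tolerations,要么直接删除节点上的污点(以下命令基于上面事件里的污点键,仅供参考):
[root@master231 /supershy/manifests/diy]# kubectl taint nodes --all www.supershy.com/class-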

  • 故障排查案例9- 命名空间做了资源限制,创建pod总数的需求资源超过限制
[root@master231 /supershy/manifests/resourcequotas]# kubectl -n kube-public describe rs supershy-stress-d6d7d76
...
Warning  FailedCreate      53s                replicaset-controller  Error creating: pods "supershy-stress-d6d7d76-5c8n4" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=1,requests.cpu=500m, used: limits.cpu=2,requests.cpu=1, limited: limits.cpu=2,requests.cpu=1
...

[root@master231 /supershy/manifests/limitRange]#kubectl -n kube-public describe po pods-03
...
  Warning  FailedScheduling  57s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu, 2 Insufficient memory.
  • 故障排查案例10-master节点初始化时无法连接apiserver
[root@k8s-master01 ~]# kubeadm init --config kubeadm-init.yaml  --upload-certs
...
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
^C

[root@k8s-master01 ~]# tail -100f /var/log/messages
...
Sep  9 19:45:48 k8s-master01 kubelet: E0909 19:45:48.233975   32566 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://api-server:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp: lookup api-server on 223.5.5.5:53: no such host
Sep  9 19:45:48 k8s-master01 kubelet: I0909 19:45:48.836182   32566 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master01"
Sep  9 19:45:48 k8s-master01 kubelet: E0909 19:45:48.913144   32566 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://api-server:8443/api/v1/nodes\": dial tcp: lookup api-server on 223.5.5.5:53: no such host" node="k8s-master01"
Sep  9 19:45:49 k8s-master01 containerd: time="2024-09-09T19:45:49.637071422+08:00" level=info msg="shim disconnected" id=5730d39e6f25eef595a4d79c8ceeb057f33d4eb7835933637b6b801788d43220
Sep  9 19:45:49 k8s-master01 containerd: time="2024-09-09T19:45:49.637116814+08:00" level=warning msg="cleaning up after shim disconnected" id=5730d39e6f25eef595a4d79c8ceeb057f33d4eb7835933637b6b801788d43220 namespace=k8s.io
Sep  9 19:45:49 k8s-master01 containerd: time="2024-09-09T19:45:49.637123851+08:00" level=info msg="cleaning up dead shim"
Sep  9 19:45:49 k8s-master01 containerd: time="2024-09-09T19:45:49.645418252+08:00" level=warning msg="cleanup warnings time=\"2024-09-09T19:45:49+08:00\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=33073 runtime=io.containerd.runc.v2\n"


	# 排查,这里我在kubeadm-init文件中配置的https://api-server:8443
	# 实际集群的apiserver并没有配置以api-server为主机名的地址,而使用的本地8443
[root@k8s-master01 ~]# curl -k https://api-server:8443
curl: (6) Could not resolve host: api-server; Unknown error
[root@k8s-master01 ~]# telnet api-server 8443.
telnet: 8443.: bad port
[root@k8s-master01 ~]# telnet api-server 8443
telnet: api-server: Name or service not known
api-server: Unknown host
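
	一种可能的解决思路:把api-server解析到apiserver实际所在的地址(下面的IP取自本例的k8s-master01),或者把kubeadm-init.yaml里的地址改回可解析的地址,之后kubeadm reset再重新初始化:
[root@k8s-master01 ~]# echo "10.0.0.241 api-server" >> /etc/hosts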

  • 故障排查案例11-flannel插件初始化后一直重启(CrashLoopBackOff),而node还处于Ready状态
    • 错误原因:你在kube-flannel.yml中新加了内容,格式不对,检查格式
[root@k8s-master01 ~]# kubectl get po -o wide -A
NAMESPACE      NAME                                   READY   STATUS              RESTARTS         AGE   IP           NODE           NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-48bw6                  0/1     CrashLoopBackOff    6 (5m7s ago)     20m   10.0.0.242   k8s-master02   <none>           <none>
kube-flannel   kube-flannel-ds-dmrjc                  0/1     CrashLoopBackOff    12 (3m42s ago)   20m   10.0.0.241   k8s-master01   <none>           <none>
kube-system    coredns-66f779496c-54kq4               0/1     ContainerCreating   0                64m   <none>       k8s-master01   <none>           <none>
kube-system    coredns-66f779496c-dvh6l               0/1     ContainerCreating   0                64m   <none>       k8s-master01   <none>           <none>
kube-system    etcd-k8s-master01                      1/1     Running             3                65m   10.0.0.241   k8s-master01   <none>           <none>
kube-system    etcd-k8s-master02                      1/1     Running             0                35m   10.0.0.242   k8s-master02   <none>           <none>
kube-system    kube-apiserver-k8s-master01            1/1     Running             0                65m   10.0.0.241   k8s-master01   <none>           <none>
kube-system    kube-apiserver-k8s-master02            1/1     Running             0                35m   10.0.0.242   k8s-master02   <none>           <none>
kube-system    kube-controller-manager-k8s-master01   1/1     Running             4 (35m ago)      65m   10.0.0.241   k8s-master01   <none>           <none>
kube-system    kube-controller-manager-k8s-master02   1/1     Running             0                35m   10.0.0.242   k8s-master02   <none>           <none>
kube-system    kube-proxy-6c2jp                       1/1     Running             0                64m   10.0.0.241   k8s-master01   <none>           <none>
kube-system    kube-proxy-jcrbf                       1/1     Running             0                35m   10.0.0.242   k8s-master02   <none>           <none>
kube-system    kube-scheduler-k8s-master01            1/1     Running             4 (35m ago)      65m   10.0.0.241   k8s-master01   <none>           <none>
kube-system    kube-scheduler-k8s-master02            1/1     Running             0                35m   10.0.0.242   k8s-master02   <none>           <none>

  • 故障排查案例12- containerd容器拉取镜像时报错
    • 解决:重启containerd
[root@k8s-master03 ~]# ctr -n k8s.io images pull docker.io/flannel/flannel:v0.22.2
docker.io/flannel/flannel:v0.22.2:                                                resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:0ceadfb3394bc68b30588eaa7e4ee4b296cb5d43d3852b46dbb85d715d7f0622:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:434f93767d5801af5288ac094c26d64feeaeee5e2f286aa4b6a450f907a49647: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:6cf27f72f9d781e7b49dc7a2175b3c3254cc17bb4b81d4eaed67116723fbb797:    downloading    |--------------------------------------|    0.0 B/1.5 KiB   
config-sha256:49937eb983daff0a7296b62b5781e6fb34828a90a334544c6acee7ae02c2411f:   downloading    |--------------------------------------|    0.0 B/2.6 KiB   
layer-sha256:f56be85fc22e46face30e2c3de3f7fe7c15f8fd7c4e5add29d7f64b87abdaa09:    downloading    |--------------------------------------|    0.0 B/3.2 MiB   
layer-sha256:a85811ec4ef7248b50c03ac0b9bebc0607fbf7a7e51a8af5ed7d0d9355d514ff:    downloading    |--------------------------------------|    0.0 B/5.0 MiB   
layer-sha256:fe662e6f3f8b5b0bdde2f402c80586f433e4c6ef3de88efc8602098c1013ddc4:    downloading    |--------------------------------------|    0.0 B/4.7 MiB   
layer-sha256:254b571e61a329666819e4bb4c1d9e2b56e8e873284689a545f38bc8b1a29cb0:    downloading    |--------------------------------------|    0.0 B/935.5 KiB 
layer-sha256:3643064943e12001fd1ecee8a9a0843f7b0864d4656319a58c8ed76be33bc5d2:    downloading    |--------------------------------------|    0.0 B/12.0 MiB  
layer-sha256:7694190571a299a7e09dbc8530fcfd47d1b775682e0a7e66e4455a0fb6c7916d:    downloading    |--------------------------------------|    0.0 B/1.2 KiB   
layer-sha256:506197a4fd2783eacd82f4078697605dc4fdafd0676a6a0f64d8bac2792eff1f:    downloading    |--------------------------------------|    0.0 B/2.3 KiB   
elapsed: 9.3 s                                                                    total:  3.9 Ki (433.0 B/s)                                       
ctr: failed to copy: httpReadSeeker: failed open: failed to do request: Get "https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/50/506197a4fd2783eacd82f4078697605dc4fdafd0676a6a0f64d8bac2792eff1f/data?verify=1725890456-lMhLTRjqB%2BDyK3jrtwyvUWtLREQ%3D": read tcp 10.0.0.243:33790->104.16.97.215:443: read: connection reset by peer


- 重启containerd
[root@k8s-master03 ~]# systemctl restart containerd
[root@k8s-master03 ~]# ctr -n k8s.io images pull docker.io/flannel/flannel:v0.22.2
docker.io/flannel/flannel:v0.22.2:                                                resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:0ceadfb3394bc68b30588eaa7e4ee4b296cb5d43d3852b46dbb85d715d7f0622:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:434f93767d5801af5288ac094c26d64feeaeee5e2f286aa4b6a450f907a49647: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:6cf27f72f9d781e7b49dc7a2175b3c3254cc17bb4b81d4eaed67116723fbb797:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:49937eb983daff0a7296b62b5781e6fb34828a90a334544c6acee7ae02c2411f:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:f56be85fc22e46face30e2c3de3f7fe7c15f8fd7c4e5add29d7f64b87abdaa09:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:a85811ec4ef7248b50c03ac0b9bebc0607fbf7a7e51a8af5ed7d0d9355d514ff:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:fe662e6f3f8b5b0bdde2f402c80586f433e4c6ef3de88efc8602098c1013ddc4:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:254b571e61a329666819e4bb4c1d9e2b56e8e873284689a545f38bc8b1a29cb0:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:3643064943e12001fd1ecee8a9a0843f7b0864d4656319a58c8ed76be33bc5d2:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7694190571a299a7e09dbc8530fcfd47d1b775682e0a7e66e4455a0fb6c7916d:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:506197a4fd2783eacd82f4078697605dc4fdafd0676a6a0f64d8bac2792eff1f:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 3.3 s                                                                    total:   0.0 B (0.0 B/s)                                         
unpacking linux/amd64 sha256:0ceadfb3394bc68b30588eaa7e4ee4b296cb5d43d3852b46dbb85d715d7f0622...
done: 17.623829ms	
[root@k8s-master03 ~]# 

1. Pod资源清单

  • 查看资源清单指定字段的参数:kubectl explain po.spec.containers.ports
- 资源清单的组成
apiVersion:
	对应不同的API版本。

kind:
	资源的类型。

metadata:
	声明资源的元数据信息。

spec:
	用户期望的运行状态,例如Pod中要运行哪些容器以及如何运行。

status:
	资源的实际运行状态,由K8S组件自行维护。

1. 容器的类型

  • 基础架构容器,无需配置,k8s集群自行维护

    • pause:3.6:创建pod时会在工作节点自动创建pause:3.6容器,用于分配ip地址;当业务容器删除或重启后,保证ip地址不变;如果把pause:3.6这个容器删除,k8s也会自动将它重启,但是ip地址会发生改变
    • 作用:提供网络命名空间的初始化操作
  • 初始化容器:(可选配置,会优先于业务容器运行)

    • initContainers:初始化容器可以执行多个,当所有初始化容器执行完毕后才会执行业务容器
    • 作用:完成一些业务容器的初始化操作
  • 业务容器:(运行实际业务)

    • containers:运行一个或多个容器
[root@master231 /supershy/manifests/pods]# cat 05-pods-initContainers.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: supershy-linux89-multiple-pods
spec:
  nodeName: worker233
  initContainers:
  - name: init-1
    image: harbor.supershy.com/supershy-linux/alpine:latest
    command: ["sleep","20"]
  - name: init-2
    image: harbor.supershy.com/supershy-linux/alpine:latest
    command: ["sleep","50"]
  containers:
  - name: c1
    image: harbor.supershy.com/web/nginx:1.20.1-alpine

2. 容器的数据持久化

1) emptyDir

- 容器的数据持久化

什么是emptyDir:
	是一个临时存储卷,与Pod的生命周期绑定到一起,如果Pod被删除了,这意味着数据也被随之删除。
	
emptyDir作用:
	(1)可以实现持久化;
	(2)同一个Pod的多个容器可以实现数据共享,多个不同的Pod之间不能进行数据通信;
	(3)随着Pod的生命周期而存在,当我们删除Pod时,其数据也会被随之删除;
    
    
emptyDir的应用场景:
	(1)临时缓存空间,比如基于磁盘的归并排序;
	(2)为较耗时计算任务提供检查点,以便任务能方便的从崩溃前状态恢复执行;
	(3)存储Web访问日志及错误日志等信息;
	
	
emptyDir优缺点:
	优点:
		(1)可以实现同一个Pod内多个容器之间数据共享;
		(2)当Pod内的某个容器被强制删除时,数据并不会丢失,因为Pod没有删除;
	缺点:
		(1)当Pod被删除时,数据也会被随之删除;
		(2)不同的Pod之间无法实现数据共享;
	
	
参考链接:
	https://kubernetes.io/docs/concepts/storage/volumes#emptydir
	
	
温馨提示:
	1)启动pods后,使用emptyDir其数据存储在"/var/lib/kubelet/pods"路径下对应的POD_ID目录哟!
		/var/lib/kubelet/pods/${POD_ID}/volumes/kubernetes.io~empty-dir/


[root@master231 /supershy/manifests/pods]# cat 06-pods-nginx-volumes.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: supershy-linux89-multiple-pods
spec:
  nodeName: worker233
  volumes: 
  - name: data
    emptyDir: {}
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-linux/alpine:latest
    command:
    - tail
    args:
    - -f
    - /etc/hosts
    volumeMounts:
    - name: data
      mountPath: /supershy-data
  - name: c2
    image: harbor.supershy.com/web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
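
	可以按下面的方式验证同一个Pod内两个容器共享emptyDir(命令基于上面清单中的名称,仅供参考,预期第二条命令能读到第一条写入的内容):
[root@master231 /supershy/manifests/pods]# kubectl exec supershy-linux89-multiple-pods -c c1 -- sh -c 'echo emptyDir-test > /supershy-data/index.html'
[root@master231 /supershy/manifests/pods]# kubectl exec supershy-linux89-multiple-pods -c c2 -- cat /usr/share/nginx/html/index.html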

2) hostPath

- 基于hostPath数据卷实现容器访问宿主机指定路径
hostPath数据卷:
	挂载Node文件系统(Pod所在节点)上文件或者目录到Pod中的容器。如果Pod删除了,宿主机的数据并不会被删除。


应用场景:
	Pod中容器需要访问宿主机文件。


hostPath优缺点:
	优点:
		(1)可以实现同一个Pod不同容器之间的数据共享;
		(2)可以实现同一个Node节点不同Pod之间的数据共享;

	缺点:
		无法满足跨节点Pod之间的数据共享。

推荐阅读:
	https://kubernetes.io/docs/concepts/storage/volumes/#hostpath


参考案例:

[root@master231 pods]# cat 21-volumes-hostPath.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-volumes-hostpath-01
spec:
  nodeName: worker232
  volumes:
  - name: data
    # 指定存储卷的类型是hostPath
    hostPath:
      # 指定宿主机的存储路径
      path: /supershy-data
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
[root@master231 pods]# 
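
	可以在worker232上直接写入页面文件,再从Pod里读取,验证hostPath挂载(仅供参考):
[root@worker232 ~]# echo hostPath-test > /supershy-data/index.html
[root@master231 pods]# kubectl exec supershy-linux89-volumes-hostpath-01 -- cat /usr/share/nginx/html/index.html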

3) nfs

- 基于nfs服务器实现跨主机节点的数据共享
NFS数据卷:
	提供对NFS挂载支持,可以自动将NFS共享路径挂载到Pod中。


NFS:
	英文全称为"Network File System"(网络文件系统),是由SUN公司研制的UNIX表示层协议(presentation layer protocol),能使使用者访问网络上别处的文件就像在使用自己的计算机一样。
	NFS是一个主流的文件共享服务器,但存在单点故障,我们需要对数据进行备份哟,如果有必要可以使用分布式文件系统哈。


推荐阅读:
	https://kubernetes.io/docs/concepts/storage/volumes/#nfs
	

- 部署nfs server
	1 所有节点安装nfs相关软件包
yum -y install nfs-utils

	2 master231节点设置共享目录(即nfs服务端)
mkdir -pv /supershy/data/kubernetes
cat > /etc/exports <<'EOF'
/supershy/data/kubernetes *(rw,no_root_squash)
EOF

	3 配置nfs服务开机自启动
systemctl enable --now nfs

	4 服务端检查NFS挂载信息 
exportfs

	5 客户端节点手动挂载测试
mount -t nfs master231:/supershy/data/kubernetes /mnt/
umount /mnt 


- 配置pod实现nfs数据共享
[root@master231 pods]# cat 22-volumes-nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-volumes-nfs-01
spec:
  nodeName: worker232
  volumes:
  - name: data
    # 指定存储卷的类型为nfs
    nfs:
      # 指定nfs服务器地址
      server: master231
      # nfs共享的数据目录
      path: /supershy/data/kubernetes
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html

---

apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-volumes-nfs-02
spec:
  nodeName: worker233
  volumes:
  - name: data
    nfs:
      server: 10.0.0.231
      path: /supershy/data/kubernetes
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
[root@master231 pods]# 
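
	可以在nfs共享目录写入一个页面,再分别curl两个Pod验证跨节点的数据共享(仅供参考,Pod IP以kubectl get pods -o wide的实际输出为准):
[root@master231 pods]# echo nfs-test > /supershy/data/kubernetes/index.html
[root@master231 pods]# kubectl get pods -o wide
[root@master231 pods]# curl <POD_IP>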

3. ports端口映射

3.1 使用宿主机端口映射ports

[root@master231 /supershy/manifests/pods]# cat 08-pods-games-ports.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: supershy-linux89-games
spec:
  nodeName: worker232
  containers:
  - name: c1
    image: harbor.supershy.com/games/supershy-games:v1
    ports: 
    - containerPort: 80
      hostPort: 2480
      protocol: TCP
      name: nginx
    - containerPort: 22
      hostPort: 2422
      protocol: TCP
      name: ssh
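
	由于配置了hostPort映射,可以直接通过worker232的宿主机端口访问容器服务(仅供参考):
[root@master231 /supershy/manifests/pods]# curl http://10.0.0.232:2480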

3.2 使用宿主机网络hostNetwork

  • 要避免和宿主机端口冲突的问题
  • 使用该字段,则容器直接使用宿主机的网络名称空间,效率是最高的
  • 但使用时,一定要注意,Pod的容器服务的端口不要和宿主机的端口冲突,否则Pod的容器无法正常启动
[root@master231 /supershy/manifests/pods]# cat 09-pods-nginx-hostNetwork.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: supershy-linux89-nginx
spec:
  nodeName: worker232
  hostNetwork: true
  containers:
  - name: c1
    image: harbor.supershy.com/web/nginx:1.20.1-alpine

4. env 容器环境变量传递

1) MySQL容器环境变量传递

[root@master231 /supershy/manifests/pods]# cat 10-pods-mysql-env.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: supershy-linux89-mysql-env
spec:
  nodeName: worker232
  containers:
  - name: c1
    image: harbor.supershy.com/db/mysql:8.0-oracle
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123"
   # - name: MYSQL_ALLOW_EMPTY_PASSWORD
   #   value: "yes"
   # - name: MYSQL_RANDOM_ROOT_PASSWORD
   #   value: "yes"


[root@master231 /supershy/manifests/pods]# kubectl exec -it supershy-linux89-mysql-env -- mysql -uroot -p123

2) wordpress实现环境变量传递

[root@master231 /supershy/manifests/pods]# cat 11-pods-wordpress.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: supershy-linux89-wordpress
spec:
  nodeName: worker232
  hostNetwork: true
  volumes:
  - name: db
    emptyDir: {}
  - name: wp
    emptyDir: {}
  containers:
  - name: mysql
    image: harbor.supershy.com/db/mysql:8.0-oracle
    env:
    - name: MYSQL_ALLOW_EMPTY_PASSWORD
      value: "yes"
    - name: MYSQL_DATABASE
      value: "wordpress"
    - name: MYSQL_USER
      value: "admin"
    - name: MYSQL_PASSWORD
      value: "123"
    ports: 
    - containerPort: 3306
    volumeMounts:
    - name: db
      mountPath: /var/lib/mysql
  - name: wordpress
    image: harbor.supershy.com/wordpress/wordpress:php8.1-apache
    env:
    - name: WORDPRESS_DB_HOST
      value: "127.0.0.1"
    - name: WORDPRESS_DB_USER
      value: "admin"
    - name: WORDPRESS_DB_PASSWORD
      value: "123"
    ports: 
    - containerPort: 80
    volumeMounts:
    - name: wp
      mountPath: /var/www/html/wp-content/uploads

5. imagePullPolicy 镜像下载策略(面试题)

[root@master231 pods]# cat 12-pods-imagePullPolicy.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-imagepullpolicy-09
spec:
  nodeName: worker233
  containers:
  - name: c1
    # image: harbor.supershy.com/supershy-imagepullpolicy/apps:v1
    # image: harbor.supershy.com/supershy-imagepullpolicy/apps
    image: harbor.supershy.com/supershy-imagepullpolicy/apps:latest
    # 指定容器镜像的拉取策略,有效值: Always, Never, IfNotPresent
    #   Always:
    #     无论本地是否有镜像,都会去远程仓库获取镜像的摘要信息(可以理解为镜像的md5值)。
    #     如果摘要值信息对比一致。说明本地镜像和远程仓库镜像是一致,于是就会使用本地镜像。
    #     如果摘要值信息对比不一致,说明本地的镜像和远程仓库镜像不一致,则会拉取远程仓库的镜像覆盖本地镜像。
    #   IfNotPresent:
    #     如果本地有镜像则尝试启动镜像,如果本地没有镜像则拉取镜像。
    #   Never:
    #     如果本地有镜像则尝试启动镜像,如果本地没有镜像则不拉取镜像
    #
    # 关于默认的镜像下载策略:
    #    - 如果镜像的tag非":latest",则默认的镜像策略为"IfNotPresent"。
    #    - 如果镜像的tag是":latest",则默认的镜像策略为"Always"。
    # imagePullPolicy: Never
    # imagePullPolicy: IfNotPresent
    # imagePullPolicy: Always

6. restartPolicy 重启策略

- Pod的容器重启策略
[root@master231 pods]# cat 14-pods-RestartPolicy.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-restartpolicy-05
spec:
  nodeName: worker232
  # 容器的重启策略,有效值为: Always, OnFailure,Never
  #   Always:
  #     当容器退出时,始终重启容器。
  #   OnFailure:
  #     当容器异常退出时,才会去重启容器。
  #   Never:
  #     当容器退出时,始终不重启容器。
  #
  #  温馨提示: 重启策略同一个Pod内的所有容器生效,默认值为Always。
  # restartPolicy: Always
  # restartPolicy: Never
  restartPolicy: OnFailure
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-imagepullpolicy/apps:v1
    imagePullPolicy: IfNotPresent
    command: ["sleep","30"]
[root@master231 pods]# 

7. resources 资源限制

- 容器的资源限制
[root@master231 pods]# cat 16-pods-resources.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-resources-05
spec:
  restartPolicy: Always
  containers:
  - name: c1
    image: jasonyin2020/supershy-linux-tools:v0.1
    imagePullPolicy: IfNotPresent
    command:
    - tail
    - -f
    - /etc/hosts
    # 对容器做资源限制
    resources:
      # 容器运行的期望资源,不会立即使用这些资源,但节点必须满足Pod调度的期望.
      requests:
        # 指定CPU的核心数,1core=1000m
        # cpu: 4
        cpu: 200m
        memory: 200Mi
      # 容器运行的资源上限,
      limits:
        cpu: 0.5
        memory: 500Mi
[root@master231 pods]# 
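
	可以通过describe查看Pod上生效的requests和limits(仅供参考):
[root@master231 pods]# kubectl describe pod supershy-linux89-resources-05 | grep -A 6 Limits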

8. 健康检查探针

0) 常用探针

- 常用的探针(Probe):
	livenessProbe:
		健康状态检查,周期性检查服务是否存活,检查结果失败,将"重启"容器(删除源容器并重新创建新容器)。
		如果容器没有提供健康状态检查,则默认状态为Success。
		
	readinessProbe:
		可用性检查,周期性检查服务是否可用,从而判断容器是否就绪。
		若检测Pod服务不可用,则会将Pod从svc的ep列表中移除。
		若检测Pod服务可用,则会将Pod重新添加到svc的ep列表中。
		如果容器没有提供可用性检查,则默认状态为Success。
		
	startupProbe: (1.16+之后的版本才支持)
		如果提供了启动探针,则所有其他探针都会被禁用,直到此探针成功为止。
		如果启动探测失败,kubelet将杀死容器,而容器依其重启策略进行重启。 
		如果容器没有提供启动探测,则默认状态为 Success。


探针(Probe)检测Pod服务方法:
	exec:
		执行一段命令,根据返回值判断执行结果。返回值为0或非0,有点类似于"echo $?"。
		
	httpGet:
		发起HTTP请求,根据返回的状态码来判断服务是否正常。
			200: 请求成功
			301: 永久跳转
			302: 临时跳转
			401: 验证失败
			403: 权限被拒绝
			404: 文件找不到
			413: 文件上传过大
			500: 服务器内部错误
			502: 网关错误(后端返回无效响应)
			504: 后端应用网关响应超时
			...
			
	tcpSocket:
    	测试某个TCP端口是否能够连接,类似于telnet,nc等测试工具。
    	
    grpc:
		K8S 1.23引入(alpha),目前属于测试阶段。
    	
参考链接:
	https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe

1) livenessProbe-httpGet

- 健康检查探针livenessProbe之httpGet
[root@master231 pods]# cat 14-livenessProbe-httpGet.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-livenessprobe-httpget
spec:
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    # 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
    livenessProbe:
      # 使用httpGet的方式去做健康检查
      httpGet:
        # 指定访问的端口号
        port: 80
        # 检测指定的url访问路径
        path: /index.html
      # 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
      failureThreshold: 3	
      # 指定多久之后进行健康状态检查,即此时间段内检测服务失败并不会对failureThreshold进行计数。
      initialDelaySeconds: 15
      # 指定探针检测的频率,默认是10s,最小值为1.
      periodSeconds: 1
      # 检测服务成功次数的累加值,默认值为1次,最小值1.
      successThreshold: 1
      # 一次检测周期超时的秒数,默认值是1秒,最小值为1.
      timeoutSeconds: 1
[root@master231 pods]# 
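
	可以手动删除容器内的index.html,模拟httpGet探测失败,随后观察RESTARTS计数增加(仅供参考):
[root@master231 pods]# kubectl exec supershy-linux89-livenessprobe-httpget -- rm -f /usr/share/nginx/html/index.html
[root@master231 pods]# kubectl get pods supershy-linux89-livenessprobe-httpget -w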

2) livenessProbe-exec

- 健康检查探针livenessProbe之exec
[root@master231 pods]# cat 15-livenessProbe-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-livenessprobe-exec
spec:
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    command: 
     - /bin/sh
     - -c
     - touch /tmp/supershy-linux89-healthy; sleep 5; rm -f /tmp/supershy-linux89-healthy; sleep 600
    # 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
    livenessProbe:
      # 使用exec的方式去做健康检查
      exec:
        # 自定义检查的命令
        command:
        - cat
        - /tmp/supershy-linux89-healthy
      # 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
      failureThreshold: 3
      # 指定多久之后进行健康状态检查,即此时间段内检测服务失败并不会对failureThreshold进行计数。
      initialDelaySeconds: 15
      # 指定探针检测的频率,默认是10s,最小值为1.
      periodSeconds: 1
      # 检测服务成功次数的累加值,默认值为1次,最小值1.
      successThreshold: 1
      # 一次检测周期超时的秒数,默认值是1秒,最小值为1.
      timeoutSeconds: 1

[root@master231 pods]# 

3) livenessProbe-tcpSocket

- 健康检查探针livenessProbe之tcpsocket
[root@master231 pods]# cat 16-livenessProbe-tcpsocket.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-livenessprobe-tcpsocket
spec:
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    command:
    - /bin/sh
    - -c
    - nginx ; sleep 10; nginx -s stop ; sleep 600
    # 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
    livenessProbe:
      # 使用tcpSocket的方式去做健康检查
      tcpSocket:
        port: 80
      # 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
      failureThreshold: 3
      # 指定多久之后进行健康状态检查,即此时间段内检测服务失败并不会对failureThreshold进行计数。
      initialDelaySeconds: 15
      # 指定探针检测的频率,默认是10s,最小值为1.
      periodSeconds: 1
      # 检测服务成功次数的累加值,默认值为1次,最小值1.
      successThreshold: 1
      # 一次检测周期超时的秒数,默认值是1秒,最小值为1.
      timeoutSeconds: 1
[root@master231 pods]# 

4) readinessProbe-exec

- readinessProbe就绪可用性检查探针之exec
[root@master231 pods]# cat 17-readinessProbe-exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-readinessprobe-01
spec:
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    command: 
    - /bin/sh
    - -c
    - touch /tmp/supershy-linux89-healthy; sleep 5; rm -f /tmp/supershy-linux89-healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/supershy-linux89-healthy
      failureThreshold: 3
      initialDelaySeconds: 65
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1
    # 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
    readinessProbe:
      # 使用exec的方式去做健康检查
      exec:
        # 自定义检查的命令
        command:
        - cat
        - /tmp/supershy-linux89-healthy
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1

[root@master231 pods]# #不会立即进入READY状态
[root@master231 /supershy/manifests/pods]# kubectl get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
supershy-linux89-readinessprobe-01   0/1     Running   0          6s    10.100.1.48   worker232   <none>           <none>

5) readinessProbe-httpGet

- readinessProbe可用性检查探针之httpGet

[root@master231 pods]# cat 18-readinessProbe-httpGet.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-readinessprobe-httpget
spec:
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    command: 
    - /bin/sh
    - -c
    - touch /tmp/supershy-linux89-healthy; sleep 5; rm -f /tmp/supershy-linux89-healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/supershy-linux89-healthy
      failureThreshold: 3
      initialDelaySeconds: 65
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1
    # 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
    readinessProbe:
      # 使用httpGet的方式去做健康检查
      httpGet:
        # 指定访问的端口号
        port: 80
        # 检测指定的访问路径
        path: /index.html
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1
[root@master231 pods]# 

6) readinessProbe-tcpSocket

- readinessProbe可用性检查探针之tcpSocket
[root@master231 pods]# cat 19-readinessProbe-tcpSocket.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-readinessprobe-tcpsocket
spec:
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    command: 
     - /bin/sh
     - -c
     - touch /tmp/supershy-linux89-healthy; sleep 5; rm -f /tmp/supershy-linux89-healthy; nginx -g "daemon off;"
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/supershy-linux89-healthy
      failureThreshold: 3
      initialDelaySeconds: 65
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1
    # 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
    readinessProbe:
      # 使用tcpSocket的方式去做健康检查
      tcpSocket:
        port: 80
      # 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
      failureThreshold: 3
      # 指定多久之后进行可用性检查,在此之前,Pod始终处于未就绪状态。
      initialDelaySeconds: 10
      # 指定探针检测的频率,默认是10s,最小值为1.
      periodSeconds: 1
      # 检测服务成功次数的累加值,默认值为1次,最小值1.
      successThreshold: 1
      # 一次检测周期超时的秒数,默认值是1秒,最小值为1.
      timeoutSeconds: 1

[root@master231 pods]# 

7) startupProbe-httpGet

  • startupProbe优先级高于livenessProbe和readinessProbe
- startupProbe启动探针烧脑版
[root@master231 pods]# cat 20-startupProbe-httpGet.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-startupprobe-httpget-01
spec:
  volumes:
  - name: data
    emptyDir: {}
  initContainers:
  - name: init01
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /supershy
    command:
    - /bin/sh
    - -c
    - echo "liveness probe test page" >> /supershy/huozhe.html
  - name: init02
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /supershy
    command:
    - /bin/sh
    - -c
    - echo "readiness probe test page" >> /supershy/supershy.html
  - name: init03
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /supershy
    command:
    - /bin/sh
    - -c
    - echo "liveness probe test page" >> /supershy/start.html
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
    # 判断服务是否健康,若检查不通过,将Pod直接重启。
    livenessProbe:
      httpGet:
        port: 80
        path: /huozhe.html
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1
    # 判断服务是否就绪,若检查不通过,将Pod标记为未就绪状态。
    readinessProbe:
      httpGet:
        port: 80
        path: /supershy.html
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 3
      successThreshold: 1
      timeoutSeconds: 1
    # 启动时做检查,若检查不通过,直接杀死容器。并进行重启!
    # startupProbe探针通过后才会去执行readinessProbe和livenessProbe哟~
    startupProbe:
      httpGet:
        port: 80
        path: /start.html
      failureThreshold: 3
      # initialDelaySeconds: 65
      initialDelaySeconds: 35
      periodSeconds: 3
      successThreshold: 1
      timeoutSeconds: 1
[root@master231 pods]# 

9. lifecycle Pod的创建容器生命周期

- 容器的优雅终止postStart和preStop:
[root@master231 pods]# cat 23-containers-lifecycle.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-containers-lifecycle-04
spec:
  # 指定容器退出时,最大要等待的时间,如果不指定,默认为30s。
  # terminationGracePeriodSeconds: 5
  terminationGracePeriodSeconds: 60
  nodeName: worker232
  volumes:
  - name: data
    hostPath:
      path: /supershy-data
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /data
    # 定义容器的生命周期。
    lifecycle:
      # 容器启动之后做的事情
      postStart:
        exec:
          command: 
          - "/bin/sh"
          - "-c"
          - "echo \"postStart at $(date +%F_%T)\" >> /data/postStart.log"
      # 容器停止之前做的事情
      preStop:
        exec:
          command: 
          - "/bin/sh"
          - "-c"
          #- "echo \"preStop at $(date +%F_%T)\" >> /data/preStop.log"
          - "sleep 10; echo \"preStop at $(date +%F_%T)\" >> /data/preStop.log"
[root@master231 pods]# 
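
	可以到worker232上查看钩子写入的日志文件,验证postStart与preStop是否执行(仅供参考,preStop.log要在删除Pod之后才会出现):
[root@worker232 ~]# cat /supershy-data/postStart.log
[root@master231 pods]# kubectl delete pod supershy-linux89-containers-lifecycle-04
[root@worker232 ~]# cat /supershy-data/preStop.log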


- 烧脑版Pod的创建生命周期
[root@master231 pods]# cat 24-pods-workflow.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pods-workflow-02
spec:
  nodeName: worker233
  initContainers:
  - name: init-c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /data
    command: 
    - /bin/sh
    - -c
    - echo init-c1 `date "+%F %T"` > /data/init.log
    # - echo init-c1 `date "+%F %T"` > /data/init.log; sleep 30
  - name: init-c2
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /data
    command: 
    - /bin/sh
    - -c
    - echo init-c2 `date "+%F %T"` >> /data/init.log
  volumes:
  - name: data
    hostPath:
      path: /supershy-data
  - name: web
    emptyDir: {}
  # 在pod优雅终止时,定义延迟发送kill信号的时间,此时间可用于pod处理完未处理的请求等状况。
  # 默认单位是秒,若不设置默认值为30s。
  # terminationGracePeriodSeconds: 60
  terminationGracePeriodSeconds: 3
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /data
    - name: web
      mountPath: /usr/share/nginx/html
    imagePullPolicy: IfNotPresent
    livenessProbe:
      httpGet:
        port: 80
        path: /huozhe.html
      failureThreshold: 3
      initialDelaySeconds: 35
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 1
    readinessProbe:
      httpGet:
        port: 80
        path: /supershy.html
      failureThreshold: 3
      initialDelaySeconds: 15
      periodSeconds: 3
      successThreshold: 1
      timeoutSeconds: 1
    startupProbe:
      httpGet:
        port: 80
        path: /start.html
      failureThreshold: 3
      initialDelaySeconds: 65
      periodSeconds: 3
      successThreshold: 1
      timeoutSeconds: 1
    # 定义容器的生命周期。
    lifecycle:
      # 容器启动之后做的事情
      postStart:
        exec:
          command: 
          - "/bin/sh"
          - "-c"
          - "echo \"postStart at $(date +%F_%T)\" >> /data/postStart.log"
      # 容器停止之前做的事情
      preStop:
        exec:
         command: 
         - /bin/sh
         - -c
         - echo preStop at $(date +%F_%T) >> /data/preStop.log; sleep 20;
[root@master231 pods]# 

10. labels标签管理

对比响应式和声明式管理标签的方式:
	- 相同点:
		都可以实现标签的管理。
		
	- 不同点:
		- 响应式是基于命令行的方式创建标签,会立刻生效。当资源被重新创建时,需要再次手动创建出相应的标签。
		- 声明式是基于配置文件修改标签,需要apply应用后才能生效。当重新创建时,和配置文件的标签始终一致。

- 响应式管理pod标签
	#1.查看帮助
kubectl label --help

	#官方案例
Examples:
  # Update pod 'foo' with the label 'unhealthy' and the value 'true'
  kubectl label pods foo unhealthy=true
  
  # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value
  kubectl label --overwrite pods foo status=unhealthy
  
  # Update all pods in the namespace
  kubectl label pods --all status=unhealthy
  
  # Update a pod identified by the type and name in "pod.json"
  kubectl label -f pod.json status=unhealthy
  
  # Update pod 'foo' only if the resource is unchanged from version 1
  kubectl label pods foo status=unhealthy --resource-version=1
  
  # Update pod 'foo' by removing a label named 'bar' if it exists
  # Does not require the --overwrite flag
  kubectl label pods foo bar-


- 声明式管理pod标签
[root@master231 pods]# cat 28-pods-labels.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-labels-01
  # 给资源打标签
  labels:
    school: supershy2025
    class: linux89
spec:
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    
    
	#基于标签过滤
[root@master231 /supershy/manifests/03-secrets]# kubectl get pod -l class -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
supershy-linux89-secrets-harbor   1/1     Running   0          74m   10.100.2.38   worker233   <none>           <none>

11. securityContext Pod的安全管理(优化篇)

- Pod的安全管理

- 安全上下文securityContext运行特权容器
	1.可以尝试修改容器的内核参数
[root@master231 01-pods]# cat 29-pods-securityContext-privileged.yaml
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-iptables-01
spec:
  restartPolicy: Always
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-tools/iptables:centos7
    imagePullPolicy: IfNotPresent
    # 为指定的容器设置安全上下文
    securityContext:
      # 为当前容器设置为特权容器,可以修改内核参数
      # 但是修改的内核参数并不会影响到宿主机的内核。
      privileged: true
[root@master231 01-pods]# 


- 安全上下文securityContext禁用linux特性功能capabilities
	1.编写资源清单
[root@master231 01-pods]# cat 30-pods-securityContext-capabilities.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-capabilities
spec:
  restartPolicy: Always
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-tools/iptables:centos7
    imagePullPolicy: IfNotPresent
    securityContext:
       # 自定义LINUX内核特性
       # 推荐阅读:
       #   https://man7.org/linux/man-pages/man7/capabilities.7.html
       #   https://docs.docker.com/compose/compose-file/compose-file-v3/#cap_add-cap_drop
       capabilities:
         # 添加所有的Linux内核功能
         add:
         - ALL
         # 移除指定Linux内核特性
         drop:
         # 代表禁用网络管理相关的配置
         - NET_ADMIN
         # 代表禁用修改文件属主和属组的能力,表示你无法使用chown命令哟
         # 比如执行"useradd supershy"时会创建"/home/supershy"目录,并执行chown修改目录属主为"supershy"用户,
         # 此时你会发现可以创建用户成功,但无法修改"/home/supershy"目录的属主和属组。
         - CHOWN
         # 代表禁用chroot命令
         - SYS_CHROOT
[root@master231 01-pods]#


- 安全上下文securityContext设置根文件系统为只读不允许修改
	1.编写资源清单
[root@master231 01-pods]# cat 31-pods-securityContext-readOnlyRootFilesystem.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-readonlyrootfilesystem
spec:
  restartPolicy: Always
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-tools/iptables:centos7
    imagePullPolicy: IfNotPresent
    securityContext:
       # 将文件系统设置为只读
       readOnlyRootFilesystem: true
[root@master231 01-pods]# 
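
	2.验证效果(示例命令,若容器处于Running状态,预期会出现只读文件系统相关的报错)
kubectl exec supershy-linux89-readonlyrootfilesystem -- touch /supershy.txt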


- 安全上下文securityContext设置运行用户
	1.编写资源清单
[root@master231 01-pods]# cat 32-pods-securityContext-runAsUser.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-runasuser
spec:
  restartPolicy: Always
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-tools/iptables:centos7
    imagePullPolicy: IfNotPresent
    securityContext:
      # 如果容器的进程以root身份运行,则禁止容器启动!一般情况下需要和"runAsUser"参数搭配使用!
      runAsNonRoot: true
      # 指定运行程序的用户UID,注意,该用户的UID必须存在!
      runAsUser: 2024
      # 指定运行程序的组GID,注意,该组的GID必须存在!
      runAsGroup: 2024
[root@master231 01-pods]# 


- 安全上下文securityContext设置系统调用过滤
	1.如果测试的是alpine相关镜像,可以使用如下的配置
[root@worker232 ~]# mkdir /var/lib/kubelet/seccomp
[root@worker232 ~]# 
[root@worker232 ~]# cat /var/lib/kubelet/seccomp/chmod.json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
     {
       "names": [
         "chmod"
       ],
       "action": "SCMP_ACT_ERRNO"
     }
   ]
}
[root@worker232 ~]# 



相关字段说明:
	defaultAction:
		配置默认动作,默认动作为"SCMP_ACT_ALLOW",表示允许执行。
		
	syscalls:
		表示配置系统调用。
		names:
			指定系统调用的名称。
		action:
			对应系统调用的动作为"SCMP_ACT_ERRNO",此处表示拒绝。



	2.如果测试的是centos相关镜像,可以使用如下的配置
[root@worker232 ~]# cat /var/lib/kubelet/seccomp/chmod.json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
     {
       "names": [
         "chmod",
          "mkdir"
       ],
       "action": "SCMP_ACT_ERRNO"
     }
   ]
}

	3.编写资源清单 
[root@master231 01-pods]# cat 33-pods-securityContext-seccompProfile.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-seccompprofile-01
spec:
  nodeName: "worker233"
  restartPolicy: Always
  containers:
  - name: c1
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    # image: harbor.supershy.com/supershy-tools/iptables:centos7
    imagePullPolicy: IfNotPresent
    securityContext:
     # Seccomp概述:
     #     是一种在Linux内核中实现的安全机制,用于过滤系统调用。它允许进程只能调用白名单内的系统调用,从而减少攻击面,提高系统的安全性。
     #     Seccomp最初是由Google开发并用于Chrome浏览器的安全机制,在Linux 2.6.12之后作为一个正式的补丁被合并进了内核中。
     #     Seccomp所做的工作就是将进程的系统调用过滤掉,只允许白名单内的调用通过。
     #     一旦进程试图执行不在白名单内的系统调用,Seccomp就会中断进程,并通过SIGSYS信号通知进程。
     #     在默认情况下,进程会终止。但是,这种行为可以通过安装SIGSYS的信号处理程序来定制。
     #     Seccomp是一种重要的系统保护机制,可以减少应用程序的攻击面和提高系统的安全性。
     #     在Linux安全中,Seccomp的使用场景十分广泛,可以应用于容器技术、Web服务器和Docker安全等多个领域。
     # 参考链接:
     #    https://www.python100.com/html/97662.html
     #
     # SeccompProfile定义pod/容器的seccomp配置文件设置。有效值如下:
     #    Localhost:
     #      应该使用在节点上的文件中定义的配置文件。
     #
     #    RuntimeDefault:
     #      应使用容器运行时默认配置文件。
     #
     #    Unconfined:
     #       不应应用任何配置文件。
     seccompProfile:
        # 指定Seccomp的类型
        type: Localhost
        # 指定本地要加载的文件,默认去加载的目录"/var/lib/kubelet/seccomp"
        # 温馨提示:
        #   对于系统调用,可能有些Linux发行版支持的调用命令不一致,比如alpine是支持chmod,而centos7不支持。
        #   对于centos测试,可以加一个mkdir指令,测试centos会很有效,但是alpine启动时应该调用了mkdir,所以导致无法启动
        localhostProfile: "chmod.json"
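
	4.验证效果(示例命令,预期出现Operation not permitted之类的报错,具体以镜像实际使用的系统调用为准)
kubectl exec supershy-linux89-seccompprofile-01 -- chmod 755 /tmp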

12. annotations资源注解(了解)

早期用于向POD传递配置参数的一种有效方式。
	
	你可以使用 Kubernetes 注解为对象附加任意的非标识的元数据。 客户端程序(例如工具和库)能够获取这些元数据信息。
	
	你可以使用标签或注解将元数据附加到 Kubernetes 对象。 标签可以用来选择对象和查找满足某些条件的对象集合。 相反,注解不用于标识和选择对象。 注解中的元数据,可以很小,也可以很大,可以是结构化的,也可以是非结构化的,能够包含标签不允许的字符。 可以在同一对象的元数据中同时使用标签和注解。
	
	
	推荐阅读:
		https://kubernetes.io/zh-cn/docs/concepts/overview/working-with-objects/annotations/

	值得注意的是,kubectl apply会自行维护一个内置的注解key,名为"kubectl.kubernetes.io/last-applied-configuration",其中保存了用户上一次apply时对该资源的配置。
	
	
参考案例:
[root@master231 01-pods]# cat 34-pods-annotations.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-annotations
  labels:
    school: supershy2025
    class: linux89
  annotations:
    app: tools
    author: jasonyin
spec:
  restartPolicy: Always
  containers:
  - name: c1
    image: jasonyin2020/supershy-linux-tools:v0.1
    imagePullPolicy: IfNotPresent
    command:
    - tail
    - -f
    - /etc/hosts
[root@master231 01-pods]# 
[root@master231 01-pods]# kubectl get pods supershy-linux89-annotations -o yaml

13. scheduler-Taints

Q1: 影响Pod调度的有哪些因素?
	- nodeName
	- hostNetwork
	- hostPort
	- resources
	  如果resources中requests申请的资源超过了节点的CPU核心数或内存总量,也会调度失败
	- Taints
	- tolerations  
	  tolerations容忍污点为什么也会影响pod调度?
	  如果该节点设置了多个taint,调度该节点时未配置全部容忍也会影响pod调度
	- nodeSelector
	- Affinity
	  - nodeAffinity
	  - podAffinity
	  - podAntiAffinity
	- 各类控制器(ds,rc,rs,deploy,jobs,cj)

1. 设置污点

- NoSchedule
  不接受任何新pod加入,如果该节点本身有pod也不驱逐
- PreferNoSchedule
  有新的pod,先调度到其他节点,如果其他节点不可被调度,则本节点可以加入这个pod
  说白了,就是把该节点的调度优先级降低了
- NoExecute
  不接受任何新pod,也会把该节点上现有的、未容忍该污点的pod驱逐
- 一个Pod配置污点容忍时,如果想要调度到这个节点,则必须容忍该节点的所有污点。
  
	#设置taint方式
	1.查看taints
[root@master231 /supershy/manifests/scheduler]# kubectl describe nodes |grep Taints -A 2
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             apps=nginx:PreferNoSchedule
Unschedulable:      false
Lease:
--
Taints:             apps=nginx:NoExecute
                    apps=nginx:PreferNoSchedule
Unschedulable:      false
[root@master231 /supershy/manifests/scheduler]# 


	2.增加taint
[root@master231 /supershy/manifests/scheduler]# kubectl taint nodes worker233 apps=nginx:PreferNoSchedule
  node/worker233 tainted
	
	
	3.删除taint
[root@master231 /supershy/manifests/scheduler]# kubectl taint nodes worker233 apps=nginx:PreferNoSchedule-
  node/worker233 untainted

[root@master231 /supershy/manifests/scheduler]# kubectl taint nodes worker233 apps-
  node/worker233 untainted

2. 污点容忍

- 污点容忍实战案例
	1.编写资源清单 
[root@master231 13-scheduler]# cat 06-scheduler-tolerations.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-tolerations
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: web
  template:
    metadata:
      labels:
        apps: web
    spec:
      # 配置Pod的污点容忍
      tolerations:
      - key: school
        value: supershy
        effect: NoExecute
        # 用于映射key和value之间的关系,其值为"Exists"和"Equal"。
        operator: Equal
      - key: www.supershy.com/class
        # 如果为Exists,表示存在key,value未指定则匹配所有value,如果effect未指定匹配所有污点类型
        operator: Exists
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80
[root@master231 13-scheduler]# 
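
	2.为了观察容忍效果,可以先给节点打上与上面容忍项对应的污点(示例命令,节点名与键值以实际环境为准)
kubectl taint nodes worker233 school=supershy:NoExecute
kubectl taint nodes worker232 www.supershy.com/class=linux89:NoSchedule
# 验证完成后删除污点
kubectl taint nodes worker233 school-
kubectl taint nodes worker232 www.supershy.com/class-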

14. scheduler-nodeSelector

- 基于节点标签进行调度
[root@master231 13-scheduler]# cat 08-scheduler-nodeSelector.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nodeselector
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: web
  template:
    metadata:
      labels:
        apps: web
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      # 基于节点的标签进行调度,只有包含"school=supershy"标签的节点可以进行调度Pod。
      nodeSelector:
         school: supershy
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80
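
	nodeSelector依赖节点标签,可以先给节点打上对应标签再观察调度结果(示例命令,若节点已存在school标签需加--overwrite覆盖):
kubectl label nodes worker233 school=supershy --overwrite
kubectl get pods -o wide
# 删除节点标签
kubectl label nodes worker233 school-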

15. scheduler-nodeAffinity

	#基于不同node标签,key相同,value不同
[root@master231 /supershy/manifests/scheduler]# kubectl get nodes --show-labels |grep school
master231   Ready    control-plane,master   17d   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master231,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=,school=supershy
worker233   Ready    <none>                 17d   v1.23.17   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker233,kubernetes.io/os=linux,school=laonaihai
[root@master231 /supershy/manifests/scheduler]# 

	#定义node亲和度,匹配标签
[root@master231 13-scheduler]# cat 09-scheduler-affinity-nodeAffinity.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nodeaffinity
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: web
  template:
    metadata:
      labels:
        apps: web
    spec:
      # 定义亲和性,所谓的亲和性就是让Pod更加倾向于往哪些节点或拓扑域进行调度的方式。
      affinity:
        # 定义节点亲和性,表示往哪些节点进行调度
        nodeAffinity: 
          # 定义硬限制,表示必须满足的条件
          requiredDuringSchedulingIgnoredDuringExecution:
            # 定义节点的匹配方式
            nodeSelectorTerms:
              # 基于节点标签进行匹配
            - matchExpressions:
              - key: school
                values:
                - supershy
                - laonanhai
                # 指定key和value之间的关系,有效值: In, NotIn, Exists, DoesNotExist, Gt, Lt
                #   In:
                #      school的key值必须在values之内。
                #   NotIn:
                #      school的key值必须不在values之内。
                #   Exists:
                #      只要存在school的key即可。values可以不写。
                #   DoesNotExist:
                #      只要不存在school的key即可,values可以不写。
                #   Gt:
                #      大于,则要求values必须为单个数字。
                #   Lt:
                #      小于,则要求values必须是单个数字。
                operator: In
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80
[root@master231 13-scheduler]# 

16. scheduler-PodAffinity

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-podaffinity-matchexpressions
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: web
  template:
    metadata:
      labels:
        apps: web
    spec:
      # 定义亲和性,所谓的亲和性就是让Pod更加倾向于往哪些节点或拓扑域进行调度的方式。
      affinity:
        # 定义Pod亲和性,当第一个Pod完成调度后,后续的所有Pod都将被调度到和第一个Pod相同的拓扑域。
        podAffinity: 
          # 定义硬限制,表示必须满足的条件
          requiredDuringSchedulingIgnoredDuringExecution:
            # 指定拓扑域,可以理解为机房
          - topologyKey: dc
            # 定义标签选择器,用于判定从第二个pod开始,调度时参考哪些已存在pod的标签
            labelSelector:
              matchExpressions:
              - key: apps
                values:
                - web
                - nginx
                operator: NotIn
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80

17. scheduler-PodAntiAffinity

[root@master231 13-scheduler]# cat 12-scheduler-affinity-podAntiAffinity-matchLabels.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-podantiaffinity-matchlabels
spec:
  replicas: 5
  selector:
    matchLabels:
      apps: web
  template:
    metadata:
      labels:
        apps: web
    spec:
      # 定义亲和性,所谓的亲和性就是让Pod更加倾向于往哪些节点或拓扑域进行调度的方式。
      affinity:
        # 定义Pod亲和性,当第一个Pod完成调度后,后续的所有Pod都将无法调度到该拓扑域(机房)
        podAntiAffinity: 
          # 定义硬限制,表示必须满足的条件
          requiredDuringSchedulingIgnoredDuringExecution:
            # 指定拓扑域,可以理解为机房
          - topologyKey: dc
            # 定义标签选择器,用于判定从第二个pod开始,调度时参考哪些已存在pod的标签
            labelSelector:
              matchLabels:
                apps: web
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80
[root@master231 13-scheduler]# 

2. configMap资源清单

1. 基于存储卷引用configMap资源

	1.基于存储卷引用configMap资源
[root@master231 02-configmaps]# cat 02-pods-configmaps-volumes.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-volumes-configmaps-01
spec:
  nodeName: worker232
  volumes:
  - name: data
    # 使用configMap存储卷
    configMap:
      # 指定configMap的名称
      name: game-demo
      # 引用的configMap指定的key,如果不指定items字段,则默认引用所有的key。
      items:
        # 指定configMap资源对应的key
      - key: school
        # 将来挂载到容器时的文件名称
        path: school.txt
      - key: class
        path: class.log
      - key: my.cnf
        path: my.cnf 
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /supershy-data
[root@master231 02-configmaps]# 
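
	2.验证挂载结果(示例命令,前提是名为game-demo的configMap已创建,清单参考下一小节)
kubectl exec supershy-linux89-volumes-configmaps-01 -- ls /supershy-data
kubectl exec supershy-linux89-volumes-configmaps-01 -- cat /supershy-data/school.txt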

2. 基于环境变量引用configMap资源

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # 类属性键;每一个键都映射到一个简单的值
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  school: "supershy"
  class: "linux89"

  # 类文件键
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5    
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true  
  my.cnf: |
    [mysqld]
    datadir=/supershy/data/mysql89
    basedir=/supershy/softwares/mysql80
    socket=/tmp/mysql80.sock
    port=3306

    [client]
    username=admin
    password=supershy


	2.基于环境变量引用configMap资源
[root@master231 02-configmaps]# cat 03-pods-configmaps-env.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-env-configmaps-02
spec:
  nodeName: worker232
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    env:
    - name: SCHOOL
      # 值从哪里引用
      valueFrom:
        # 值从一个configMap资源引用
        configMapKeyRef:
          # 指定引用哪个configMap资源
          name: game-demo
          # 指定引用哪个key
          key: school
    - name: CLASS
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: class
    - name: supershy-mysql.conf
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: my.cnf
[root@master231 02-configmaps]# 

3. 实战案例: 比如修改nginx的默认端口为81端口

[root@master231 02-configmaps]# cat 04-nginx-configmaps.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  default.conf: |
    server {
        listen       81;
        listen  [::]:81;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }

---

apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-nginx-configmaps-01
spec:
  nodeName: worker232
  volumes:
  - name: data
    configMap:
      name: nginx 
      items:
      - key: default.conf
        path: default.conf
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /etc/nginx/conf.d
[root@master231 02-configmaps]# 
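
	验证思路:获取Pod的IP后访问81端口,确认配置已生效(示例命令,<PodIP>为占位符,以实际IP为准):
kubectl get pods supershy-linux89-nginx-configmaps-01 -o wide
curl <PodIP>:81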

4.subPath实战案例挂载点为文件

- subPath实战案例挂载点为文件
[root@master231 02-configmaps]# cat 05-nginx-configmaps-subPath.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx
data:
  default.conf: |
    server {
        listen       81;
        listen  [::]:81;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }

  nginx.conf: |
    user  nginx;
    worker_processes  auto;
    error_log  /var/log/nginx/error.log notice;
    pid        /var/run/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        keepalive_timeout  65;
        include /etc/nginx/conf.d/*.conf;
    }

---

apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-nginx-configmaps-03
spec:
  nodeName: worker232
  volumes:
  - name: data
    configMap:
      name: nginx 
      items:
      - key: default.conf
        path: default.conf
  - name: data02
    configMap:
      name: nginx
      items:
      - key: nginx.conf
        path: nginx.conf
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /etc/nginx/conf.d
    - name: data02
      mountPath: /etc/nginx/nginx.conf
      # 当subPath和items的path相同时,此时mountPath的挂载点为文件。
      subPath: nginx.conf
[root@master231 02-configmaps]# 

3. secret资源清单

1. stringData

	- stringData字段交给k8s自动进行base64编码(是编码并非加密)
[root@master231 /supershy/manifests/secrets]# cat 02-secrets-stringData-pod.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: user-info
type: Opaque
stringData: 
  username: "gaojiaxing"
  password: "123456"


---

apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-secrets-pods
spec:
  volumes:
  - name: data
    secret:
      secretName: user-info
      items: 
      - key: username
        path: username.txt
      - key: password
        path: password.txt
      
  containers:
  - name: c1
    image: harbor.supershy.com/web/nginx:1.20.1-alpine
    volumeMounts:
    - name: data
      mountPath: /supershy-data
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: user-info
          key: username
    - name: password
      valueFrom:
        secretKeyRef:
          name: user-info
          key: password
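
	验证思路(示例命令,前提是Pod已正常运行):
# data字段保存的是base64编码后的内容,并非真正的加密
kubectl get secrets user-info -o yaml
# 验证挂载的文件和注入的环境变量
kubectl exec supershy-linux89-secrets-pods -- cat /supershy-data/username.txt
kubectl exec supershy-linux89-secrets-pods -- env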

2. 响应式harbor认证

	- 响应式配置harbor仓库密码
[root@master231 /supershy/manifests/secrets]# kubectl create secret docker-registry supershy-harbor --docker-username=admin --docker-password=1 --docker-email=admin@123.com --docker-server=harbor.supershy.com
  secret/supershy-harbor created


[root@master231 /supershy/manifests/secrets]# cat 03-secrets-harbor.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-secrets-harbor
spec:
  #使用配置的harbor密码的secrets文件
  imagePullSecrets:
  - name: supershy-harbor
  containers:
  - name: c1
    image: harbor.supershy.com/web/nginx:1.20.1-alpine
    imagePullPolicy: Always
    

[root@master231 /supershy/manifests/secrets]# kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
supershy-linux89-secrets-harbor   1/1     Running   0          16s   10.100.2.30   worker233   <none>           <none>

3. 声明式harbor认证

	0.响应式先创建出来拿到加密后的字符串
[root@master231 /supershy/manifests/secrets]# echo eyJhdXRocyI6eyJoYXJib3Iub2xkYm95ZWR1LmNvbSI6eyJ1c2VybmFtZSI6ImppYXhpbmciLCJwYXNzd29yZCI6IkxpbnV4ODlAMjAyNCIsImVtYWlsIjoiamlheGluZ0AxMjMuY29tIiwiYXV0aCI6ImFtbGhlR2x1WnpwTWFXNTFlRGc1UURJd01qUT0ifX19 |base64 -d

{"auths":{"harbor.supershy.com":{"username":"jiaxing","password":"Linux89@2024","email":"jiaxing@123.com","auth":"amlheGluZzpMaW51eDg5QDIwMjQ="}}}

[root@master231 /supershy/manifests/secrets]# echo amlheGluZzpMaW51eDg5QDIwMjQ= |base64 -d
jiaxing:Linux89@2024

	1.原数据
{"auths":{"harbor.supershy.com":{"username":"jiaxing","password":"Linux89@2024","email":"jiaxing@123.com","auth":"jiaxing:Linux89@2024"}}}

	2.对auth进行base64加密
{"auths":{"harbor.supershy.com":{"username":"jiaxing","password":"Linux89@2024","email":"jiaxing@123.com","auth":"amlheGluZzpMaW51eDg5QDIwMjQ="}}}

	3.整体加密
[root@master231 /supershy/manifests/secrets]# echo -n '{"auths":{"harbor.supershy.com":{"username":"jiaxing","password":"Linux89@2024","email":"jiaxing@123.com","auth":"amlheGluZzpMaW51eDg5QDIwMjQ="}}}' |base64
eyJhdXRocyI6eyJoYXJib3Iub2xkYm95ZWR1LmNvbSI6eyJ1c2VybmFtZSI6ImppYXhpbmciLCJwYXNzd29yZCI6IkxpbnV4ODlAMjAyNCIsImVtYWlsIjoiamlheGluZ0AxMjMuY29tIiwiYXV0aCI6ImFtbGhlR2x1WnpwTWFXNTFlRGc1UURJd01qUT0ifX19


	4.编写资源清单
[root@master231 /supershy/manifests/secrets]# cat 04-pods-harbor-secrets-jiaxing.yaml 
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJoYXJib3Iub2xkYm95ZWR1LmNvbSI6eyJ1c2VybmFtZSI6ImppYXhpbmciLCJwYXNzd29yZCI6IkxpbnV4ODlAMjAyNCIsImVtYWlsIjoiamlheGluZ0AxMjMuY29tIiwiYXV0aCI6ImFtbGhlR2x1WnpwTWFXNTFlRGc1UURJd01qUT0ifX19
kind: Secret
metadata:
  name: harbor-jiaxing
type: kubernetes.io/dockerconfigjson

---

apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-secrets-harbor-jiaxing
spec:
  imagePullSecrets:
  - name: harbor-jiaxing
  containers:
  - name: c1
    image: harbor.supershy.com/web/nginx:1.20.1-alpine
    imagePullPolicy: Always
    
    5.创建,查看
[root@master231 /supershy/manifests/secrets]# kubectl apply -f 04-pods-harbor-secrets-jiaxing.yaml 
secret/harbor-jiaxing created
pod/supershy-linux89-secrets-harbor-jiaxing created
[root@master231 /supershy/manifests/secrets]# kubectl get po,secrets -o wide
NAME                                           READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
pod/supershy-linux89-secrets-harbor-jiaxing   1/1     Running   0          17s   10.100.1.75   worker232   <none>           <none>

NAME                         TYPE                                  DATA   AGE
secret/default-token-lsww5   kubernetes.io/service-account-token   3      9d
secret/harbor-jiaxing        kubernetes.io/dockerconfigjson        1      17s
secret/supershy-harbor      kubernetes.io/dockerconfigjson        1      89m
secret/student-info          Opaque                                2      146m

4. 声明式harbor认证脚本

[root@master231 /supershy/lianxi]# cat harbor_secret.sh 
#!/bin/bash

#{"auths":{"harbor.supershy.com":{"username":"jiaxing","password":"Linux89@2024","email":"jiaxing@123.com","auth":"jiaxing:Linux89@2024"}}}

read -p "harbor_url:" harbor_url
read -p "username:" username
echo -n
read -p "password:" password
echo -n
read -p "email:" email

echo   \{'"auths"':\{'"$harbor_url"':\{'"username"':\"${username}\",'"password"':\"${password}\",'"email"':\"$email\",'"auth"':\"${username}:${password}\"\}\}\}
json=`echo  -n \{'"auths"':\{\"$harbor_url\":\{'"username"':\"${username}\",'"password"':\"${password}\",'"email"':\"$email\",'"auth"':\"$(echo -n ${username}':'${password} |base64)\"\}\}\} |base64 -w 0`

cat > ./harbor-secret.yaml <<EOF
apiVersion: v1
data:
  .dockerconfigjson: $json 
kind: Secret
metadata:
  name: harbor-jiaxing
type: kubernetes.io/dockerconfigjson
EOF

kubectl apply -f harbor-secret.yaml

4. namespace命名空间

  • 温馨提示:
    • 删除名称空间时,该名称空间的下的所有资源都会被删除。因此生产环境中,删除名称空间一定要谨慎
    • 对于名称空间的理解,可以理解为根下的一个目录,该目录的作用就是实现资源隔离分类,删除该目录,则该目录下的所有文件都会被删除。

1. 增,查,删(没有修改方式)

什么是名称空间:
	名称空间是用来隔离K8S集群资源的。
	
	
- 响应式管理名称空间
	1.创建名称空间
[root@master231 04-namespaces]# kubectl create namespace supershy-linux89
namespace/supershy-linux89 created
[root@master231 04-namespaces]# 

	2.查看名称空间
[root@master231 04-namespaces]# kubectl get namespaces 
NAME                STATUS   AGE
default             Active   2d20h
kube-flannel        Active   2d19h
kube-node-lease     Active   2d20h
kube-public         Active   2d20h
kube-system         Active   2d20h
supershy-linux89   Active   4s
[root@master231 04-namespaces]# 

	3.删除名称空间
[root@master231 04-namespaces]# kubectl delete namespaces supershy-linux89 
namespace "supershy-linux89" deleted
[root@master231 04-namespaces]# 



- 声明式管理名称空间
[root@master231 04-namespaces]# cat 01-namespace.yaml 
apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: supershy
    school: supershy
    class: linux89
  name: supershy
[root@master231 04-namespaces]# 

	

- 各组资源使用名称空间
	1.创建资源清单
[root@master231 04-namespaces]# cat 02-pods-habor-secrets-jasonyin.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: supershy-linux89-harbor-secrets-03
  # 指定该Pod属于supershy这个名称空间,若不指定,则默认为default名称空间哟~
  namespace: supershy
spec:
  nodeName: worker232
  imagePullSecrets:
  - name: supershy-harbor-jasonyin
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    imagePullPolicy: Always

--- 

apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJoYXJib3Iub2xkYm95ZWR1LmNvbSI6eyJ1c2VybmFtZSI6Imphc29ueWluIiwicGFzc3dvcmQiOiJMaW51eDg5QDIwMjQiLCJqYXNvbnlpbiI6ImFkbWluQG9sZGJveWVkdS5jb20iLCJhdXRoIjoiYW1GemIyNTVhVzQ2VEdsdWRYZzRPVUF5TURJMCJ9fX0=
kind: Secret
metadata:
  name: supershy-harbor-jasonyin
  namespace: supershy
type: kubernetes.io/dockerconfigjson
[root@master231 04-namespaces]# 


	2.查看指定名称空间下的资源
[root@master231 04-namespaces]# kubectl get pods,secrets -n supershy 
NAME                                      READY   STATUS    RESTARTS   AGE
pod/supershy-linux89-harbor-secrets-03   1/1     Running   0          43s

NAME                               TYPE                                  DATA   AGE
secret/default-token-cmr68         kubernetes.io/service-account-token   3      43s
secret/supershy-harbor-jasonyin   kubernetes.io/dockerconfigjson        1      43s
[root@master231 04-namespaces]# 
[root@master231 04-namespaces]# 
[root@master231 04-namespaces]# kubectl get pods --namespace supershy 
NAME                                  READY   STATUS    RESTARTS   AGE
supershy-linux89-harbor-secrets-03   1/1     Running   0          5m15s
[root@master231 04-namespaces]# 


	3.彩蛋-查看所有名称空间下的资源
[root@master231 04-namespaces]# kubectl get pods,configmaps -A

静态Pod

- 静态Pod
	所谓的静态pod,本质上是kubelet不需要经过apiServer下发创建Pod指令,直接就可以创建Pod的一种方式。
	
[root@master231 ~]# cat  /var/lib/kubelet/config.yaml 
...
staticPodPath: /etc/kubernetes/manifests

	
所谓的静态pod就是kubelet自己监视的一个目录,如果这个目录有Pod资源清单,就直接会在当前节点上创建该Pod。也就是说不基于APIServer就可以直接创建Pod。

静态Pod仅对Pod类型的资源有效,其他资源无视。


静态Pod创建的资源,后缀都会加一个当前节点的名称。


测试案例:
	将po,cm资源移动到"/etc/kubernetes/manifests"观察是否会自动创建静态Pod。
	
[root@master231 manifests]# kubectl get pods -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
...
supershy-linux89-labels-01-worker232   1/1     Running   0          8m50s   10.100.1.87   worker232   <none>           <none>
[root@master231 manifests]# 

5. replicationcontrollers(rc)

1. rc资源清单创建

	- rc控制器
作用:
	控制指定Pod数量的副本始终存活。
	

参考案例:
[root@master231 replicationcontrollers]# cat 01-rc-nginx.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-nginx
spec:
  # 控制有多少个Pod副本始终存活
  replicas: 3
  # 选择器,基于标签关联Pod
  selector:
    app: nginx
  # 控制器如何创建pod的模板
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
        school: supershy
        class: linux89
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80
[root@master231 replicationcontrollers]# 
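
	rc的副本数可以响应式调整,便于观察"始终保持指定数量Pod存活"的效果(示例命令,<>内为占位符):
kubectl scale rc rc-nginx --replicas=5
kubectl get rc,pods -o wide
# 手动删除一个Pod,rc会立刻补齐副本数
kubectl delete pod <rc-nginx的任意一个Pod名称>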

2. rc升级回滚策略

1) 蓝绿部署

- 蓝绿部署
	独立部署两套环境,使用svc的标签选择器切换两套环境。
	
	优点:
		两套独立的环境互不影响,回滚很方便。
		
	缺点:
		始终只有一套资源对外提供服务,另一套资源始终空转。说白了就是浪费一半资源。适合有钱的公司玩。
		

实战案例:
[root@master231 04-blue-green]# cat 01-rc-blue.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-nginx-blue
spec:
  replicas: 3
  selector:
    app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-update/apps:v1
        ports:
        - containerPort: 80
[root@master231 04-blue-green]# 
[root@master231 04-blue-green]# cat 02-rc-green.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-nginx-green
spec:
  replicas: 3
  selector:
    app: green
  template:
    metadata:
      labels:
        app: green
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-update/apps:v2
        ports:
        - containerPort: 80
[root@master231 04-blue-green]# 
[root@master231 04-blue-green]# cat 03-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: supershy-svc-update
spec:
  type: ClusterIP
  clusterIP: 10.200.0.50
  selector:
    # app: blue
    app: green
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 04-blue-green]# 



[root@worker232 manifests]# while true; do curl 10.200.0.50;sleep 0.5; done
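
	切换蓝绿环境时只需修改svc的标签选择器,可以直接修改03-svc.yaml后重新apply,也可以用patch响应式切换(示例命令):
kubectl patch svc supershy-svc-update -p '{"spec":{"selector":{"app":"blue"}}}'
# 需要回滚时再切回绿色环境
kubectl patch svc supershy-svc-update -p '{"spec":{"selector":{"app":"green"}}}'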

2) 金丝雀发布 / 灰度发布

	对部分用户进行升级测试,并不对全量的用户进行升级。灰度发布过程中,会存在新版本和老版本共存的现象。
	
	
	其实可以理解为滚动升级,只不过灰度发布更加强调用户自定义区域的更新策略。


[root@master231 05-huidu]# cat 01-rc-old.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-nginx-old
spec:
  replicas: 0
  selector:
    app: huidu
  template:
    metadata:
      labels:
        app: huidu
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-update/apps:v1
        ports:
        - containerPort: 80
[root@master231 05-huidu]# 
[root@master231 05-huidu]# 
[root@master231 05-huidu]# cat 02-rc-new.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-nginx-new
spec:
  replicas: 3
  selector:
    app: huidu
  template:
    metadata:
      labels:
        app: huidu
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-update/apps:v2
        ports:
        - containerPort: 80
[root@master231 05-huidu]# 
[root@master231 05-huidu]# 
[root@master231 05-huidu]# cat 03-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: supershy-svc-huidu
spec:
  type: ClusterIP
  clusterIP: 10.200.0.60
  selector:
    app: huidu
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 05-huidu]# 
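
	灰度发布的过程就是调整新旧两个rc的副本数,控制新版本承接的流量比例(示例命令,副本数按实际需求调整):
# 先上少量新版本,与旧版本共存
kubectl scale rc rc-nginx-new --replicas=1
kubectl scale rc rc-nginx-old --replicas=2
# 验证无误后逐步扩大新版本、缩减旧版本,直至旧版本副本数为0
kubectl scale rc rc-nginx-new --replicas=3
kubectl scale rc rc-nginx-old --replicas=0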



对比灰度发布和蓝绿部署的区别:
	蓝绿部署:
		需要独立部署两套环境。实际投入使用时只有一套。有一套环境空跑。没有新旧版本共存的现象。
		
	灰度发布:
		最终只有一套环境对外提供服务,在升级过程中,有新旧版本共存的现象。

6. replicasets(rs)

- replicasets两种标签匹配方式:
		rs控制Pod的数量始终存活。
		rs是rc资源的升级版本,功能比rc资源更加强大,且实现更加轻量级。
		支持标签匹配和标签表达式,说白了,就是一个key可以匹配多个值。

[root@master231 07-replicasets]# cat 01-rs-nginx-matchLabels.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-nginx-matchlabels
spec:
  # 控制有多少个Pod副本始终存活
  replicas: 3
  # 定义选择器
  selector:
    # 基于标签关联Pod
    matchLabels:
      class: jiaoshi03
  # 控制器如何创建pod的模板
  template:
    metadata:
      name: nginx
      labels:
        class: jiaoshi03
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80
[root@master231 07-replicasets]# 



[root@master231 07-replicasets]# cat 02-rs-nginx-matchExpressions.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-nginx-matchexpressions
spec:
  replicas: 2
  selector:
    # 基于表达式关联Pod
    matchExpressions:
    - key: class
      values: 
      - jiaoshi02
      - jiaoshi03
      # 指定key和value之间的关系,有效值为: In, NotIn, Exists and DoesNotExist
      #   In: 
      #      class的值在values的列表中。
      #   NotIn:
      #      class的值不在values的列表中。
      #   Exists:
      #      只要存在class这个key即可,values可以不写,表示任意。
      #   DoesNotExist:
      #      只要不存在class这个key即可,values可以不写,表示任意。
      operator: In
  template:
    metadata:
      name: nginx
      labels:
        class: jiaoshi03
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-update/apps:v1
        ports:
        - containerPort: 80
[root@master231 07-replicasets]# 

7. Deployment(deploy)

1. selector-关联标签

	- deploy
		用于部署应用,底层调用的是rs资源。由rs控制Pod副本数量,deploy并不会直接控制pod。
		deploy支持声明式更新。
		
		
[root@master231 08-deployments]# cat 01-deploy-nginx-matchLabels.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx-matchlabels
spec:
  # 控制有多少个Pod副本始终存活
  replicas: 3
  # 保留历史版本,默认保留10个
  revisionHistoryLimit: 5
  # 定义选择器
  selector:
    # 基于标签关联Pod
    matchLabels:
      class: jiaoshi03
  # 控制器如何创建pod的模板
  template:
    metadata:
      name: nginx
      labels:
        class: jiaoshi03
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        ports:
        - containerPort: 80
[root@master231 08-deployments]# 
[root@master231 08-deployments]# 
[root@master231 08-deployments]# cat 02-deploy-nginx-matchExpressions.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx-matchexpressions
spec:
  replicas: 3
  selector:
    # 基于表达式关联Pod
    matchExpressions:
    - key: class
      values: 
      - jiaoshi02
      - jiaoshi03
      # 指定key和value之间的关系,有效值为: In, NotIn, Exists and DoesNotExist
      #   In: 
      #      class的值在values的列表中。
      #   NotIn:
      #      class的值不在values的列表中。
      #   Exists:
      #      只要存在class这个key即可,values可以不写,表示任意。
      #   DoesNotExist:
      #      只要不存在class这个key即可,values可以不写,表示任意。
      operator: In
  template:
    metadata:
      name: nginx
      labels:
        class: jiaoshi03
    spec:
      containers:
      - name: nginx
        #image: harbor.supershy.com/supershy-update/apps:v1
        image: harbor.supershy.com/supershy-update/apps:v2
        ports:
        - containerPort: 80
[root@master231 08-deployments]# 
[root@master231 08-deployments]# cat 03-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: supershy-svc-deploy
spec:
  type: ClusterIP
  clusterIP: 10.200.0.80
  selector:
    class: jiaoshi03
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 08-deployments]# 

2. strategy-服务升级策略

- deployment的升级策略strategy(Recreate与RollingUpdate)
[root@master231 08-deployments]# cat 05-deploy-strategy-Recreate.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-strategy-recreate
spec:
  # 定义升级的策略
  strategy:
    # 指定升级策略的类型,"Recreate" or "RollingUpdate"。
    #   Recreate:
    #      删除所有旧的Pod,再批量创建新的Pod。
    #   RollingUpdate:
    #      删除一部分旧的Pod,启动一部分新的,逐渐实现滚动更新。默认值就是滚动更新。
    # type: Recreate
    type: RollingUpdate
    # 当类型为RollingUpdate时有效,用于配置滚动更新策略。
    rollingUpdate:
      # 在原有的Pod数量的基础之上,允许在升级过程中增加Pod数量的百分比或个数。
      maxSurge: 2  #或 "40%"
      # 在Pod升级过程中,相对原有Pod数量,最大允许不可用Pod数量的百分比或个数。
      maxUnavailable: 1  #或 "20%"
  replicas: 5
  selector:
    matchLabels:
      apps: web
  template:
    metadata:
      name: nginx
      labels:
        apps: web
    spec:
      containers:
      - name: nginx
        # image: harbor.supershy.com/supershy-update/apps:v1
        # image: harbor.supershy.com/supershy-update/apps:v2
        image: nginx:1.20.2
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: web


---

apiVersion: v1
kind: Service
metadata:
  name: supershy-web
spec:
  type: ClusterIP
  clusterIP: 10.200.0.150
  selector:
    apps: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: web

3.响应式更新

	- 响应式更新,修改完成后立即生效
		1.交互式的方式进行响应式更新,不仅可以修改镜像,还可以修改其他的配置参数,比如副本数等。
[root@master231 ~]# kubectl edit deployments.apps deploy-strategy-rollingupdate-percentage 

		2.非交互式响应式更新,缺陷就是只能修改镜像。
[root@master231 08-deployments]# kubectl set image deploy deploy-strategy-rollingupdate-percentage nginx=harbor.supershy.com/supershy-update/apps:v3
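
		3.更新后可以用rollout子命令观察进度、查看历史并回滚(示例命令,deploy名称以实际为准)
kubectl rollout status deployment deploy-strategy-rollingupdate-percentage
kubectl rollout history deployment deploy-strategy-rollingupdate-percentage
# 回滚到上一个版本,也可以用--to-revision回滚到指定版本
kubectl rollout undo deployment deploy-strategy-rollingupdate-percentage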

4. 部署redis案例

- 使用deploy部署redis案例
[root@master231 08-deployments]# cat 07-deploy-redis.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.0.5"
        # image: harbor.supershy.com/supershy-db/redis:6.0.5
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
[root@master231 08-deployments]#

8. Service

1. ClusterIP

	- svc服务发现
	
rc解决了pod副本数量始终存活的问题,但是无法解决Pod被重建后IP地址发生变化的问题。
数据持久化问题,我们可以考虑使用类似于nfs这样的文件系统实现数据存储。


svc有两个功能:
	- 1.流量的负载均衡
		访问svc时,会将流量转发给对应的Pod。
		
	- 2.Pod的自动发现
		当出现匹配的pod时,会自动关联pod相关信息,当pod删除时,会自动移除关联信息。
svc实际上维护了一个endpoints列表,创建svc时会自动创建同名的endpoints资源,其中记录了与标签选择器匹配的Pod地址,可以使用kubectl describe svc或kubectl get endpoints查看。

[root@master231 06-services]# cat 02-svc-clusterIP.yaml 
apiVersion: v1
kind: Service
metadata:
  name: supershy-nginx
spec:
  # 指定svc的类型,有效值为: ExternalName, ClusterIP, NodePort, and LoadBalancer
  # 默认值为"ClusterIP"
  type: ClusterIP
  # 指定svc的IP地址,如果不指定,则默认生成随机的IP地址。(10.200.0.0/16,咱们集群环境的网段)
  clusterIP: 10.200.0.100
  # 基于标签选择器关联Pod
  selector:
    app: nginx
    school: supershy
  # 端口配置
  ports:
      # 指定协议
    - protocol: TCP
      # 连接svc资源时访问端口
      port: 88
      # 后端pod(被关联的Pod)运行服务的端口
      targetPort: 80

2. NodePort

NodePort的svc对应iptables实现Pod流量代理验证
	1.NodePort案例
[root@master231 06-services]# cat 03-svc-nodePort.yaml 
apiVersion: v1
kind: Service
metadata:
  name: supershy-nginx
spec:
  # 指定svc的类型,有效值为: ExternalName, ClusterIP, NodePort, and LoadBalancer
  # 默认值为"ClusterIP"
  type: NodePort
  # 指定svc的IP地址,如果不指定,则默认生成随机的IP地址。(10.200.0.0/16,咱们集群环境的网段)
  #clusterIP: 10.200.0.100
  # 基于标签选择器关联Pod
  selector:
    class: jiaoshi03
  ports:
    - protocol: TCP
      # svc自身的端口(集群内部通过cluster-ip:88访问)
      port: 88
      # 后端Pod容器内的端口
      targetPort: 80
      # 当"type: NodePort"时,可以自定义端口。默认的端口范围: 30000-32767
      # 这个端口是提供外部设备访问集群worker节点的ip:port,并不是service的cluster-ip:port
      # 集群内部互通是通过port: 88
      nodePort: 30088
[root@master231 06-services]# 
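
	2.验证访问(示例命令,前提是svc已关联到带有class=jiaoshi03标签的Pod,nodePort以实际分配为准)
curl 10.0.0.232:30088
curl 10.0.0.233:30088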


- 通过apiServer修改svc的NodePort指定的端口范围
	1.修改apiServer的配置文件(静态Pod)
[root@master231 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
 spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=2000-50000  # 进行添加这一行即可

3. ExternalName

- svc的ExternalName的类型映射K8S集群外部域名服务
	1.编写svc的配置文件
[root@master231 06-services]# 
[root@master231 06-services]# cat  03-svc-ExternalName.yaml 
apiVersion: v1
kind: Service
metadata:
  name: svc-externalname-supershy
spec:
  # svc类型
  type: ExternalName
  # 指定外部域名,有的域名貌似做了限制,映射后可能无法正常解析访问
  externalName: www.cnblog.com
  #externalName: www.baidu.com
[root@master231 06-services]# 
[root@master231 06-services]# kubectl get svc svc-externalname-supershy 
NAME                         TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)   AGE
svc-externalname-supershy   ExternalName   <none>       www.cnblog.com   <none>    22m
[root@master231 06-services]# 


	2.验证测试
[root@master231 06-services]# kubectl run test-dns --rm -it --image=harbor.supershy.com/supershy-linux/alpine:latest -- ping www.cnblog.com -c 3
If you don't see a command prompt, try pressing enter.
64 bytes from 149.28.121.93: seq=1 ttl=127 time=234.133 ms
64 bytes from 149.28.121.93: seq=2 ttl=127 time=232.634 ms

4. LoadBalancer

- svc的LoadBalancer的类型代理云环境
- 用metallb工具来模拟公有云环境;真实的公有云环境不需要metallb,配置LoadBalancer类型的svc后会自动分配外部ip
	# 修改两处: mode: "ipvs",strictARP: true
	1.修改kube-proxy的配置文件
[root@master231 06-services]# kubectl get configmap kube-proxy -n kube-system -o yaml | sed -e "s/strictARP: false/strictARP: true/"  | sed -e 's#mode: ""#mode: "ipvs"#'  | kubectl apply -f -

	2.创建deployments资源
略,随意创建一个web服务用于测试。


	3.下载资源清单
[root@master231 metallb]#  wget https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml


	4.部署metallb
[root@master231 metallb]#  kubectl apply -f metallb-native.yaml 
[root@master231 metallb]# 
[root@master231 metallb]# kubectl get pods -n metallb-system  -o wide  # 需要多等待一会
NAME                          READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
controller-5f99fd6568-h66bd   1/1     Running   0          2m6s   10.100.2.68   worker233   <none>           <none>
speaker-824sh                 1/1     Running   0          2m6s   10.0.0.231    master231   <none>           <none>
speaker-8qkwm                 1/1     Running   0          2m6s   10.0.0.232    worker232   <none>           <none>
speaker-9xthv                 1/1     Running   0          2m6s   10.0.0.233    worker233   <none>           <none>

	5.查看metallb的状态
[root@master231 metallb]# watch kubectl get all -o wide -n metallb-system

	6.创建MetalLB地址池
[root@master231 metallb]# cat 02-metallb-ip-pool.yaml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  # 注意改为你自己为MetalLB分配的IP地址,只要不和宿主机的IP地址范围冲突即可!最好是windows能够访问的网段哟~
  - 10.0.0.188-10.0.0.200
---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  
[root@master231 metallb]# 
[root@master231 metallb]# kubectl apply -f 02-metallb-ip-pool.yaml 
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
[root@master231 metallb]# 

	7.声明式暴露服务
[root@master231 metallb]# cat 03-metallb-LoadBalancer.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  ports:
  - nodePort: 8080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-web
[root@master231 metallb]# 

	8.响应式暴露svc
[root@master231 05-metallb-LoadBalancer]# kubectl expose deployment deploy-nginx-matchexpressions --name=nginx-svc-lb --port=99 --target-port=80 --protocol=TCP --type=LoadBalancer --selector=class=jiaoshi03

参考链接:
	https://www.cnblogs.com/supershy/p/17811466.html

附加组件-ipvs-修改kube-proxy工作模式

	1.响应式修改kube-proxy 
[root@master231 06-services]# kubectl  -n kube-system edit cm kube-proxy  
...
data:
  config.conf: |-
	...
    mode: "ipvs"

  ... (相当于调用vi命令进行编辑,输入wq保存即可。)
configmap/kube-proxy edited
[root@master231 06-services]# 
[root@master231 06-services]# kubectl  -n kube-system get cm kube-proxy  -o yaml | grep mode
    mode: "ipvs"
[root@master231 06-services]# 


	2.所有节点加载ipvs的内核
		2.1 所有worker节点安装ipvs相关组件
yum -y install conntrack-tools ipvsadm.x86_64 


		2.2 所有节点编写加载ipvs的配置文件
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

		2.3 加载ipvs相关模块并查看
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


	3. 删除旧的kube-proxy的pod后会自动创建新的pod
[root@master231 06-services]# kubectl get pods -n kube-system -l k8s-app=kube-proxy | awk 'NR>1{print $1}' |  xargs kubectl -n kube-system delete pods 
pod "kube-proxy-gv4pc" deleted
pod "kube-proxy-jjmxp" deleted
pod "kube-proxy-qnb6h" deleted
[root@master231 06-services]# 
[root@master231 06-services]# kubectl get pods -n kube-system -l k8s-app=kube-proxy 
NAME               READY   STATUS    RESTARTS   AGE
kube-proxy-5g8mh   1/1     Running   0          5s
kube-proxy-cm55h   1/1     Running   0          4s
kube-proxy-dcht8   1/1     Running   0          5s
[root@master231 06-services]#


	4. 验证ipvs代理Pod流量
[root@master231 06-services]# ipvsadm -ln | grep 10.200.254.40 -A 3
TCP  10.200.254.40:88 rr
  -> 10.100.1.143:80              Masq    1      0          0         
  -> 10.100.1.144:80              Masq    1      0          0         
  -> 10.100.2.63:80               Masq    1      0          0         
[root@master231 06-services]# 

[root@master231 06-services]# kubectl describe svc supershy-nginx 
Name:                     supershy-nginx
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 class=jiaoshi03
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.200.254.40
IPs:                      10.200.254.40
Port:                     <unset>  88/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  2488/TCP
Endpoints:                10.100.1.143:80,10.100.1.144:80,10.100.2.63:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@master231 06-services]# 



附加组件-coreDNS

- coreDNS组件
	1 coreDNS概述
coreDNS的作用就是将svc的名称解析为ClusterIP。

早期使用的skyDNS组件,需要单独部署,在k8s 1.9版本中,我们就可以直接使用kubeadm方式安装CoreDNS组件。

从k8s 1.12开始,CoreDNS就成为kubernetes默认的DNS服务器,但是kubeadm支持coreDNS的时间会更早。


推荐阅读:
	https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns


	2 coreDNS的ClusterIP和集群域名是初始化master节点时自定义的,例如--service-dns-domain=supershy.com
vim  /var/lib/kubelet/config.yaml 
...
clusterDNS:
- 10.200.0.10
clusterDomain: supershy.com

	3 coreDNS的A记录 
		k8s的A记录格式:
<service name>[.<namespace name>.svc.cluster.local]

	参考案例:
kube-dns.kube-system.svc.cluster.local
supershy-mysql.default.svc.cluster.local


温馨提示:
	(1)如果部署时直接写svc的名称,不写名称空间,则默认的名称空间为其引用资源的名称空间;
	(2)kubeadm部署时,无需手动配置CoreDNS组件(默认在kube-system已创建),二进制部署时,需要手动安装该组件;


	4 测试coreDNS组件
	yum -y install bind-utils
	dig @10.200.0.10  supershy-tomcat-app.default.svc.cluster.local +short 


测试案例:
	方式一: 基于dig命令测试
[root@master231 06-services]# kubectl get svc -A
NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP         PORT(S)                  AGE
default       kubernetes                   ClusterIP      10.200.0.1      <none>              443/TCP                  2d23h
default       supershy-nginx              NodePort       10.200.254.40   <none>              88:2488/TCP              4h45m
default       supershy-svc-rs             ClusterIP      10.200.0.80     <none>              80/TCP                   2d22h
default       svc-externalname-supershy   ExternalName   <none>          www.supershy.com   <none>                   8m26s
kube-system   kube-dns                     ClusterIP      10.200.0.10     <none>              53/UDP,53/TCP,9153/TCP   7d1h
[root@master231 06-services]# 
[root@master231 06-services]# dig @10.200.0.10 kubernetes.default.svc.supershy.com +short
10.200.0.1
[root@master231 06-services]# 
[root@master231 06-services]# dig @10.200.0.10 supershy-nginx.default.svc.supershy.com +short
10.200.254.40
[root@master231 06-services]# 
[root@master231 06-services]# dig @10.200.0.10 supershy-svc-rs.default.svc.supershy.com +short
10.200.0.80
[root@master231 06-services]# 
[root@master231 06-services]# dig @10.200.0.10 svc-externalname-supershy.default.svc.supershy.com +short
[root@master231 06-services]# 
[root@master231 06-services]# 
[root@master231 06-services]# dig @10.200.0.10 kube-dns.kube-system.svc.supershy.com +short
10.200.0.10
[root@master231 06-services]# 


	方式二: 基于ping命令测试
[root@master231 06-services]# kubectl run test-dns-01 --rm -it --image=harbor.supershy.com/supershy-linux/alpine:latest -- sh
If you don't see a command prompt, try pressing enter.
/ # 
/ # 
/ # ping kube-dns.kube-system.svc.supershy.com -c 3
PING kube-dns.kube-system.svc.supershy.com (10.200.0.10): 56 data bytes
64 bytes from 10.200.0.10: seq=0 ttl=64 time=0.069 ms
64 bytes from 10.200.0.10: seq=1 ttl=64 time=0.090 ms
64 bytes from 10.200.0.10: seq=2 ttl=64 time=0.073 ms

--- kube-dns.kube-system.svc.supershy.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.069/0.077/0.090 ms
/ # 
/ # 
/ # ping supershy-svc-rs -c 3
PING supershy-svc-rs (10.200.0.80): 56 data bytes
64 bytes from 10.200.0.80: seq=0 ttl=64 time=0.038 ms
64 bytes from 10.200.0.80: seq=1 ttl=64 time=0.087 ms
64 bytes from 10.200.0.80: seq=2 ttl=64 time=0.074 ms

--- supershy-svc-rs ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.038/0.066/0.087 ms
/ # 

9. Endpoints(ep)

- endpoints资源实现K8S外部服务映射
	Endpoints 是实现实际服务的端点的集合。

	endpoints资源是用来映射一个后端资源的,这个资源可以是K8S集群内部的Pod的IP地址,也可以是K8S集群外部的一个IP地址。
	
	说白了,可以直接理解为endpoints资源就是一个存储IP列表。
	
	svc会自动关联一个同名的ep资源。当删除svc时,会级联删除与之对应的同名ep资源。

1. svc同名endpoint资源清单

[root@master231 09-endpoints]# cat 01-ep-nginx.yaml 
apiVersion: v1
kind: Endpoints
metadata:
  name: supershy-web
# 存放后端服务的集合
subsets:
  # 指定IP地址集合
- addresses:
  - ip: 10.0.0.250
    # hostname: harbor250
  # 指定端口的集合
  ports:
    # 指定后端服务的端口名称
    # 经过测试发现,对于https貌似支持的不太友好,目前K8S 1.23.17测试http协议是正常。
  - port: 88
    # 给该端口起名字
    # name: harbor-server

---

apiVersion: v1
kind: Service
metadata:
  name: supershy-web
spec:
  ports:
  - port: 80
    # name: harbor-server
    
    
	2.测试实战
[root@master231 ~]# kubectl  run  test-ep --rm --image=harbor.supershy.com/supershy-linux/alpine -it  -- sh
If you don't see a command prompt, try pressing enter.
/ # 
/ # wget supershy-web
Connecting to supershy-web (10.200.64.134:80)
saving to 'index.html'
index.html           100% |***************************************************************************************|    18  0:00:00 ETA
'index.html' saved
/ # 
/ # 
/ # cat index.html 
www.supershy.com
/ # 

10. jobs

- jobs控制器实现一次性任务类似于Linux的at命令
	1.运行资源清单
[root@master231 10-jobs]# cat 01-jobs.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: harbor.supershy.com/perl-5.34.0/perl:5.34.0
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  # 指定任务失败尝试的次数,如果不指定默认为6次。
  backoffLimit: 4
[root@master231 10-jobs]#
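
	2.查看任务状态和计算结果(示例命令,圆周率结果输出在Pod日志中)
kubectl get jobs pi
kubectl logs job/pi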

11. cronjob(cj)

- cj控制器

	CronJob 创建基于时间间隔重复调度的 Job。

	CronJob 用于执行排期操作,例如备份、生成报告等。 一个 CronJob 对象就像 Unix 系统上的 crontab(cron table)文件中的一行。 它用 Cron 格式进行编写, 并周期性地在给定的调度时间执行 Job。


	官网连接:
		https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/cron-jobs/
		
		
参考案例:
[root@master231 11-cronjobs]# cat 01-cj.yaml 
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  # 周期性调度,每分钟调度一次,分别对应的是: 分,时,日,月,周
  # 定义调度格式,参考链接:https://en.wikipedia.org/wiki/Cron
  schedule: "* * * * *"
  # 保留成功执行的jobs数量,若不指定,则默认只保留3个。
  successfulJobsHistoryLimit: 5
  # 保留失败的jobs数量,若不指定,则默认保留1个。
  failedJobsHistoryLimit: 3
  # 定义jobs模板
  jobTemplate:
    spec:
      # 定义pods模板
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@master231 11-cronjobs]# 
[root@master231 11-cronjobs]# kubectl get cj,jobs,pods
NAME                  SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/hello   * * * * *   False     0        23s             21m

NAME                       COMPLETIONS   DURATION   AGE
job.batch/hello-28423151   1/1           3s         4m23s
job.batch/hello-28423152   1/1           3s         3m23s
job.batch/hello-28423153   1/1           3s         2m23s
job.batch/hello-28423154   1/1           3s         83s
job.batch/hello-28423155   1/1           3s         23s

NAME                       READY   STATUS      RESTARTS   AGE
pod/hello-28423151-hfm89   0/1     Completed   0          4m23s
pod/hello-28423152-m57vp   0/1     Completed   0          3m23s
pod/hello-28423153-zll95   0/1     Completed   0          2m23s
pod/hello-28423154-xz5lp   0/1     Completed   0          83s
pod/hello-28423155-gcg2f   0/1     Completed   0          23s
[root@master231 11-cronjobs]# 
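
	cj还可以临时暂停调度或手动触发一次任务(示例命令):
# 暂停/恢复调度
kubectl patch cronjobs hello -p '{"spec":{"suspend":true}}'
kubectl patch cronjobs hello -p '{"spec":{"suspend":false}}'
# 基于cronjob手动触发一次job
kubectl create job hello-manual --from=cronjob/hello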

12. DaemonSet(ds)

ds在各个worker节点有且只能有一个Pod运行
	DaemonSet确保全部(或者某些)节点上运行一个 Pod 的副本。 当有节点加入集群时, 也会为他们新增一个 Pod 。 当有节点从集群移除时,这些 Pod 也会被回收。删除 DaemonSet 将会删除它创建的所有 Pod。

	DaemonSet的一些典型用法:
		- 在每个节点上运行集群守护进程
		- 在每个节点上运行日志收集守护进程
		- 在每个节点上运行监控守护进程

参考案例:
[root@master231 12-daemonsets]# cat 01-ds.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: supershy-ds-web
spec:
  selector:
    matchLabels:
      apps: nginx
  template:
    metadata:
      labels:
        apps: nginx
    spec:
      containers:
      - name: c1
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
[root@master231 12-daemonsets]# 


JSONPath模板语法格式

- kubectl的JSONPath模板语法实战
	JSONPath 模板由 {} 包起来的 JSONPath 表达式组成。Kubectl 使用 JSONPath 表达式来过滤 JSON 对象中的特定字段并格式化输出。 除了原始的 JSONPath 模板语法,以下函数和语法也是有效的:

		- 使用双引号将 JSONPath 表达式内的文本引起来。
		- 使用 range,end 运算符来迭代列表。
		- 使用负数索引从列表末尾反向访问元素。负索引不会“环绕”列表,并且只要 -index + listLength >= 0 就有效。

	JSONPath可以理解为将资源的输出格式按照我们预定义的方式进行输出,只不过这种输出方式需要学习对应的JSONPath模板语法。
	


举个例子:
	1.查看指定标签pod返回第0个元素的spec字段。
[root@master231 ~]# kubectl get pods -l job-name=pi -o jsonpath='{.items[0].spec }' 


	2.查看指定标签pod返回第0个元素的spec字段下的containers字段返回数组的第0个元素的command字段。
[root@master231 ~]# kubectl get pods -l job-name=pi -o jsonpath='{.items[0].spec.containers[0].command }' 


	3.获取容器的名称
[root@master231 ~]# kubectl get pods -l job-name=pi -o jsonpath='{.items[0].spec.containers[0].name }'
	

	4.综合案例,统计出容器名称及其重启次数。
[root@master231 10-jobs]# NAME=`kubectl get pods -l job-name=pi -o jsonpath='{.items[0].spec.containers[0].name }'`
[root@master231 10-jobs]# 
[root@master231 10-jobs]# COUNT=`kubectl get pods -l job-name=pi -o jsonpath='{.items[0].status.containerStatuses[0].restartCount }'`
[root@master231 10-jobs]# 
[root@master231 10-jobs]# echo $NAME:$COUNT
pi:0
[root@master231 10-jobs]# 


	推荐阅读:
https://kubernetes.io/zh-cn/docs/reference/kubectl/jsonpath/



- JSONPath扩展案例-自定义列
[root@master231 10-jobs]# kubectl get pods -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image,supershy_COUNT:.status.containerStatuses[0].restartCount
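
补充参考:上文提到的range,end迭代语法可以参考下面的示例,一次性输出所有Pod的名称和IP(字段均为K8S标准字段,适用于任意Pod列表):

# 遍历.items列表,每行输出Pod名称和PodIP
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'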

kubectl port-forward

"kubectl port-forward"案例:
	1.代理pods的nginx端口
[root@master231 13-scheduler]# kubectl port-forward po/deploy-podaffinity-86888d5cbd-zbr5f --address=0.0.0.0 :80
Forwarding from 0.0.0.0:44175 -> 80



	2.客户端访问10.0.0.231节点的44175端口。
http://10.0.0.231:44175/
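
补充参考:除了代理单个Pod,port-forward也可以直接代理Service或Deployment,并指定固定的本地端口(svc/my-svc为示例名称,需替换为实际存在的资源):

# 将本地8080端口转发到svc的80端口
kubectl port-forward svc/my-svc --address=0.0.0.0 8080:80
# 之后客户端即可通过 http://<执行该命令的节点IP>:8080/ 访问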

RBAC

1. role的创建及使用权限

- RBAC基于用户(User)授权案例
	1.使用k8s ca签发客户端证书
		1.1 解压证书管理工具包-cfssl
下载地址:
	https://github.com/cloudflare/cfssl/releases

[root@master231 cfssl]# unzip supershy-cfssl.zip 
[root@master231 cfssl]#
[root@master231 cfssl]# rename _1.6.4_linux_amd64 "" *
[root@master231 cfssl]#
[root@master231 cfssl]# mv cfssl* /usr/local/bin/
[root@master231 cfssl]#
[root@master231 cfssl]# chmod +x /usr/local/bin/cfssl*

		1.2 编写证书请求
[root@master231 user]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF


[root@master231 user]# cat > supershy-csr.json <<EOF
{
  "CN": "supershy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF


		1.3 生成证书
[root@master231 user]# cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes supershy-csr.json | cfssljson -bare supershy

[root@master231 user]#  cfssl-certinfo -cert supershy.pem

	注意,上一步相当于创建了User用户,对应的是supershy,生产环境中如果想自定义用户,只需要修改CSR文件中的CN名称即可。
	
	2 生成kubeconfig授权文件
		2.1 编写生成kubeconfig文件的脚本
cat > kubeconfig.sh <<'EOF'
# 配置集群,集群可以设置多套,此处只配置了一套
# --certificate-authority
#   指定K8S的ca根证书文件路径
# --embed-certs
#   如果设置为true,表示将根证书文件的内容写入到kubeconfig配置文件中;
#   如果设置为false,则kubeconfig中只引用根证书文件的路径。
# --server
#   指定APIServer的地址。
# --kubeconfig
#   指定kubeconfig的配置文件名称
kubectl config set-cluster supershy-linux89 \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.0.0.231:6443 \
  --kubeconfig=supershy-linux89.kubeconfig
 
# 设置客户端认证,客户端将来需要携带证书让服务端验证
kubectl config set-credentials supershy \
  --client-key=supershy-key.pem \
  --client-certificate=supershy.pem \
  --embed-certs=true \
  --kubeconfig=supershy-linux89.kubeconfig

# 设置默认上下文,可以用于绑定多个客户端和服务端的对应关系。
kubectl config set-context linux89 \
  --cluster=supershy-linux89 \
  --user=supershy \
  --kubeconfig=supershy-linux89.kubeconfig

# 设置当前使用的上下文
kubectl config use-context linux89 --kubeconfig=supershy-linux89.kubeconfig
EOF


		2.2 生成kubeconfig文件
bash kubeconfig.sh

	3 创建RBAC授权策略
	3.1 创建rbac等配置文件
[root@master231 user]# cat  rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: supershy-custom-role
rules:
  # API组,""表示核心组,该组包括但不限于"configmaps","nodes","pods","services"等资源.
  # 暂时这样理解:
  #    如果一个资源的apiVersion是apps/v1,则其组取"/"之前的部分,也就是apps;
  #    如果一个资源的apiVersion只有v1,则属于核心组,写作""。
  # 如果遇到不知道某资源所属哪个组的也别着急,会有报错提示,如下所示:
  #    User "supershy" cannot list resource "deployments" in API group "apps" in the namespace "default"
  # 如上所示,表示的是"deployments"所属的API组是"apps"。
- apiGroups: ["","apps"]  
  # 资源类型,不支持写简称,必须写全称哟!!
  # resources: ["pods","deployments"]  
  resources: ["pods","deployments","services"]  
  # 对资源的操作方法.
  verbs: ["get", "list"]  
  # verbs: ["get", "list","delete"]  
- apiGroups: ["","apps"]
  resources: ["configmaps","secrets","daemonsets"]
  verbs: ["get", "list"]  
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["delete","create"]  

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: supershy-linux-rbac
  namespace: default
subjects:
  # 主体类型
- kind: User  
  # 用户名
  name: supershy  
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # 角色类型
  kind: Role  
  # 绑定角色名称
  name: supershy-custom-role
  apiGroup: rbac.authorization.k8s.io
[root@master231 user]# 


		3.2 应用rbac授权
[root@master231 user]# kubectl apply -f  rbac.yaml
role.rbac.authorization.k8s.io/supershy-custom-role created
rolebinding.rbac.authorization.k8s.io/supershy-linux-rbac created
[root@master231 user]# 

		3.3 访问测试
[root@master231 user]# kubectl get po,cm,svc --kubeconfig=supershy-linux89.kubeconfig 
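
补充参考:也可以顺便验证未授权的动作会被拒绝,例如该Role没有授予pods的delete权限,删除操作预期返回Forbidden(test-pod为示例Pod名称):

# 已授权的动作:list正常返回
kubectl get pods --kubeconfig=supershy-linux89.kubeconfig
# 未授权的动作:delete不在verbs列表中,预期报"forbidden"错误
kubectl delete pods test-pod --kubeconfig=supershy-linux89.kubeconfig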

		3.4 回收权限
[root@master231 user]# kubectl delete -f rbac.yaml 

2. kubectl识别kubeconfig的三种方法

- 彩蛋:kubectl识别kubeconfig文件的三种方法
	1 方法一,使用别名
[root@worker233 ~]# vim ~/.bashrc 
...
alias kubectl='kubectl --kubeconfig=/root/supershy-linux.kubeconfig'
...
[root@worker233 ~]# source ~/.bashrc



	2 方法二,放置默认的kubeconfig读取位置
[root@worker233 ~]# mkdir -pv ~/.kube/
[root@worker233 ~]# 
[root@worker233 ~]# cp supershy-linux.kubeconfig ~/.kube/config
	

	3 方法三,使用环境变量
[root@worker233 ~]# cat /etc/profile.d/kubeconfig.sh 
#!/bin/bash

export KUBECONFIG=/root/supershy-linux.kubeconfig
[root@worker233 ~]# 
[root@worker233 ~]# source /etc/profile.d/kubeconfig.sh 
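
补充参考:三种方式任选其一配置完成后,可以用"kubectl auth can-i"快速确认当前kubeconfig的权限是否符合预期(返回yes或no):

# 确认当前身份能否list pods
kubectl auth can-i list pods
# 也可以显式指定kubeconfig文件进行验证
kubectl auth can-i list deployments --kubeconfig=/root/supershy-linux.kubeconfig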

3. group的创建及使用权限

- RBAC基于用户组(Group)授权案例
	1 RBAC基于组的方式认证
用户组的好处是无需单独为某个用户创建权限,统一为这个组名进行授权,所有的用户都以组的身份访问资源。

  
温馨提示:
	(1)APIserver会优先校验用户名(CN字段),若用户名没有对应的权限,则再去校验用户组(O)的权限。
        CN:
            CN标识的是用户名称,比如"supershy"。
        O:
            O标识的是用户组,比如"dev"组。
	
	(2)用户,用户组都是提取证书中的一个字段,不是在集群中创建的。
 

RBAC基于组的方式认证:
	CN: 代表用户,
	O: 组。
	
	2 将jasonyin用户添加到supershy组
		2.1使用k8s ca签发客户端证书
			2.1.1 编写证书请求
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
cat > supershy-csr.json <<EOF
{
  "CN": "supershy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "supershy",
      "OU": "System"
    }
  ]
}
EOF


		2.1.2 生成证书
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes supershy-csr.json | cfssljson -bare supershy-groups

	2.2 生成kubeconfig授权文件
		2.2.1 编写生成kubeconfig文件的脚本
cat > kubeconfig.sh <<'EOF'
kubectl config set-cluster supershy-linux-groups \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.0.0.231:6443 \
  --kubeconfig=supershy-linux89.kubeconfig
 
# 设置客户端认证
kubectl config set-credentials supershy \
  --client-key=supershy-groups-key.pem \
  --client-certificate=supershy-groups.pem \
  --embed-certs=true \
  --kubeconfig=supershy-linux89.kubeconfig

# 设置默认上下文
kubectl config set-context linux-groups \
  --cluster=supershy-linux-groups \
  --user=supershy \
  --kubeconfig=supershy-linux89.kubeconfig

# 设置当前使用的上下文
kubectl config use-context linux-groups --kubeconfig=supershy-linux89.kubeconfig
EOF

		2.2.2 生成kubeconfig文件
bash kubeconfig.sh



	2.3 创建RBAC授权策略
[root@master231 group]# cat rbac.yaml 
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: supershy-custom-role-reader
rules:
  # API组,""表示核心组,该组包括但不限于"configmaps","nodes","pods","services"等资源.
  # "extensions"组对于低于k8s 1.15版本而言,deployment资源在该组内,但高于k8s1.15版本,则为apps组。
  #
  # 想要知道哪个资源使用在哪个组,我们只需要根据"kubectl api-resources"命令等输出结果就可以轻松判断哟~
  # API组,""表示核心组。
- apiGroups: ["","apps"]  
  # 资源类型,不支持写简称,必须写全称哟!!
  resources: ["pods","nodes","services","deployments"]  
  # 对资源的操作方法.
  verbs: ["get", "watch", "list"]  

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: supershy-group-rolebinding
  namespace: default
subjects:
  # 主体类型
- kind: Group
  # 组名(对应的O字段)
  name: supershy  
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # 角色类型
  kind: Role  
  # 绑定角色名称
  name: supershy-custom-role-reader
  apiGroup: rbac.authorization.k8s.io

[root@master231 group]# 
[root@master231 group]# kubectl apply -f rbac.yaml 
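
补充参考:apply之后可以先确认证书里的O字段确实是supershy组,再用生成的kubeconfig做访问测试(文件名与上文生成的保持一致):

# 查看证书subject,确认 CN=supershy、O=supershy
openssl x509 -in supershy-groups.pem -noout -subject
# 使用kubeconfig验证组权限,预期可以list pods/svc/deploy
kubectl get pods,svc,deploy --kubeconfig=supershy-linux89.kubeconfig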

4. 将user加入group

# 修改user权限只需要修改对应group的权限就行(rbac.yaml)
	3 将linux89用户添加到supershy组
		3.1 使用k8s ca签发客户端证书
			3.1.1 编写证书请求
mkdir linux89 && cd linux89
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
cat > supershy-csr.json <<EOF
{
  "CN": "linux89",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "supershy",
      "OU": "System"
    }
  ]
}
EOF

		3.1.2 生成证书
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes supershy-csr.json | cfssljson -bare linux89

 

	3.2 生成kubeconfig授权文件
		3.2.1编写生成kubeconfig文件的脚本
cat > kubeconfig.sh <<'EOF'
kubectl config set-cluster linux89-cluster \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true \
  --server=https://10.0.0.231:6443 \
  --kubeconfig=linux89.kubeconfig
 
# 设置客户端认证
kubectl config set-credentials linux89 \
  --client-key=linux89-key.pem \
  --client-certificate=linux89.pem \
  --embed-certs=true \
  --kubeconfig=linux89.kubeconfig

# 设置默认上下文
kubectl config set-context linux89-supershy \
  --cluster=linux89-cluster \
  --user=linux89 \
  --kubeconfig=linux89.kubeconfig

# 设置当前使用的上下文
kubectl config use-context  linux89-supershy --kubeconfig=linux89.kubeconfig
EOF


		3.2.2 生成kubeconfig文件
bash kubeconfig.sh


	3.3 linux89用户测试访问
[root@master231 linux89]# kubectl get pods --kubeconfig=linux89.kubeconfig 
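
补充参考:如果想一次性查看linux89用户通过supershy组拿到了哪些权限,可以使用"auth can-i --list"(输出以实际集群为准):

kubectl auth can-i --list --kubeconfig=linux89.kubeconfig
# 验证未授权的操作会被拒绝,例如delete
kubectl auth can-i delete pods --kubeconfig=linux89.kubeconfig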

5. sa-serviceaccount

- 基于服务账号serviceaccount授权案例
	1 serviceaccount概述
所谓的serviceaccount简称"sa",一般用作Pod内程序访问K8S API时所使用的账号。

	2.2 创建sa资源
		2.2.1 响应式创建serviceAccounts
[root@master231 serviceaccounts]# kubectl get sa
NAME      SECRETS   AGE
default   1         7d23h
[root@master231 serviceaccounts]# 
[root@master231 serviceaccounts]# kubectl create serviceaccount supershy-linux89
serviceaccount/supershy-linux89 created
[root@master231 serviceaccounts]# 
[root@master231 serviceaccounts]# kubectl get sa
NAME                 SECRETS   AGE
default              1         7d23h
supershy-linux89   1         2s
[root@master231 serviceaccounts]# 

		2.2.2 声明式创建serviceaccount
[root@master231 serviceaccounts]# cat 01-sa.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: supershy-linux89-jiaoshi05
  
  
	3 授权容器中的Python程序对K8S API访问权限案例
		3.1 授权容器中Python程序对K8S API访问权限步骤:
- 创建Role;
- 创建ServiceAccount;
- 将ServiceAccount与Role绑定;
- 为Pod指定自定义的SA;
- 进入容器执行Python程序测试操作K8S API权限;

		3.2基于服务账号授权案例
			3.2.1 创建服务账号
[root@master231 sa]# cat 01-sa.yaml 
apiVersion: v1
kind: ServiceAccount 
metadata:
  name: supershy-python 
[root@master231 sa]# 

			3.2.2 创建角色
[root@master231 sa]# cat 02-Role.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: supershy-pod-reader 
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
[root@master231 sa]# 


			3.2.3 创建角色绑定
[root@master231 sa]# cat 03-RoleBinding.yaml 
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: supershy-sa-to-role
subjects:
- kind: ServiceAccount 
  name: supershy-python
roleRef:
  kind: Role
  name: supershy-pod-reader
  apiGroup: rbac.authorization.k8s.io
[root@master231 sa]# 



			3.2.4 部署python的Pod(模拟你是运维开发)
[root@master231 sa]# cat 04-deploy-python.yaml 
# apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: supershy-deploy-python-sa
spec:
  replicas: 2
  selector:
    matchLabels:
      apps: python
  template:
    metadata:
      labels:
         apps: python
    spec:
      # 指定sa的名称,请确认该账号是有权限访问K8S集群的哟!
      serviceAccountName: supershy-python
      containers:
      - image: harbor.supershy.com/supershy-test/python:3.9.16-alpine3.16
        name: py
        command:
        - tail 
        - -f
        - /etc/hosts
[root@master231 sa]#  

		3.2.5 编写Python程序,进入到"python"Pod所在的容器执行以下Python代码即可!
[root@master231 sa]# kubectl get pods -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE                   NOMINATED NODE   READINESS GATES
supershy-deploy-python-sa-dd745966-cxmm7   1/1     Running   0          32s   10.100.2.125   worker233   <none>           <none>
supershy-deploy-python-sa-dd745966-mc446   1/1     Running   0          32s   10.100.2.124   worker233   <none>           <none>
[root@master231 sa]# 
[root@master231 sa]# 
[root@master231 sa]# kubectl exec -it supershy-deploy-python-sa-dd745966-cxmm7 -- sh
/ # 
/ # python --version
Python 3.9.16

/ #  cat > view-k8s-resources.py <<EOF
from kubernetes import client, config

with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f:
     token = f.read()

configuration = client.Configuration()
configuration.host = "https://kubernetes"  # APISERVER地址
configuration.ssl_ca_cert="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"  # CA证书 
configuration.verify_ssl = True   # 启用证书验证
configuration.api_key = {"authorization": "Bearer " + token}  # 指定Token字符串
client.Configuration.set_default(configuration)
apps_api = client.AppsV1Api() 
core_api = client.CoreV1Api() 
try:
  print("###### Deployment列表 ######")
  #列出default命名空间所有deployment名称
  for dp in apps_api.list_namespaced_deployment("default").items:
    print(dp.metadata.name)
except:
  print("没有权限访问Deployment资源!")

try:
  #列出default命名空间所有pod名称
  print("###### Pod列表 ######")
  for po in core_api.list_namespaced_pod("default").items:
    print(po.metadata.name)
except:
  print("没有权限访问Pod资源!")
EOF


		3.2.6 安装Python程序依赖的软件包并测试
pip install kubernetes -i https://pypi.tuna.tsinghua.edu.cn/simple/
python3 view-k8s-resources.py
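
补充参考:除了Python客户端,也可以在容器内直接用sa挂载的token调用APIServer做快速验证(仅为思路示例,镜像内若没有curl可改用wget或先自行安装):

# 在Pod容器内执行;token与CA证书由serviceAccount自动挂载到固定路径
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes/api/v1/namespaces/default/pods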



	4.给sa账号添加权限
[root@master231 serviceaccounts]# cat 02-Role.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: supershy-pod-reader 
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "watch", "list"]
[root@master231 serviceaccounts]# 
[root@master231 serviceaccounts]# kubectl apply -f 02-Role.yaml 
role.rbac.authorization.k8s.io/supershy-pod-reader configured
[root@master231 serviceaccounts]# 

附加组件-dashboard图形化管理k8s

1. 基本部署

- dashboard以图形化方式管理K8S集群
	1 Dashboard概述
Dashboard是K8S集群管理的一个GUI的WebUI实现,它是一个k8s附加组件,所以需要单独部署。

我们可以以图形化的方式创建k8s资源。


GitHub地址:
	https://github.com/kubernetes/dashboard#kubernetes-dashboard

	
	2 部署dashboard组件
		2.1 查看K8S对于dashboard的版本依赖
			2.1.1 查看k8s 1.15版本依赖的dashboard(这种查看方式仅适用于k8s 1.17-)
	https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#unchanged
	
			2.1.2 查看K8S 1.17+依赖可以直接在dashboard官网查看
参考链接:
	https://github.com/kubernetes/dashboard/releases/tag/v2.5.1
	
	

	3 部署dashboard
		3.1 下载k8s 1.23版本依赖的dashboard
[root@master231 dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml




		3.2 修改kubernetes-dashboard.yaml配置文件
目的是指定私有仓库镜像,并将svc的类型改为NodePort。

 
		3.3 部署dashboard组件
kubectl apply -f kubernetes-dashboard.yaml 


		3.4 访问Dashboard的WebUI
https://10.0.0.233:8443/


温馨提示:
	如果页面打不开,可以单击鼠标在页面空白处,然后依次输入: "thisisunsafe"
	
	
	注意,是小写字母,如果Google浏览器不好使,可以使用火狐浏览器访问。

2. 基于token方式登录

	4 登录dashboard
		4.1 基于token登录
			4.1.1 官方默认的sa账号权限不足
[root@master231 dashboard]# kubectl -n kubernetes-dashboard describe sa  kubernetes-dashboard | grep Tokens
Tokens:              kubernetes-dashboard-token-2q94n
[root@master231 dashboard]# kubectl -n kubernetes-dashboard describe secrets kubernetes-dashboard-token-2q94n

		4.1.2 解决权限不足方案
			4.1.2.1 编写K8S的yaml资源清单文件
cat > supershy-dashboard-rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  # 创建一个名为"supershy"的账户
  name: supershy
  namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-supershy
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  # 既然绑定的是集群角色,那么类型也应该为"ClusterRole",而不是"Role"哟~
  kind: ClusterRole
  # 关于集群角色可以使用"kubectl get clusterrole | grep admin"进行过滤哟~
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    # 此处要注意哈,绑定的要和我们上面的服务账户一致哟~
    name: supershy
    namespace: kubernetes-dashboard
EOF
 
			4.1.2.2 创建资源清单
kubectl apply -f supershy-dashboard-rbac.yaml


			4.1.2.3 查看sa资源的Tokens名称
[root@master231 dashboard]# kubectl describe serviceaccounts -n kubernetes-dashboard  supershy | grep Tokens
Tokens:              supershy-token-pbj9j
[root@master231 dashboard]# 


			4.1.2.4 根据上一步的token名称的查看token值
[root@master231 dashboard]# kubectl -n kubernetes-dashboard describe secrets supershy-token-pbj9j | awk '/^token/{print $2}'

3. 基于kubeconfig方式登录

	4.2 基于kubeconfig文件登录
		4.2.1 编写生成kubeconf的配置文件的脚本
cat > supershy-generate-context-conf.sh <<'EOF'
#!/bin/bash
# auther: Jason Yin


# 获取secret的名称
SECRET_NAME=`kubectl get secrets -n kubernetes-dashboard  | grep supershy | awk '{print $1}'`

# 指定API SERVER的地址
API_SERVER=10.0.0.231:6443

# 指定kubeconfig配置文件的路径名称
KUBECONFIG_NAME=./supershy-k8s-dashboard-admin.conf

# 获取supershy用户的token
supershy_TOCKEN=`kubectl get secrets -n kubernetes-dashboard $SECRET_NAME -o jsonpath={.data.token} | base64 -d`

# 在kubeconfig配置文件中设置群集项
kubectl config set-cluster supershy-k8s-dashboard-cluster --server=$API_SERVER --kubeconfig=$KUBECONFIG_NAME

# 在kubeconfig中设置用户项
kubectl config set-credentials supershy-k8s-dashboard-user --token=$supershy_TOCKEN --kubeconfig=$KUBECONFIG_NAME

# 配置上下文,即绑定用户和集群的上下文关系,可以将多个集群和用户进行绑定哟~
kubectl config set-context supershy-admin --cluster=supershy-k8s-dashboard-cluster --user=supershy-k8s-dashboard-user --kubeconfig=$KUBECONFIG_NAME

# 配置当前使用的上下文
kubectl config use-context supershy-admin --kubeconfig=$KUBECONFIG_NAME
EOF

bash supershy-generate-context-conf.sh


		4.2.2 运行上述脚本并下载生成的配置文件到本地,在dashboard登录页面选择kubeconfig方式并选择该文件进行登录
sz supershy-k8s-dashboard-admin.conf



		4.2.3 进入到dashboard的WebUI
我们可以访问任意的Pod,当然也可以直接进入到有终端的容器哟。

附加组件-metrics-server及hpa水平伸缩

1. metrics-server环境部署

- metrics-server环境部署
	1 metrics-server概述
Metrics Server从kubelets收集资源指标,并通过Metrics API将它们暴露在Kubernetes apiserver中,以供HPA(Horizontal Pod Autoscaler)和VPA(Vertical Pod Autoscaler)使用。

Metrics API也可以通过kubectl top访问,并为hpa资源的自动扩缩容提供所需的指标数据。

参考链接:
	https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
	https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/
	https://github.com/kubernetes-sigs/metrics-server
	
	
	2 部署metric-server
		2.1 下载资源清单
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

	
		2.2 修改资源清单,修改deploy资源两处
[root@master231 metrics-server]# vim high-availability-1.21+.yaml 
...
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      # 在args中添加"--kubelet-insecure-tls"参数,并将image字段替换为可正常拉取的镜像地址。
      - args:
        - --kubelet-insecure-tls
        # image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.3
		
		
		2.3 创建应用
[root@master231 metrics-server]# kubectl apply -f high-availability-1.21+.yaml 
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
poddisruptionbudget.policy/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master231 metrics-server]# 


		2.4 检查状态
[root@master231 metrics-server]# kubectl -n kube-system get pods  | grep metrics-server

	2.5 验证 metrics-server是否正常
[root@master231 metrics-server]# kubectl top node 

[root@master231 metrics-server]# kubectl top pods
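
补充参考:若kubectl top暂时取不到数据,可以按下面的思路排查(常见排查方式示例):

# 确认metrics的聚合API状态是否为Available=True
kubectl get apiservices v1beta1.metrics.k8s.io
# 查看metrics-server的日志进一步定位问题
kubectl -n kube-system logs deploy/metrics-server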

2. 基于scale手动伸缩pods副本数量

	3.手动伸缩pods数量-三种方式
		3.1 修改资源清单的副本数
[root@master231 metrics-server]# vim deploy-stress.yaml 
...
spec:
  replicas: 5
  
  
		3.2 响应式修改Pod的副本数量,交互式
[root@master231 metrics-server]# kubectl edit deployments.apps deploy-strees 
...
spec:
  replicas: 5

		3.3 scale响应式手动伸缩Pod副本数量,非交互
[root@master231 metrics-server]# kubectl scale deployment deploy-strees --replicas=5
deployment.apps/deploy-strees scaled

3. 基于hpa水平pods自动伸缩

- hpa水平Pod自动伸缩案例
  hpa资源依赖于metrics-server组件
	1 编写资源清单
[root@master231 metrics-server]# cat deploy-stress-hpa.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: supershy-stress-hpa
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: stress
  template:
    metadata:
      labels:
        apps: stress
    spec:
      containers:
      - name: stress
        image: jasonyin2020/supershy-linux-tools:v0.1
        command: 
        - tail
        - -f
        - /etc/hosts
        resources:
          requests:
            cpu: "50m"
          limits:
            cpu: "150m"

---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: supershy-linux-tools-stress
spec:
  # 指定最大的Pod数量
  maxReplicas: 10
  # 指定最小的Pod数量
  minReplicas: 2
  # 弹性伸缩引用目标
  scaleTargetRef:
    # 目标的API版本
    # apiVersion: extensions/v1beta1
    apiVersion: "apps/v1"
    # 目标的类型
    kind: Deployment
    # 目标的名称
    name: supershy-stress-hpa
  # 使用CPU的阈值
  targetCPUUtilizationPercentage: 95
[root@master231 metrics-server]# 


	2 进行压力测试
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout  20m
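
压测期间建议另开一个终端持续观察hpa的指标与副本数变化(资源名称沿用上文清单,-w表示watch):

# 观察TARGETS列的CPU使用率及REPLICAS的变化
kubectl get hpa supershy-linux-tools-stress -w
# 配合查看各Pod的实际CPU消耗
kubectl top pods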

	3 补充说明
		3.1 hpa规则同样也支持响应式创建,命令如下:
[root@master231 ~]# kubectl get deployments.apps 
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           4h44m
supershy-stress-hpa     2/2     2            2           20m
[root@master231 ~]# 
[root@master231 ~]# 
[root@master231 ~]# kubectl autoscale deployment supershy-stress-hpa --min=3 --max=5 --cpu-percent=75
horizontalpodautoscaler.autoscaling/supershy-stress-hpa autoscaled
[root@master231 ~]# 
[root@master231 ~]# kubectl get hpa
NAME                   REFERENCE                         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
supershy-stress-hpa   Deployment/supershy-stress-hpa   <unknown>/75%   3         5         2          27s
[root@master231 ~]# 
[root@master231 ~]# kubectl get hpa
NAME                   REFERENCE                         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
supershy-stress-hpa   Deployment/supershy-stress-hpa   2%/75%    3         5         3          64s
[root@master231 ~]# 

		3.2 hpa会自动伸缩,要注意以下几点:
- 当资源使用超过阈值,会自动扩容,但是扩容数量不会超过最大Pod数量;
- 扩容响应较快,HPA默认约每15秒检测一次指标,一旦发现资源使用超过阈值就会新建Pod;
- 当资源使用率恢复到阈值以下时,需要等待一段时间才会释放资源,大概5min;

13. quota-resourcequotas资源配额限制

0. 概述

- 资源配额ResourceQuota概述
	1.资源配额概述
当多个用户或团队共享具有固定节点数目的集群时,人们会担心有人使用超过其基于公平原则所分配到的资源量。

资源配额是帮助管理员解决这一问题的工具。

资源配额,通过 ResourceQuota 对象来定义,对每个命名空间的资源消耗总量提供限制。 它可以限制命名空间中某种类型的对象的总数目上限,也可以限制命名空间中的 Pod 可以使用的计算资源的总上限。

参考链接:
	https://kubernetes.io/zh-cn/docs/concepts/policy/resource-quotas/

	2.资源配额ResourceQuota的工作方式
- 不同的团队可以在不同的命名空间下工作。这可以通过RBAC强制执行。

- 集群管理员可以为每个命名空间创建一个或多个ResourceQuota对象。

- 当用户在命名空间下创建资源(如 Pod、Service 等)时,Kubernetes 的配额系统会跟踪集群的资源使用情况, 以确保使用的资源用量不超过 ResourceQuota 中定义的硬性资源限额。

- 如果资源创建或者更新请求违反了配额约束,那么该请求会报错(HTTP 403 FORBIDDEN), 并在消息中给出有可能违反的约束。

- 如果命名空间下的计算资源 (如 cpu 和 memory)的配额被启用, 则用户必须为这些资源设定请求值(request)和约束值(limit),否则配额系统将拒绝 Pod 的创建。 提示: 可使用 LimitRanger 准入控制器来为没有设置计算资源需求的 Pod 设置默认值。


参考链接:
	https://www.cnblogs.com/supershy/p/17955506

ResourceQuota与LimitRange对比:
	- 作用范围:ResourceQuota作用于整个命名空间;LimitRange作用于命名空间内的单个对象(Pod、Container、PVC)。
	- 控制对象:ResourceQuota限制命名空间内所有资源的总量(如cpu、内存、Pod数量等);LimitRange限制单个容器的资源请求与上限。
	- 主要用途:ResourceQuota限制命名空间可使用的总资源量;LimitRange限制单个容器的资源配置范围。
	- 功能:ResourceQuota设置命名空间的资源上限;LimitRange为容器设置资源的最大值、最小值、默认值。
	- 应用场景:ResourceQuota控制整个命名空间的资源配额;LimitRange控制单个容器的资源配置,确保合理分配资源。

1. 资源配额ResourceQuota之限制计算资源实战

- 资源配额ResourceQuota之限制计算资源实战
	1.创建资源配额
[root@master231 16-resourcequotas]# cat 01-compute-resources.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: kube-public
spec:
  # 定义硬性配置
  hard:
    # 配置cpu 的相关参数
    requests.cpu: "1"
    limits.cpu: "2"
    # 定义memory相关的参数
    requests.memory: 2Gi
    limits.memory: 3Gi
    # 定义GPU相关的参数
    # requests.nvidia.com/gpu: 4

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: supershy-stress
  namespace: kube-public
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: stress
  template:
    metadata:
      labels:
        apps: stress
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: stress
        image: harbor.supershy.com/web/nginx:1.20.1-alpine
        command: ["tail","-f","/etc/hosts"]
        resources:
          requests:
            # cpu: "50m"
            cpu: 0.5
            memory: 200Mi
          limits:
            # cpu: "150m"
            cpu: 1
            memory: 500Mi
[root@master231 16-resourcequotas]#
	2.验证资源
[root@master231 16-resourcequotas]# kubectl get po,deploy,rs,quota -n kube-public 
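
补充参考:除了get,用describe可以更直观地看到配额的已用量(Used)与上限(Hard),以及超额Pod被拒绝的原因(命令为通用验证方式):

kubectl -n kube-public describe resourcequota compute-resources
# 超出配额的Pod不会被创建,可以通过rs的events查看被拒绝的提示
kubectl -n kube-public describe rs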

2. 资源配额ResourceQuota之限制资源数量实战

- 资源配额ResourceQuota之限制资源数量实战
	1.编写资源清单
[root@master231 16-resourcequotas]# cat 02-object-counts.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: kube-public
spec:
 hard:
   pods: "5"
   count/deployments.apps: "3"
   count/services: "3"


--- 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: supershy-stress-objects
  namespace: kube-public
spec:
  replicas: 10
  selector:
    matchLabels:
      apps: stress
  template:
    metadata:
      labels:
        apps: stress
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: stress
        image: harbor.supershy.com/web/nginx:1.20.1-alpine
        command: ["tail","-f","/etc/hosts"]
        resources:
          requests:
            cpu: "50m"
            memory: 200Mi
          limits:
            cpu: "200m"
            memory: 500Mi
[root@master231 16-resourcequotas]# 



	2.验证资源
[root@master231 16-resourcequotas]# kubectl get po,deploy,rs,quota -n kube-public 

3. 资源配额ResourceQuota之限制存储资源实战

- 资源配额ResourceQuota之限制存储资源实战
	1.编写资源清单
[root@master231 16-resourcequotas]# cat 03-storage-reources.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-resources
  namespace: kube-public
spec:
  hard:
    requests.storage: "10Gi"

--- 

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-01
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-02
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
[root@master231 16-resourcequotas]# 

	
	2.验证资源
[root@master231 16-resourcequotas]# kubectl apply -f 03-storage-reources.yaml 

14. LimitRange容器资源申请限制

0. 概述

- LimitRange限制Pod的申请资源
	1.什么是资源限制
默认情况下, Kubernetes集群上的容器运行使用的计算资源没有限制。 

使用Kubernetes资源配额, 管理员(也称为集群操作者)可以在一个指定的命名空间内限制集群资源的使用与创建。 在命名空间中,一个 Pod 最多能够使用命名空间的资源配额所定义的 CPU 和内存用量。 

作为集群操作者或命名空间级的管理员,你可能也会担心如何确保一个Pod不会垄断命名空间内所有可用的资源。

LimitRange是限制命名空间内可为每个适用的对象类别 (例如 Pod 或 PersistentVolumeClaim) 指定的资源分配量(限制和请求)的策略对象。

一个 LimitRange(限制范围) 对象提供的限制能够做到:
	- 在一个命名空间中实施对每个 Pod 或 Container 最小和最大的资源使用量的限制。
	- 在一个命名空间中实施对每个 PersistentVolumeClaim 能申请的最小和最大的存储空间大小的限制。
	- 在一个命名空间中实施对一种资源的申请值和限制值的比值的控制。
	- 设置一个命名空间中对计算资源的默认申请/限制值,并且自动的在运行时注入到多个 Container 中。


当某命名空间中有一个 LimitRange 对象时,将在该命名空间中实施LimitRange限制。LimitRange 的名称必须是合法的 DNS 子域名。


参考链接:
	https://kubernetes.io/zh-cn/docs/concepts/policy/limit-range/
	
	
	2.资源限制LimitRange和请求的约束
参考链接:
	https://www.cnblogs.com/supershy/p/17968870

1. 资源限制LimitRange之计算资源最大,最小限制

- 资源限制LimitRange之计算资源最大,最小限制
	1.编写资源清单
[root@master231 17-limitranges]# cat 01-cpu-memory-min-max.yaml 
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-min-max
  namespace: kube-public
spec:
  limits:
    # 容器能设置limit的最大值
  - max:
      cpu: 2
      memory: 4Gi
    # 容器能设置limit的最小值
    min:
      cpu: 200m
      memory: 100Mi
    # 限制的类型是容器
    type: Container

---

apiVersion: v1
kind: Pod
metadata:
  name: pods-01
  namespace: kube-public
spec:
  containers:
  - name: web
    image: nginx:1.20.1-alpine
    resources:
      requests:
        cpu: 0.1
        memory: 1Gi
      limits:
        cpu: 1
        memory: 2Gi

---

apiVersion: v1
kind: Pod
metadata:
  name: pods-02
  namespace: kube-public
spec:
  containers:
  - name: web
    image: nginx:1.20.1-alpine
    resources:
      requests:
        cpu: 0.5
        memory: 1Gi
      limits:
        cpu: 1
        memory: 5Gi


---

apiVersion: v1
kind: Pod
metadata:
  name: pods-03
  namespace: kube-public
spec:
  containers:
  - name: web
    image: nginx:1.20.1-alpine

---
apiVersion: v1
kind: Pod
metadata:
  name: pods-04
  namespace: kube-public
spec:
  containers:
  - name: web
    image: nginx:1.20.1-alpine
    resources:
      requests:
        cpu: 0.5
        memory: 1Gi
      limits:
        cpu: 2
        memory: 1.5Gi
[root@master231 17-limitranges]# 

	
	2.验证资源
[root@master231 17-limitranges]# kubectl apply -f 01-cpu-memory-min-max.yaml 
limitrange/cpu-memory-min-max created
pod/pods-03 created
pod/pods-04 created
Error from server (Forbidden): error when creating "01-cpu-memory-min-max.yaml": pods "pods-01" is forbidden: minimum cpu usage per Container is 200m, but request is 100m
Error from server (Forbidden): error when creating "01-cpu-memory-min-max.yaml": pods "pods-02" is forbidden: maximum memory usage per Container is 4Gi, but limit is 5Gi
[root@master231 17-limitranges]# 
[root@master231 17-limitranges]# kubectl -n kube-public get pods,limits
NAME          READY   STATUS    RESTARTS   AGE
pod/pods-03   0/1     Pending   0          15s
pod/pods-04   1/1     Running   0          15s
[root@master231 17-limitranges]# kubectl -n kube-public get pods pods-03 -o yaml  | grep resources -A 6
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
      requests:
        cpu: "2"
        memory: 4Gi

2. 资源限制LimitRange之计算资源默认值限制

- 资源限制LimitRange之计算资源默认值限制
	1.编写资源清单
[root@master231 17-limitranges]# cat 02-cpu-memory-default.yaml 
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-memory-min-max-default
  namespace: kube-public
spec:
  limits:
  - max:
      cpu: 2
      memory: 4Gi
    min:
      cpu: 200m
      memory: 100Mi
    type: Container
    # 设置默认值的Request
    # defaultRequest 是资源请求被省略时按资源名称设定的默认资源要求请求值。
    defaultRequest:
      cpu: 200m
      memory: 500Mi
    # 设置默认值的Limit
    # 资源限制被省略时按资源名称设定的默认资源要求限制值。
    default:
      cpu: 1
      memory: 2Gi

---

apiVersion: v1
kind: Pod
metadata:
  name: pods-05
  namespace: kube-public
spec:
  containers:
  - name: web
    image: nginx:1.20.1-alpine
[root@master231 17-limitranges]# 


	2.验证资源
[root@master231 17-limitranges]# kubectl apply -f 02-cpu-memory-default.yaml 
limitrange/cpu-memory-min-max-default created
pod/pods-05 created
[root@master231 17-limitranges]# 
[root@master231 17-limitranges]# 
[root@master231 17-limitranges]# kubectl get pods,limits -n kube-public 
NAME          READY   STATUS    RESTARTS   AGE
pod/pods-05   1/1     Running   0          9s

NAME                                    CREATED AT
limitrange/cpu-memory-min-max-default   2024-01-18T02:57:03Z
[root@master231 17-limitranges]# 
[root@master231 17-limitranges]# kubectl -n kube-public get pods pods-05 -o yaml | grep resources -A 6
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: 200m
        memory: 500Mi
[root@master231 17-limitranges]# 
[root@master231 17-limitranges]# kubectl -n kube-public describe limitrange/cpu-memory-min-max-default 
Name:       cpu-memory-min-max-default
Namespace:  kube-public
Type        Resource  Min    Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---    ---  ---------------  -------------  -----------------------
Container   cpu       200m   2    200m             1              -
Container   memory    100Mi  4Gi  500Mi            2Gi            -
[root@master231 17-limitranges]# 

3. 资源限制LimitRange之存储资源最大,最小限制

- 资源限制LimitRange之存储资源最大,最小限制
	1.编写资源清单
[root@master231 17-limitranges]# cat 03-storage-min-max.yaml 
apiVersion: v1
kind: LimitRange
metadata:
  name: storage-min-max
  namespace: kube-public
spec:
  limits:
  - type: PersistentVolumeClaim
    max:
      storage: 10Gi
    min:
      storage: 1Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-01
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 0.5Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-02
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-03
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

---


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-04
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

[root@master231 17-limitranges]# 

	
	2.验证资源
[root@master231 17-limitranges]# kubectl apply -f 03-storage-min-max.yaml 
limitrange/storage-min-max created
persistentvolumeclaim/pvc-03 created
persistentvolumeclaim/pvc-04 created
Error from server (Forbidden): error when creating "03-storage-min-max.yaml": persistentvolumeclaims "pvc-01" is forbidden: minimum storage usage per PersistentVolumeClaim is 1Gi, but request is 512Mi
Error from server (Forbidden): error when creating "03-storage-min-max.yaml": persistentvolumeclaims "pvc-02" is forbidden: maximum storage usage per PersistentVolumeClaim is 10Gi, but request is 15Gi
[root@master231 17-limitranges]# 
[root@master231 17-limitranges]# 
[root@master231 17-limitranges]# kubectl -n kube-public describe limits

15. pv-PersistentVolume

PersistentVolume:
	简称pv,作用就是和后端存储进行关联。
	
PersistentVolumeClaim
	简称pvc,作用是和pv进行一对一的关联。
	
StorageClass:
	简称sc,作用就是动态创建pv。


- Pod的数据存储进阶pv和pvc
	1.为什么需要动态存储
1.1 传统基于存储卷的方式挂载的缺点
1.2 引入PV和PVC实现后端存储解耦
1.3 引入动态存储类实现自动创建PV 
	 

	2.持久卷Persistent Volume环境创建(简称"PV")  
		2.1 编写PV资源清单
cat > manual-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: supershy-linux-pv01
  labels:
    school: supershy
spec:
   # 声明PV的访问模式,常用的有"ReadWriteOnce","ReadOnlyMany"和"ReadWriteMany":
   #   ReadWriteOnce:(简称:"RWO")
   #      只允许单个worker节点读写存储卷,但是该节点的多个Pod是可以同时访问该存储卷的。
   #   ReadOnlyMany:(简称:"ROX")
   #      允许多个worker节点进行只读存储卷。
   #   ReadWriteMany:(简称:"RWX")
   #      允许多个worker节点进行读写存储卷。
   #   ReadWriteOncePod:(简称:"RWOP")
   #       该卷可以通过单个Pod以读写方式装入。
   #       如果您想确保整个集群中只有一个pod可以读取或写入PVC,请使用ReadWriteOncePod访问模式。
   #       这仅适用于CSI卷和Kubernetes版本1.22+。
   accessModes:
   - ReadWriteMany
   # 声明存储卷的类型为nfs
   nfs:
     path: /supershy/data/kubernetes/pv/linux/pv001
     server: 10.0.0.231
   # 指定存储卷的回收策略,常用的有"Retain"和"Delete"
   #    Retain:
   #       "保留回收"策略允许手动回收资源。
   #       删除PersistentVolumeClaim时,PersistentVolume仍然存在,并且该卷被视为"已释放"。
   #       在管理员手动回收资源之前,使用该策略其他Pod将无法直接使用。
   #    Delete:
   #       对于支持删除回收策略的卷插件,k8s将删除pv及其对应的数据卷数据。
   #    Recycle:
   #       对于"回收利用"策略官方已弃用。相反,推荐的方法是使用动态资源调配。
   #       如果基础卷插件支持,Recycle回收策略将对卷执行基本清理(rm -rf /thevolume/*),并使其再次可用于新的声明。
   persistentVolumeReclaimPolicy: Retain
   # 声明存储的容量
   capacity:
     storage: 2Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: supershy-linux-pv02
  labels:
    school: supershy
spec:
   accessModes:
   - ReadWriteMany
   nfs:
     path: /supershy/data/kubernetes/pv/linux/pv002
     server: 10.0.0.231
   persistentVolumeReclaimPolicy: Retain
   capacity:
     storage: 5Gi

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: supershy-linux-pv03
  labels:
    school: supershy
spec:
   accessModes:
   - ReadWriteMany
   nfs:
     path: /supershy/data/kubernetes/pv/linux/pv003
     server: 10.0.0.231
   persistentVolumeReclaimPolicy: Retain
   capacity:
     storage: 10Gi
EOF
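
补充参考:apply之前记得先在NFS服务端(10.0.0.231)创建清单中对应的共享子目录,然后创建并查看PV(路径与上面清单保持一致;未被PVC绑定前STATUS应为Available):

# 在NFS服务器上创建三个PV对应的目录
mkdir -pv /supershy/data/kubernetes/pv/linux/pv00{1..3}
# 创建PV并查看状态
kubectl apply -f manual-pv.yaml
kubectl get pv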

16. pvc-persistentvolumeclaims

- 持久卷声明Persistent Volume Claim环境创建(简称"PVC")
	1 编写pvc的资源清单
[root@master231 persistentvolumeclaims]# cat > manual-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: supershy-linux-pvc
spec:
  # 声明要使用的pv
  # volumeName: supershy-linux-pv03
  # 声明资源的访问模式
  accessModes:
  - ReadWriteMany
  # 声明资源的使用量
  resources:
    limits:
       storage: 4Gi
    requests:
       storage: 3Gi
EOF


	2 创建资源
[root@master231 persistentvolumeclaims]# kubectl apply -f manual-pvc.yaml

	3 查看pvc资源
[root@master231 persistentvolumeclaims]# kubectl get pvc
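
补充参考:PVC变为Bound状态后,就可以在Pod中通过persistentVolumeClaim字段引用它。下面是一个最小示例清单(文件名与Pod名称为示例,镜像沿用文中的nginx镜像):

cat > manual-pvc-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo-pod
spec:
  containers:
  - name: web
    image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
    volumeMounts:
    # 将PVC提供的存储挂载到nginx站点目录
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      # 引用上文创建的PVC名称
      claimName: supershy-linux-pvc
EOF
kubectl apply -f manual-pvc-pod.yaml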

17. sc-storageclasses

- 部署nfs动态存储类
	1 k8s组件原生并不支持NFS动态存储
https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner

	2 NFS不提供内部配置器实现动态存储,但可以使用外部配置器。
git clone https://gitee.com/supershy/k8s-external-storage.git

	3 修改api-server属性,添加"--feature-gates=RemoveSelfLink"参数。
[root@master231 storageclasses]# vim /etc/kubernetes/manifests/kube-apiserver.yaml 
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=3000-50000
    - --feature-gates=RemoveSelfLink=false  # 添加该行配置即可。
	...

温馨提示:
	上述操作的K8S版本是1.23.17,在早期的K8S 1.15中则不需要做任何修改。

	当然,有网友说换镜像也可以搞定这个问题。因此镜像尽量不要使用"latest"标签。 
	
	做完上述操作后,保证集群正常在进行下一步。

	4  nfs服务器端创建sc需要共享路径
mkdir -pv /supershy/data/kubernetes/sc

	5 编写资源清单
[root@master231 nfs]# cat class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: supershy-nfs-sc
  annotations:
    # 这里注解说明了这个是默认的storageclass,这个值为true则使用默认的存储类
    # 参考链接:
    #    https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/
    storageclass.kubernetes.io/is-default-class: "true"
# provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
provisioner: supershy/linux
parameters:
  # 注意哈,仅在"reclaimPolicy: Delete"时生效,如果回收策略是"reclaimPolicy: Retain",则无视此参数!
  # 如果设置为false,删除PVC后数据直接删除,不会在存储卷路径保留"archived-*"前缀的目录哟!
  # archiveOnDelete: "false"
  # 如果设置为true,删除PVC后,会在存储卷路径将数据归档为"archived-*"前缀的目录哟
  archiveOnDelete: "true"
# 声明PV回收策略,默认值为Delete
# reclaimPolicy: Retain
reclaimPolicy: Delete
[root@master231 nfs]# 


[root@master231 nfs]# cat deploy-nfs-sc-provisioner.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nfs-sc-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: supershy/linux
        - name: NFS_SERVER
          value: 10.0.0.231
        - name: NFS_PATH
          value: /supershy/data/kubernetes/sc
      volumes:
      - name: nfs-client-root
        nfs:
          server: 10.0.0.231
          path: /supershy/data/kubernetes/sc
[root@master231 nfs]# 


[root@master231 nfs]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@master231 nfs]# 


[root@master231 nfs]# cat test-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: supershy-linux-pvc-sc-demo
spec:
  # 声明使用哪个存储类,使用默认存储类,如果没有默认的存储类,则该属性必须指定,否则会抛出异常:
  #     no persistent volumes available for this claim and no storage class is set
  # storageClassName: supershy-nfs-sc
  accessModes:
  - ReadWriteMany
  resources:
    limits:
       storage: 10M
    requests:
       storage: 20M
[root@master231 nfs]# 


[root@master231 nfs]# cat test-pods.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: harbor.supershy.com/supershy-linux/alpine:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: supershy-linux-pvc-sc-demo
[root@master231 nfs]# 
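
补充参考:上述清单依次apply之后,可以用下面的命令确认动态供给是否生效:PVC应自动变为Bound,并在NFS共享目录下生成对应的子目录和SUCCESS文件(命令为通用验证方式):

kubectl apply -f rbac.yaml -f class.yaml -f deploy-nfs-sc-provisioner.yaml
kubectl apply -f test-pvc.yaml -f test-pods.yaml
# 确认sc存在且为默认存储类,PVC为Bound,PV被自动创建
kubectl get sc,pvc,pv
# 在NFS服务端确认自动生成的子目录及SUCCESS文件
ls -l /supershy/data/kubernetes/sc/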

18. sts-statefulsets

0. 概述

- StatefulSets控制器
	1 StatefulSets概述
以Nginx的为例,当任意一个Nginx挂掉,其处理的逻辑是相同的,即仅需重新创建一个Pod副本即可,这类服务我们称之为无状态服务。

以MySQL主从同步为例,master,slave两个库任意一个库挂掉,其处理逻辑是不相同的,这类服务我们称之为有状态服务。

有状态服务面临的难题:
	(1)启动/停止顺序;
	(2)pod实例的数据是独立存储;
	(3)需要固定的IP地址或者主机名;
	
 
StatefulSet一般用于有状态服务,StatefulSets对于需要满足以下一个或多个需求的应用程序很有价值。
	(1)稳定唯一的网络标识符。
	(2)稳定独立持久的存储。
	(3)有序优雅的部署和缩放。
	(4)有序自动的滚动更新。	
	
	
稳定的网络标识:
	其本质对应的是一个service资源,只不过这个service没有定义VIP,我们称之为headless service,即"无头服务"。
	通过"headless service"来维护Pod的网络身份,会为每个Pod分配一个数字编号并且按照编号顺序部署。
	综上所述,无头服务("headless service")要求满足以下两点:
		(1)将svc资源的clusterIP字段设置None,即"clusterIP: None";
		(2)将sts资源的serviceName字段声明为无头服务的名称;
			
			
独享存储:
	Statefulset的存储卷使用VolumeClaimTemplate创建,称为"存储卷申请模板"。
	当sts资源使用VolumeClaimTemplate创建一个PVC时,同样也会为每个Pod分配并创建唯一的pvc编号,每个pvc绑定对应pv,从而保证每个Pod都有独立的存储。

1. StatefulSets控制器-网络唯一标识之headless案例

- StatefulSets控制器-网络唯一标识之headless案例
		2.1 编写资源清单
cat > 01-statefulset-headless-network.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: supershy-linux-headless
spec:
  ports:
  - port: 80
    name: web
  # 将clusterIP字段设置为None表示为一个无头服务,即svc将不会分配VIP。
  clusterIP: None
  selector:
    app: nginx


---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: supershy-linux-web
spec:
  selector:
    matchLabels:
      app: nginx
  # 声明无头服务    
  serviceName: supershy-linux-headless
  replicas: 3 
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
EOF


		2.2 使用响应式API创建测试Pod
# kubectl run -it dns-test --rm --image=harbor.supershy.com/supershy-linux/alpine:latest -- sh
#
# for i in `seq 0 2`;do ping supershy-linux-web-${i}.supershy-linux-headless.default.svc.supershy.com  -c 3;done

2. StatefulSets控制器-独享存储

-  StatefulSets控制器-独享存储
	1 编写资源清单
cat > 02-statefulset-headless-volumeClaimTemplates.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: supershy-linux-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: supershy-linux-web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: supershy-linux-headless
  replicas: 3 
  # 卷申请模板,会为每个Pod去创建唯一的pvc并与之关联哟!
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      # 声明咱们自定义的动态存储类,即sc资源。
      storageClassName: "supershy-nfs-sc"
      resources:
        requests:
          storage: 2Gi
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Service
metadata:
  name: supershy-linux-sts-svc
spec:
  selector:
     app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF

3. sts的分段更新

statefulSet控制器的分段更新:
	1.编写资源清单
[root@master231 statefulsets]# cat 03-statefuleset-updateStrategy-partition.yaml 
apiVersion: v1
kind: Service
metadata:
  name: sts-headless
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: web

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: supershy-sts-web
spec:
  # 指定sts资源的更新策略
  updateStrategy:
    # 配置滚动更新
    rollingUpdate:
      # 当编号小于3时不更新。
      partition: 3
  selector:
    matchLabels:
      app: web
  serviceName: sts-headless
  replicas: 5
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "supershy-nfs-sc"
      resources:
        requests:
          storage: 2Gi
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: c1
        # image: harbor.supershy.com/supershy-update/web:v1
        image: harbor.supershy.com/supershy-update/web:v2
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Service
metadata:
  name: supershy-sts-svc
spec:
  selector:
     app: web
  ports:
  - port: 80
    targetPort: 80
[root@master231 statefulsets]# 
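
补充参考:分段更新的效果可以这样验证——先确认只有编号大于等于partition(3)的Pod被更新,灰度验证通过后再把partition调到0完成全量更新(资源名称沿用上文清单):

# 观察滚动更新进度,只有supershy-sts-web-3、supershy-sts-web-4会先被更新
kubectl rollout status sts supershy-sts-web
kubectl get pods -l app=web -o wide
# 灰度验证通过后,将partition改为0,剩余Pod继续更新
kubectl patch sts supershy-sts-web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'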

附加组件-helm

0.概述

- helm概述
	1 什么是helm
helm是k8s资源清单的管理工具,它就像Linux下的包管理器,比如centos的yum,ubuntu的apt。

helm有以下几个术语:
	helm:
		命令行工具,主要用于k8s的chart的创建,打包,发布和管理。
	chart:
		应用描述,一系列用于描述k8s资源相关文件的集合。
	release: 
		基于chart的部署实体,一个chart被helm运行后会生成一个release实体。
		这个release实体会在k8s集群中创建对应的资源对象。

	2 为什么需要helm
部署服务面临很多的挑战:
	(1)资源清单过多,不容易管理,如何将这些资源清单当成一个整体的服务进行管理?
		- deploy,ds,rs,...
		- cm,secret
		- pv,pvc,sc
		- ...
	(2)如何实现应用的版本管理,比如发布,回滚到指定版本?
	(3)如何实现资源清单文件到高效复用?
	...

	3 helm的版本
Helm目前有两个版本,即V2和V3。
    
2019年11月Helm团队发布V3版本,相比v2版本最大的变化是删除了Tiller,并对大部分代码进行了重构。

helm v3相比helm v2还做了很多优化,比如不同命名空间资源同名的情况在v3版本是允许的。
	生产环境中建议大家使用v3版本,不仅仅是因为它功能较强,而且相对来说也更加稳定。


官方地址:
	https://helm.sh/docs/intro/install/

github地址:
	https://github.com/helm/helm/releases

1. 部署

	1 下载helm软件包
[root@master231 /supershy/manifests/helm]# wget https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz

	2 解压软件包
[root@master231 /supershy/manifests/helm]# tar xf helm-v3.15.4-linux-amd64.tar.gz -C /usr/local/bin/  linux-amd64/helm --strip-components=1

	3 添加自动补全功能-新手必备
# 对新打开的会话生效,开启自动补全功能
[root@master231 /supershy/manifests/helm]# helm completion bash > /etc/bash_completion.d/helm
[root@master231 /supershy/manifests/helm]# source /etc/bash_completion.d/helm

	4 查看帮助信息
[root@master231 helm]# helm  --help
...
completion:
    生成命令补全的功能。使用"source <(helm completion bash)"

create:
    创建一个chart并指定名称。

dependency:
    管理chart依赖关系。

env:
    查看当前客户端的helm环境变量信息。

get:
    下载指定版本的扩展信息。

help:
    查看帮助信息。

history:
    获取发布历史记录。

install:
    安装chart。

lint:
    检查chart中可能出现的问题。

list:
    列出releases信息。

package:
    将chart目录打包到chart存档文件中。

plugin:
    安装、列出或卸载Helm插件。

pull:
    从存储库下载chart并将其解包到本地目录。

repo:
    添加、列出、删除、更新和索引chart存储库。

rollback:
    将版本回滚到以前的版本。

search:
    在chart中搜索关键字。

show:
    显示chart详细信息。

status:
    显示已有的"RELEASE_NAME"状态。

template:
    本地渲染模板。

test:
    运行版本测试。

uninstall:
    卸载版本。

upgrade:
    升级版本。

verify:
    验证给定路径上的chart是否已签名且有效

version:
    查看客户端版本。

2. helm两种升级服务方式

- helm两种升级方式
	1 基于文件的方式进行升级
		1.1 创建chart
[root@master231 helm]# helm install myweb supershy-linux
NAME: myweb
LAST DEPLOYED: Fri Jan 19 15:58:58 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
welcome to use supershy web system ...

老男孩教育欢迎您,官网地址: https://www.supershy.com

本次您部署的应用版本是[harbor.supershy.com/supershy-update/apps:v1]

您的所属学校 --->【supershy】
您的所属班级 --->【linux89】

Successful deploy web:v0.1 !!!
[root@master231 helm]#

[root@master231 helm]# helm list
NAME 	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART             	APP VERSION
myweb	default  	1       	2024-01-19 15:58:58.125656564 +0800 CST	deployed	supershy-linux-v1	1.0        
[root@master231 helm]# 
[root@master231 helm]# kubectl get pods,svc  -o wide



		1.2 修改chart的values.yaml文件,指定安装v2版本
[root@master231 helm]# grep  tag: supershy-linux/values.yaml 
    tag: v1
[root@master231 helm]# 
[root@master231 helm]# sed -i '/tag:/s#v1#v2#' supershy-linux/values.yaml 
[root@master231 helm]# 
[root@master231 helm]# grep  tag: supershy-linux/values.yaml 
    tag: v2
[root@master231 helm]# 



		1.3 基于文件进行升级
[root@master231 helm]# helm upgrade myweb supershy-linux -f supershy-linux/values.yaml 
Release "myweb" has been upgraded. Happy Helming!
NAME: myweb
LAST DEPLOYED: Fri Jan 19 16:02:50 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
welcome to use supershy web system ...

老男孩教育欢迎您,官网地址: https://www.supershy.com

本次您部署的应用版本是[harbor.supershy.com/supershy-update/apps:v2]

您的所属学校 --->【supershy】
您的所属班级 --->【linux89】

Successful deploy web:v0.1 !!!
[root@master231 helm]# 
[root@master231 helm]# helm list
NAME 	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART             	APP VERSION
myweb	default  	2       	2024-01-19 16:02:50.454746826 +0800 CST	deployed	supershy-linux-v1	1.0        
[root@master231 helm]# 
[root@master231 helm]# 
[root@master231 helm]# kubectl get pods,svc  -o wide

[root@master231 helm]# curl 10.200.162.112
<h1>apps v2</h1>


-----------------------
	2 基于命令行的进行升级
		2.1 基于命令行的方式升级,注意变量名称来自于"values.yaml"文件哟!
[root@master231 helm]# helm list
NAME 	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART             	APP VERSION
myweb	default  	2       	2024-01-19 16:02:50.454746826 +0800 CST	deployed	supershy-linux-v1	1.0        
[root@master231 helm]# 
[root@master231 helm]# helm history myweb
REVISION	UPDATED                 	STATUS    	CHART             	APP VERSION	DESCRIPTION     
1       	Fri Jan 19 15:58:58 2024	superseded	supershy-linux-v1	1.0        	Install complete
2       	Fri Jan 19 16:02:50 2024	deployed  	supershy-linux-v1	1.0        	Upgrade complete
[root@master231 helm]# 
[root@master231 helm]# cat supershy-linux/values.yaml 
# 指定Pod副本数量
myReplicaNumbers: 5

# 配置的是镜像相关属性
image:
  apps:
    repository: harbor.supershy.com/supershy-update/apps
    tag: v2

labels:
  school: supershy
  class: linux89

name:  web
version: v0.1

office: "https://www.supershy.com"
[root@master231 helm]# 
[root@master231 helm]# helm upgrade --set image.apps.tag=v3,myReplicaNumbers=3  myweb  supershy-linux
Release "myweb" has been upgraded. Happy Helming!
NAME: myweb
LAST DEPLOYED: Fri Jan 19 16:06:27 2024
NAMESPACE: default
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
welcome to use supershy web system ...

老男孩教育欢迎您,官网地址: https://www.supershy.com

本次您部署的应用版本是[harbor.supershy.com/supershy-update/apps:v3]

您的所属学校 --->【supershy】
您的所属班级 --->【linux89】

Successful deploy web:v0.1 !!!
[root@master231 helm]# 


		2.2 如下所示,查看myweb的RELEASE的发型版本历史。
[root@master231 helm]# helm list
NAME 	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART             	APP VERSION
myweb	default  	3       	2024-01-19 16:06:27.085849546 +0800 CST	deployed	supershy-linux-v1	1.0        
[root@master231 helm]# 
[root@master231 helm]# helm history myweb 
REVISION	UPDATED                 	STATUS    	CHART             	APP VERSION	DESCRIPTION     
1       	Fri Jan 19 15:58:58 2024	superseded	supershy-linux-v1	1.0        	Install complete
2       	Fri Jan 19 16:02:50 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
3       	Fri Jan 19 16:06:27 2024	deployed  	supershy-linux-v1	1.0        	Upgrade complete
[root@master231 helm]# 
[root@master231 helm]# 
[root@master231 helm]# kubectl get pods,svc
NAME                                        READY   STATUS    RESTARTS   AGE
pod/supershy-deploy-apps-96fb5dbfd-9wcrw   1/1     Running   0          118s
pod/supershy-deploy-apps-96fb5dbfd-j87j8   1/1     Running   0          2m
pod/supershy-deploy-apps-96fb5dbfd-qf4qp   1/1     Running   0          116s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes           ClusterIP   10.200.0.1       <none>        443/TCP   5h5m
service/supershy-svc-apps   ClusterIP   10.200.162.112   <none>        80/TCP    9m29s
[root@master231 helm]# 
[root@master231 helm]# curl 10.200.162.112  
<h1>apps v3</h1>
[root@master231 helm]# 

3. helm两种版本回滚方式

-  helm回滚的两种方式
	1 不指定发行版,默认回滚到上一个版本
[root@master231 helm]# helm list
NAME 	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART             	APP VERSION
myweb	default  	3       	2024-01-19 16:06:27.085849546 +0800 CST	deployed	supershy-linux-v1	1.0        
[root@master231 helm]# 
[root@master231 helm]# helm history myweb 
REVISION	UPDATED                 	STATUS    	CHART             	APP VERSION	DESCRIPTION     
1       	Fri Jan 19 15:58:58 2024	superseded	supershy-linux-v1	1.0        	Install complete
2       	Fri Jan 19 16:02:50 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
3       	Fri Jan 19 16:06:27 2024	deployed  	supershy-linux-v1	1.0        	Upgrade complete
[root@master231 helm]# 
[root@master231 helm]# 
[root@master231 helm]# helm rollback myweb 
Rollback was a success! Happy Helming!
[root@master231 helm]# 
[root@master231 helm]# helm history myweb 
REVISION	UPDATED                 	STATUS    	CHART             	APP VERSION	DESCRIPTION     
1       	Fri Jan 19 15:58:58 2024	superseded	supershy-linux-v1	1.0        	Install complete
2       	Fri Jan 19 16:02:50 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
3       	Fri Jan 19 16:06:27 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
4       	Fri Jan 19 16:12:56 2024	deployed  	supershy-linux-v1	1.0        	Rollback to 2   
[root@master231 helm]# 
[root@master231 helm]# kubectl get po,svc 
NAME                                         READY   STATUS    RESTARTS   AGE
pod/supershy-deploy-apps-59b6447c69-4hk87   1/1     Running   0          37s
pod/supershy-deploy-apps-59b6447c69-4jd65   1/1     Running   0          37s
pod/supershy-deploy-apps-59b6447c69-cgjx8   1/1     Running   0          36s
pod/supershy-deploy-apps-59b6447c69-fbft7   1/1     Running   0          37s
pod/supershy-deploy-apps-59b6447c69-ttf7n   1/1     Running   0          35s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes           ClusterIP   10.200.0.1       <none>        443/TCP   5h11m
service/supershy-svc-apps   ClusterIP   10.200.162.112   <none>        80/TCP    14m
[root@master231 helm]# 
[root@master231 helm]# curl 10.200.162.112
<h1>apps v2</h1>
[root@master231 helm]# 


	2 指定发行版,回滚到指定版本
[root@master231 helm]# helm history myweb 
REVISION	UPDATED                 	STATUS    	CHART             	APP VERSION	DESCRIPTION     
1       	Fri Jan 19 15:58:58 2024	superseded	supershy-linux-v1	1.0        	Install complete
2       	Fri Jan 19 16:02:50 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
3       	Fri Jan 19 16:06:27 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
4       	Fri Jan 19 16:12:56 2024	deployed  	supershy-linux-v1	1.0        	Rollback to 2   
[root@master231 helm]# 
[root@master231 helm]# helm rollback myweb 1
Rollback was a success! Happy Helming!
[root@master231 helm]# 
[root@master231 helm]# helm history myweb 
REVISION	UPDATED                 	STATUS    	CHART             	APP VERSION	DESCRIPTION     
1       	Fri Jan 19 15:58:58 2024	superseded	supershy-linux-v1	1.0        	Install complete
2       	Fri Jan 19 16:02:50 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
3       	Fri Jan 19 16:06:27 2024	superseded	supershy-linux-v1	1.0        	Upgrade complete
4       	Fri Jan 19 16:12:56 2024	superseded	supershy-linux-v1	1.0        	Rollback to 2   
5       	Fri Jan 19 16:15:41 2024	deployed  	supershy-linux-v1	1.0        	Rollback to 1   
[root@master231 helm]# 
[root@master231 helm]# kubectl get pods,svc
NAME                                        READY   STATUS    RESTARTS   AGE
pod/supershy-deploy-apps-6f9bd8b6d-9w64s   1/1     Running   0          11s
pod/supershy-deploy-apps-6f9bd8b6d-blcvj   1/1     Running   0          10s
pod/supershy-deploy-apps-6f9bd8b6d-bnhhk   1/1     Running   0          11s
pod/supershy-deploy-apps-6f9bd8b6d-cfqwm   1/1     Running   0          11s
pod/supershy-deploy-apps-6f9bd8b6d-slj57   1/1     Running   0          9s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes           ClusterIP   10.200.0.1       <none>        443/TCP   5h13m
service/supershy-svc-apps   ClusterIP   10.200.162.112   <none>        80/TCP    16m
[root@master231 helm]# curl 10.200.162.112
<h1>apps v1</h1>
[root@master231 helm]# 

4. helm的公有仓库添加

- helm的公有仓库添加
	1 主流的Chart仓库概述
互联网公开Chart仓库,可以直接使用他们制作好的包:
	微软仓库:
		http://mirror.azure.cn/kubernetes/charts/

	阿里云仓库:
		https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
		
		

	2 添加仓库的方式
helm repo list
		查看现有的仓库信息,默认情况下是没有任何仓库地址的

helm repo add azure http://mirror.azure.cn/kubernetes/charts/ 
		注意哈,此处我们将微软云的仓库添加到咱们的helm客户端仓库啦~

helm repo update  
		我们也可以更新仓库信息哟~


[root@master231 helm]# helm repo list
Error: no repositories to show
[root@master231 helm]# helm repo add supershy-azure http://mirror.azure.cn/kubernetes/charts/
"supershy-azure" has been added to your repositories
[root@master231 helm]# 
[root@master231 helm]# helm repo list
NAME           	URL                                      
supershy-azure	http://mirror.azure.cn/kubernetes/charts/
[root@master231 helm]# 
[root@master231 helm]# helm repo add supershy-aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"supershy-aliyun" has been added to your repositories
[root@master231 helm]# 
[root@master231 helm]# helm repo list
NAME            	URL                                                   
supershy-azure 	http://mirror.azure.cn/kubernetes/charts/             
supershy-aliyun	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@master231 helm]# 


	3 搜索我们关心的chart
[root@master231 helm]# helm search repo mysql
NAME                                     	CHART VERSION	APP VERSION	DESCRIPTION                                       
supershy-azure/mysql                    	1.6.9        	5.7.30     	DEPRECATED - Fast, reliable, scalable, and easy...
supershy-azure/mysqldump                	2.6.2        	2.4.1      	DEPRECATED! - A Helm chart to help backup MySQL...
supershy-azure/prometheus-mysql-exporter	0.7.1        	v0.11.0    	DEPRECATED A Helm chart for prometheus mysql ex...
supershy-azure/percona                  	1.2.3        	5.7.26     	DEPRECATED - free, fully compatible, enhanced, ...
supershy-azure/percona-xtradb-cluster   	1.0.8        	5.7.19     	DEPRECATED - free, fully compatible, enhanced, ...
supershy-azure/phpmyadmin               	4.3.5        	5.0.1      	DEPRECATED phpMyAdmin is an mysql administratio...
supershy-azure/gcloud-sqlproxy          	0.6.1        	1.11       	DEPRECATED Google Cloud SQL Proxy                 
supershy-azure/mariadb                  	7.3.14       	10.3.22    	DEPRECATED Fast, reliable, scalable, and easy t...
[root@master231 helm]# 



	4 拉取第三方的chart
		4.1 搜索chart
[root@master231 helm]# helm search repo elasticsearch-exporter
NAME                                   	CHART VERSION	APP VERSION	DESCRIPTION                                       
supershy-aliyun/elasticsearch-exporter	0.1.2        	1.0.2      	Elasticsearch stats exporter for Prometheus       
supershy-azure/elasticsearch-exporter 	3.7.1        	1.1.0      	DEPRECATED Elasticsearch stats exporter for Pro...
[root@master231 helm]# 
[root@master231 helm]# 

		4.2 下载chart
[root@master231 helm]# helm pull supershy-aliyun/elasticsearch-exporter
	

		4.3 解压chart
[root@master231 helm]# tar xf elasticsearch-exporter-0.1.2.tgz


		4.4 部署chart
[root@master231 helm]# helm install supershy-es-exporter elasticsearch-exporter


提示下:
	直接部署会报错,需要把chart中deployment资源的apiVersion改为"apps/v1",默认是已废弃的"apps/v1beta2"

		4.5 测试服务是否部署成功
curl `kubectl get svc/supershy-es-exporter-elasticsearch-exporter  -o custom-columns=clusterIP:.spec.clusterIP  | tail -1`:9108/metrics

5. 模板

-  helm模板
	- helm部署及版本选择
	- helm的基础管理:
		- helm install RELEASE_NAME CHART [-n NAMESPACE] [--dry-run] [--kubeconfig] [...]
	- helm的工作逻辑:
		- values.yaml
		- Chart.yaml
		- templates
			- NOTES.txt
			- _helpers.tpl
	- 模板语法(综合示例见本节末尾):
		- 常用函数:
			- quote
			- toYaml
			- indent | nindent
			- upper|title|lower
		- 流程控制
			- if ... else if .. else ...end
			- range ... end 
		- 作用域和变量
			- with ... end 
		
helm函数模板大全:
	https://helm.sh/zh/docs/chart_template_guide/function_list/
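
下面给出一个最小化的模板片段(仅为示意,假设values.yaml中定义了labels、replicaCount、image、env、nodeSelector等字段,字段名均为假设),演示quote、toYaml、nindent、if、range、with的常见用法:

# templates/deployment.yaml(示意片段)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
  labels:
    {{- toYaml .Values.labels | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name | quote }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name | quote }}
    spec:
      containers:
      - name: web
        image: {{ .Values.image.repository }}:{{ .Values.image.tag | default "latest" }}
        {{- if .Values.env }}
        env:
        {{- range $k, $v := .Values.env }}
        - name: {{ $k | upper }}
          value: {{ $v | quote }}
        {{- end }}
        {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}

渲染验证可以在chart目录内执行: helm install myweb . --dry-run --debug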



课堂笔记:
	https://gitee.com/jasonyin2020/cloud-computing-stack.git

附加组件-ingress

0. 概述

使用svc的NodePort类型暴露端口存在以下问题:
	(1)随着服务的增多,占用端口会越来越多;
	(2)当多个服务都想复用同一个端口(比如80或443)时就力不从心了,因为svc工作在4层,无法基于下面这些域名做7层的区分;
		www.supershy.com
		www.laonanhai.com
		www.yitiantain.com

	
ingress:
	k8s中的抽象资源,给管理员提供暴露服务的入口定义方法,换句话说,就是编写规则。
	
Ingress 翻译器:Ingress Controller
	根据ingress生成具体路由规则,并借助svc实现Pod的负载均衡。
	典型的Ingress Controller实现有Kubernetes社区维护的ingress-nginx,以及第三方的traefik等。
	
如果非要拿nginx对比的话,那么Ingress就对应的是nginx.conf,而Ingress Controller对应的是nginx服务本身。

1. helm部署ingress

- 基于helm安装Ingress-nginx
  https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.2.5/ingress-nginx-4.2.5.tgz
  
	1.添加Ingress-nginx的官方仓库
[root@master231 ingress-nginx]# helm repo add supershy-nginx  https://kubernetes.github.io/ingress-nginx
"supershy-nginx" has been added to your repositories
[root@master231 ingress-nginx]# 
[root@master231 ingress-nginx]# helm repo list
NAME            	URL                                                   
azure           	http://mirror.azure.cn/kubernetes/charts/             
supershy-aliyun	https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
supershy-nginx 	https://kubernetes.github.io/ingress-nginx            
[root@master231 ingress-nginx]# 

	2.更新软件源
[root@master231 ingress-nginx]# helm repo update 
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "supershy-aliyun" chart repository
...Successfully got an update from the "supershy-nginx" chart repository
...Successfully got an update from the "azure" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@master231 ingress-nginx]# 



	3.下载指定版本的ingress-nginx软件包
[root@master231 ~]# helm search repo ingress-nginx
NAME                         	CHART VERSION	APP VERSION	DESCRIPTION                                       
supershy-nginx/ingress-nginx	4.9.0        	1.9.5      	Ingress controller for Kubernetes using NGINX a...

[root@master231 ~]# 
	# https://github.com/kubernetes/ingress-nginx 
	# 选择与k8s版本相对应的ingress-nginx版本
[root@master231 ~]# helm pull supershy-nginx/ingress-nginx --version 4.2.5


	4.解压软件包
[root@master231 ~]# tar xf ingress-nginx-4.2.5.tgz 


	5.修改配置文件
[root@master231 ~]# sed -i '/registry:/s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com#g' ingress-nginx/values.yaml
[root@master231 ~]# sed -i 's#ingress-nginx/controller#supershy-k8s/ingress-nginx#' ingress-nginx/values.yaml 
[root@master231 ~]# sed -i 's#ingress-nginx/kube-webhook-certgen#supershy-k8s/ingress-nginx#' ingress-nginx/values.yaml
[root@master231 ~]# sed -i 's#v1.3.0#kube-webhook-certgen-v1.3.0#' ingress-nginx/values.yaml
[root@master231 ~]# sed -ri '/digest:/s@^@#@' ingress-nginx/values.yaml
[root@master231 ~]# sed -i '/hostNetwork:/s#false#true#' ingress-nginx/values.yaml
[root@master231 ~]# sed -i  '/dnsPolicy/s#ClusterFirst#ClusterFirstWithHostNet#' ingress-nginx/values.yaml
[root@master231 ~]# sed -i '/kind/s#Deployment#DaemonSet#' ingress-nginx/values.yaml 
[root@master231 ~]# sed -i '/default:/s#false#true#'  ingress-nginx/values.yaml


温馨提示:
	- 修改镜像为国内的镜像地址,否则默认的海外镜像无法直接下载;
	- 如果使用我提供的镜像需要将digest注释掉,因为我的镜像是从海外同步后重新构建过的,其digest不一致;
	- 建议使用宿主机网络,转发效率最高,但使用宿主机网络后,DNS解析策略会直接使用宿主机的解析;
	- 如果还想要继续使用K8S内部的svc名称解析,则需要将默认的"ClusterFirst"的DNS解析策略修改为"ClusterFirstWithHostNet";
	- 建议将Deployment类型改为DaemonSet类型,可以确保在各个节点部署一个Pod,也可以修改"nodeSelector"字段让其调度到指定节点;
	- 如果仅有一个ingress controller,可以考虑将"ingressClassResource.default"设置为true,表示让其成为默认的ingress controller;
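
上述sed调整完成后,values.yaml中与本次部署相关的关键字段大致如下(仅为核对用的片段示意,镜像仓库与tag以你自己实际的文件为准):

controller:
  image:
    registry: registry.cn-hangzhou.aliyuncs.com   # 已替换为国内镜像仓库
    image: supershy-k8s/ingress-nginx
    tag: "v1.3.1"
    # digest: ...                                 # 已注释,避免与重新构建的镜像摘要不一致
  dnsPolicy: ClusterFirstWithHostNet              # 配合hostNetwork使用
  hostNetwork: true
  kind: DaemonSet                                 # 每个节点各跑一个controller Pod
  ingressClassResource:
    name: nginx
    enabled: true
    default: true                                 # 设为默认的ingress controller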




	6.创建Ingress专用的名称空间
[root@master231 ~]# kubectl create ns supershy-ingress
namespace/supershy-ingress created
[root@master231 ~]# 


	7.使用helm一键安装Ingress
[root@master231 ~]# helm install myingress ingress-nginx -n supershy-ingress 
NAME: myingress
LAST DEPLOYED: Fri Jan 19 20:43:01 2024
NAMESPACE: supershy-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace supershy-ingress get services -o wide -w myingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
[root@master231 ~]# 

	8.查看创建的资源
[root@master231 manifests]# kubectl get all -n supershy-ingress  -o wide
NAME                                           READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
pod/myingress-ingress-nginx-controller-2skwt   1/1     Running   0          41s   10.0.0.233   worker233   <none>           <none>
pod/myingress-ingress-nginx-controller-z6slc   1/1     Running   0          41s   10.0.0.232   worker232   <none>           <none>

NAME                                                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
service/myingress-ingress-nginx-controller             LoadBalancer   10.200.239.174   <pending>     80:31882/TCP,443:31490/TCP   41s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=myingress,app.kubernetes.io/name=ingress-nginx
service/myingress-ingress-nginx-controller-admission   ClusterIP      10.200.191.152   <none>        443/TCP                      41s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=myingress,app.kubernetes.io/name=ingress-nginx

NAME                                                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                                                                   SELECTOR
daemonset.apps/myingress-ingress-nginx-controller   2         2         2       2            2           kubernetes.io/os=linux   41s   controller   registry.cn-hangzhou.aliyuncs.com/supershy-k8s/ingress-nginx:v1.3.1   app.kubernetes.io/component=controller,app.kubernetes.io/instance=myingress,app.kubernetes.io/name=ingress-nginx
[root@master231 manifests]# 



	9.查看ingressclasses资源信息(就是咱们部署的Ingress controller)
[root@master231 ~]# kubectl get ingressclasses nginx 
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       78s
[root@master231 ~]# 
[root@master231 ~]# 
[root@master231 ~]# kubectl describe ingressclasses nginx 
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=myingress
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.3.1
              helm.sh/chart=ingress-nginx-4.2.5
Annotations:  ingressclass.kubernetes.io/is-default-class: true
              meta.helm.sh/release-name: myingress
              meta.helm.sh/release-namespace: supershy-ingress
Controller:   k8s.io/ingress-nginx
Events:       <none>
[root@master231 ~]# 

2. 编写单主机ingress资源清单

	1.创建测试环境
[root@master231 supershy]# cat deploy-apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apps-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:v1
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apps
spec:
  selector:
    apps: v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 supershy]# 
[root@master231 supershy]# kubectl apply -f deploy-apps.yaml 

	2.创建Ingress资源
[root@master231 supershy]# cat 01-apps-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  # 对应nginx.conf中的域名配置,配置hosts解析后即可通过该域名访问
  - host: apps.supershy.com
    http:
      paths:
      - backend:
          # 配置对应pod的svc资源
          service:
            name: svc-apps
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
[root@master231 supershy]# 
[root@master231 supershy]# kubectl  apply -f 01-apps-ingress.yaml 
ingress.networking.k8s.io/apps-ingress created
[root@master231 supershy]# 
[root@master231 supershy]# kubectl get ing
NAME           CLASS   HOSTS                  ADDRESS   PORTS   AGE
apps-ingress   nginx   apps.supershy.com             80      71s


报错信息:
[root@master231 01-ingress]# kubectl apply -f 01-apps-ingress.yaml 
Error from server (InternalError): error when creating "01-apps-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://myingress-ingress-nginx-controller-admission.supershy-ingress.svc:443/networking/v1/ingresses?timeout=10s": x509: certificate is not valid for any names, but wanted to match myingress-ingress-nginx-controller-admission.supershy-ingress.svc
[root@master231 01-ingress]# 
[root@master231 01-ingress]# 

解决方案
	1.禁用admissionWebhooks功能
[root@master231 ingress-nginx]# vim ingress-nginx/values.yaml 

...
  admissionWebhooks:
    ...
	# 默认启用了admissionWebhooks功能,我们可以将其禁用。
    # enabled: true
    enabled: false


	2.重新安装应用
[root@master231 ingress-nginx]# helm -n supershy-ingress uninstall myingress 
release "myingress" uninstalled
[root@master231 ingress-nginx]# 
[root@master231 ingress-nginx]# helm install myingress ingress-nginx -n supershy-ingress 
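
另一种思路是不重装,直接把ingress-nginx的准入webhook配置删掉(下面的名称为示意,实际名称以查询结果为准):
[root@master231 ingress-nginx]# kubectl get validatingwebhookconfigurations
[root@master231 ingress-nginx]# kubectl delete validatingwebhookconfigurations myingress-ingress-nginx-admission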

3. 多主机ingress资源清单

- 多主机案例实战
[root@master231 02-ingress-multiple]# cat deploy-apps.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apps-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:v1
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apps
spec:
  selector:
    apps: v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apps-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v2
  template:
    metadata:
      labels:
        apps: v2
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:v2
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apps-v2
spec:
  selector:
    apps: v2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 02-ingress-multiple]# 


[root@master231 02-ingress-multiple]# cat 01-apps-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  - host: v1.supershy.com
    http:
      paths:
      - backend:
          service:
            name: svc-apps
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
  - host: v2.supershy.com
    http:
      paths:
      - backend:
          service:
            name: svc-apps-v2
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
[root@master231 02-ingress-multiple]# 

4. 基于资源注解实现域名跳转

参考链接:
https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md

- Ingress Nginx实现域名重定向
[root@master231 03-redirect]# cat 01-apps-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-redirect
  annotations:
    nginx.ingress.kubernetes.io/permanent-redirect: https://www.supershy.com/
spec:
  rules:
  - host: blog.supershy.com
    http:
      paths:
      - backend:
          service:
            name: svc-apps
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
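
验证示意(10.0.0.233为部署了ingress controller且启用hostNetwork的节点IP,请按实际环境替换;也可先配置hosts解析后直接访问域名):
[root@master231 03-redirect]# curl -i -H "Host: blog.supershy.com" http://10.0.0.233/
# 预期返回301,且响应头Location指向 https://www.supershy.com/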

5.基于资源注解跳转手机移动端

- 部署pc端测试
	1.创建deploy,svc资源
[root@master231 supershy]# cat 03-deploy-apple.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: apple
  template:
    metadata:
      labels:
        apps: apple
    spec:
      containers:
      - name: apple
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:apple
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apple
spec:
  selector:
    apps: apple
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 supershy]# 


	2.创建Ingress资源暴露pc端
[root@master231 supershy]# cat 04-apple-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apple
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
        set $agentflag 0;

        if ($http_user_agent ~* "(Mobile)" ){
          set $agentflag 1;
        }

        if ( $agentflag = 1 ) {
          return 301 http://m.supershy.com;
        }

spec:
  rules:
  - host: pc.supershy.com
    http:
      paths:
      - backend:
          service:
            name: svc-apple
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
[root@master231 supershy]# 


参考链接:
    https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#server-snippet
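
验证示意(10.0.0.233为ingress controller所在节点IP,按实际环境替换),用不同的User-Agent对比效果:
# PC端UA,预期正常返回apple页面
curl -H "Host: pc.supershy.com" http://10.0.0.233/
# 带"Mobile"关键字的UA,预期返回301,跳转到 http://m.supershy.com
curl -i -A "Mozilla/5.0 (iPhone; Mobile)" -H "Host: pc.supershy.com" http://10.0.0.233/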

6. ingress nginx的基本认证

- Ingress Nginx基本认证
1.使用htpasswd工具创建生成nginx认证用户
	1.安装htpasswd工具
[root@master231 supershy]# yum -y install httpd 


	2.使用htpasswd工具生成测试用户名和密码
[root@master231 supershy]# htpasswd -c auth jasonyin
New password: 
Re-type new password: 
Adding password for user jasonyin
[root@master231 supershy]# 
[root@master231 supershy]# cat auth 
jasonyin:$apr1$v.iw5HUE$n7xcqnT3Aj23qIK0vurGU1
[root@master231 supershy]# 


	3.将创建的密码文件用secrets资源存储
[root@master231 supershy]# kubectl create secret generic nginx-basic-auth --from-file=auth 
secret/nginx-basic-auth created
[root@master231 supershy]# 
[root@master231 supershy]# kubectl get secrets nginx-basic-auth 
NAME               TYPE     DATA   AGE
nginx-basic-auth   Opaque   1      12s
[root@master231 supershy]# 


	4.创建Ingress用于认证信息
[root@master231 05-nginx-basic-auth]# cat ingress-basic-auth.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-basic-auth
  annotations:
    # 登录的提示信息
    nginx.ingress.kubernetes.io/auth-realm: Please Input Your Username and Password
    # 对应认证信息,也就是我们创建的secrets资源名称,里面保存了我们创建的有效用户
    nginx.ingress.kubernetes.io/auth-secret: nginx-basic-auth
    # 指定认证类型
    nginx.ingress.kubernetes.io/auth-type: basic
spec:
  rules:
  - host: auth.supershy.com
    http:
      paths:
      - backend:
          service:
            name: svc-apple
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
[root@master231 05-nginx-basic-auth]# 
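
验证示意(10.0.0.233为ingress controller所在节点IP,按实际环境替换):
# 不携带认证信息,预期返回401
curl -i -H "Host: auth.supershy.com" http://10.0.0.233/
# 携带上面htpasswd创建的用户和密码,预期返回200
curl -u jasonyin:密码 -i -H "Host: auth.supershy.com" http://10.0.0.233/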

7. ingress nginx 前后端分离

- Ingress Nginx实现前后端分离
	1.创建测试服务
[root@master231 supershy]# cat deploy-apple.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: apple
  template:
    metadata:
      labels:
        apps: apple
    spec:
      containers:
      - name: apple
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:apple
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apple
spec:
  selector:
    apps: apple
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 supershy]# 


	2.编写Ingress规则实现rewrite
[root@master231 supershy]# cat ingress-rewrite.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rewrite
  annotations:
    # 这条注解的意思是把请求路径改写为下面path中第二个分组"(.*)"匹配到的内容,
    # 即后端收到请求时,"/api(/|$)"这段前缀已经被去掉了.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: www.supershy.com
    http:
      paths:
      - backend:
          service:
            name: svc-apple
            port:
              number: 80
        # 注意,这里用到了2个分组,小括号代表分组,共计2个小括号,
        # 上面的注解中"rewrite-target"使用到第二个小括号的参数。
        path: /api(/|$)(.*)
        pathType: ImplementationSpecific
[root@master231 supershy]# 
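
验证示意(10.0.0.233为ingress controller所在节点IP,按实际环境替换),可以观察rewrite前后路径的变化:
# 请求/api/index.html,经rewrite后后端svc-apple实际收到的是/index.html
curl -H "Host: www.supershy.com" http://10.0.0.233/api/index.html
# 请求/api/,第二个分组匹配为空,后端收到的是/
curl -H "Host: www.supershy.com" http://10.0.0.233/api/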

8. 基于https证书访问域名

- Ingress Nginx https
1.生成自建证书
	1.生成证书文件
[root@master231 supershy]# openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=www.supershy.com"


	2.将证书文件以secrets形式存储
[root@master231 supershy]# kubectl create secret tls ca-secret --cert=tls.crt --key=tls.key 
secret/ca-secret created
[root@master231 supershy]# 
[root@master231 supershy]# kubectl get secrets ca-secret 
NAME        TYPE                DATA   AGE
ca-secret   kubernetes.io/tls   2      84s
[root@master231 supershy]# 


2.部署测试服务
[root@master231 supershy]# cat deploy-apple.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: apple
  template:
    metadata:
      labels:
        apps: apple
    spec:
      containers:
      - name: apple
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:apple
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apple
spec:
  selector:
    apps: apple
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 supershy]# 

	3.配置Ingress添加TLS证书
[root@master231 07-ingress-https]# cat ingress-https.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
  # 如果指定了下面的"ingressClassName"参数,就不需要再用注解重复声明了。
  # 如果你的K8S版本低于1.22,则改用注解的方式指定ingress class即可。
  #annotations:
  #  kubernetes.io/ingress.class: "nginx"
spec:
  # 指定Ingress controller,要求你的K8S 1.22+
  ingressClassName: nginx
  rules:
  - host: www.supershy.com
    http:
      paths:
      - backend:
          service:
            name: svc-apple
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
  # 配置https证书
  tls:
  - hosts:
    - www.supershy.com
    secretName: ca-secret
[root@master231 07-ingress-https]# 
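
	4.验证https访问(示意:--resolve把域名解析到ingress controller所在节点IP 10.0.0.233,自签证书需加-k跳过校验)
[root@master231 07-ingress-https]# curl -k --resolve www.supershy.com:443:10.0.0.233 https://www.supershy.com/
[root@master231 07-ingress-https]# openssl s_client -connect 10.0.0.233:443 -servername www.supershy.com </dev/null 2>/dev/null | openssl x509 -noout -subject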

9. helm部署traefik

- 基于helm工具部署traefik案例
	1 添加helm仓库
[root@master231 ingress]# helm repo add traefik https://traefik.github.io/charts
"traefik" has been added to your repositories
[root@master231 ingress]# 

	2 更新仓库
[root@master231 ingress]#  helm repo update 

	3 拉取chart并解压
[root@master231 ingress]# helm pull  traefik/traefik
[root@master231 ingress]# 
[root@master231 ingress]# ll
total 104
-rw-r--r-- 1 root root 104831 Aug 31 19:26 traefik-24.0.0.tgz
[root@master231 ingress]# 
[root@master231 ingress]# tar xf traefik-24.0.0.tgz 
[root@master231 ingress]# 
[root@master231 ingress]# ll
total 104
drwxr-xr-x 4 root root    181 Aug 31 19:26 traefik
-rw-r--r-- 1 root root 104831 Aug 31 19:26 traefik-24.0.0.tgz
[root@master231 ingress]# 

	4 修改svc的类型,将默认的LoadBalancer改为NodePort;咱们的环境如果部署了"metallb"模拟负载均衡器(相当于云环境自带LB),则可以不改。
[root@master231 ingress]# vim traefik/values.yaml 
...

service:
  ...
  # 将719行注释并修改为NodePort
  # type: LoadBalancer
  type: NodePort
  # 将181行dashboard的enabled设置为true,否则无法访问dashboard页面
179   dashboard:
180     # -- Create an IngressRoute for the dashboard
181     enabled: true



	5 安装traefik
[root@master231 ingress-traefik]# kubectl create ns supershy-traefik
namespace/supershy-traefik created
[root@master231 ingress-traefik]# 
[root@master231 ingress-traefik]#  helm install mytraefik traefik -n supershy-traefik


	6 暴露traefik的dashboard
[root@master231 ~]# kubectl port-forward $(kubectl get pods --selector "app.kubernetes.io/name=traefik" --output=name -n supershy-traefik) 9000:9000 --address 0.0.0.0 -n supershy-traefik

	7 访问dashboard
http://10.0.0.231:9000/dashboard/#/



参考链接:
	https://doc.traefik.io/traefik/getting-started/install-traefik/#use-the-helm-chart
-  编写ingress http规则测试traefik的可用性
	1 准备测试环境
[root@master231 ingress-demo]# cat deploy-apps.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apps-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v1
  template:
    metadata:
      labels:
        apps: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:v1
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apps-v1
spec:
  selector:
    apps: v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apps-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v2
  template:
    metadata:
      labels:
        apps: v2
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:v2
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apps-v2
spec:
  selector:
    apps: v2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apps-v3
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: v3
  template:
    metadata:
      labels:
        apps: v3
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:v3
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apps-v3
spec:
  selector:
    apps: v3
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
[root@master231 ingress-demo]# 

	
	2 编写ingress规则
[root@master231 ingress-demo]# cat ingress-apps.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: supershy-traefik-apps
  # annotations:
  #   kubernetes.io/ingress.class: traefik
spec:
  ingressClassName: mytraefik
  # 定义Ingress规则
  rules:
    # 访问的主机名
  - host: v1.supershy.com
    # 定义http的相关规则
    http:
      paths:
        # 指定后端svc信息
      - backend:
          # 定义svc信息
          service:
             # 定义svc的名称
             name: svc-apps-v1
             # 定义svc的端口
             port:
               number: 80
        # 指定匹配的类型,此处我们使用前缀匹配即可,容错性强
        pathType: Prefix
        path: "/"
  - host: v2.supershy.com
    http:
      paths:
      - backend:
          service:
             name: svc-apps-v2
             port: 
               number: 80
        pathType: Prefix
        path: "/"
  - host: v3.supershy.com
    http:
      paths:
      - backend:
          service:
             name: svc-apps-v3
             port: 
               number: 80
        pathType: Prefix
        path: "/"
[root@master231 ingress-demo]# 


	3 验证并访问Ingress翻译器的随机端口
[root@master231 ingress-demo]# kubectl describe ingress supershy-traefik-apps 
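
	4 获取traefik svc的NodePort并带域名头测试(svc名称与NodePort为示意,以实际查询结果为准)
[root@master231 ingress-demo]# kubectl -n supershy-traefik get svc
# 假设80端口(web)映射到的NodePort是30080,则:
[root@master231 ingress-demo]# curl -H "Host: v1.supershy.com" http://10.0.0.231:30080/
[root@master231 ingress-demo]# curl -H "Host: v2.supershy.com" http://10.0.0.231:30080/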

chartmuseum私有仓库

- 使用k8s搭建chartmuseum私有仓库
	# 1.编写资源清单
[root@master231 /supershy/chartmuseum]# cat chartmuseum.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: chartmuseum
  namespace: chartmuseum 
  labels:
    app: chartmuseum 
spec:
  containers:
  - image: ghcr.io/helm/chartmuseum:v0.16.2
    name: chartmuseum
    ports:
    - containerPort: 8080
      protocol: TCP
    env:
    - name: DEBUG
      value: "1"
    - name: STORAGE
      value: local
    - name: STORAGE_LOCAL_ROOTDIR
      value: /charts
    volumeMounts:
    - name: chartspath
      mountPath: /charts
  volumes:
  - name: chartspath
    nfs:
      server: master231
      # 这个目录需要777权限,chmod 777 /supershy/data/kubernetes/charts
      path: /supershy/data/kubernetes/charts

---

apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  namespace: chartmuseum
  labels:
    app: chartmuseum
spec:
  selector:
    app: chartmuseum
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080 
[root@master231 /supershy/manifests/helm]# 

    # 2.推送chart
[root@master231 /supershy/manifests/helm]# helm package elasticsearch-exporter/
Successfully packaged chart and saved it to: /supershy/manifests/helm/elasticsearch-exporter-0.1.2.tgz

[root@master231 /supershy/manifests/helm]# ll
total 16244
drwxr-xr-x 3 root root       77 Aug 31 11:03 elasticsearch-exporter
-rw-r--r-- 1 root root     3772 Aug 31 13:37 elasticsearch-exporter-0.1.2.tgz

[root@master231 /supershy/manifests/helm]# curl --data-binary "@elasticsearch-exporter-0.1.2.tgz" http://10.0.0.232:5829/api/charts
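
    # 3.把chartmuseum作为helm仓库添加进来,验证推送是否成功(仓库名mychartmuseum为示意,地址沿用上面的环境)
[root@master231 /supershy/manifests/helm]# helm repo add mychartmuseum http://10.0.0.232:5829
[root@master231 /supershy/manifests/helm]# helm repo update
[root@master231 /supershy/manifests/helm]# helm search repo elasticsearch-exporter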

项目篇

1. kubeadm实现K8S集群的扩容、缩容

1)扩容

	- kubelet节点首次启动流程,bootstrap阶段
		推荐阅读: 
		   https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/ 
	
	# 初始化前集群状态
[root@master231 ~]# kubectl get po -A -o wide|egrep 'flannel|kube-proxy'
kube-flannel           kube-flannel-ds-5qzl7                        1/1     Running   19 (31h ago)   9d      10.0.0.233     worker233   <none>           <none>
kube-flannel           kube-flannel-ds-f2bzd                        1/1     Running   34 (29h ago)   23d     10.0.0.232     worker232   <none>           <none>
kube-flannel           kube-flannel-ds-v6zld                        1/1     Running   28 (31h ago)   23d     10.0.0.231     master231   <none>           <none>
kube-system            kube-proxy-4dl6p                             1/1     Running   20 (29h ago)   11d     10.0.0.232     worker232   <none>           <none>
kube-system            kube-proxy-dv99k                             1/1     Running   14 (31h ago)   11d     10.0.0.231     master231   <none>           <none>
kube-system            kube-proxy-g7kzd                             1/1     Running   18 (31h ago)   11d     10.0.0.233     worker233   <none>           <none>

	
	0.添加hosts文件的解析
[root@worker234 ~]# cat /etc/hosts
10.0.0.231 master231
10.0.0.232 worker232
10.0.0.233 worker233
10.0.0.250 harbor.supershy.com
10.0.0.234 worker234

	1.初始化节点步骤
扩容节点需要有docker和kubeadm-kubelet-kubectl环境,参考 “部署节点初始化环境笔记”


	2.master创建token
[root@master231 ~]# kubeadm token list
[root@master231 ~]# 
	# --ttl 0 表示生成永久token,不指定默认保存24小时
	# --print-join-command 打印出来在worker节点执行的命令
[root@master231 ~]# kubeadm token create  oldboy.supershyjason --ttl 0 --print-join-command
kubeadm join 10.0.0.231:6443 --token oldboy.supershyjason --discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113 
[root@master231 ~]# 
[root@master231 ~]# kubectl get secrets -n kube-system | grep bootstrap-token-oldboy
bootstrap-token-oldboy                           bootstrap.kubernetes.io/token         5      34s
[root@master231 ~]# 
[root@master231 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS
oldboy.supershyjason   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@master231 ~]# 


	3.工作节点加入现有的集群
[root@worker234 ~]# swapoff -a
[root@worker234 ~]# 
[root@worker234 ~]# systemctl enable --now kubelet.service
[root@worker234 ~]# 
[root@worker234 ~]# kubeadm join 10.0.0.231:6443 --token oldboy.supershyjason --discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113

可能会遇到的故障: cni0和flannel.1不在同一个网段,导致该节点网络插件不能正常工作。
[root@worker234 ~]# ifconfig 
cni0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.100.4.1  netmask 255.255.255.0  broadcast 10.100.4.255
        inet6 fe80::b8b0:89ff:fe9f:f871  prefixlen 64  scopeid 0x20<link>
        ether ba:b0:89:9f:f8:71  txqueuelen 1000  (Ethernet)
        RX packets 157  bytes 4396 (4.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 266 (266.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.100.3.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::d42f:c9ff:fecd:2699  prefixlen 64  scopeid 0x20<link>
        ether d6:2f:c9:cd:26:99  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@worker234 ~]# 


解决方案:
ip link del cni0 
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.3.1/24 dev cni0
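
补充说明:cni0的网段必须与该节点被分配的podCIDR(即flannel.1所在的网段)一致,重建网桥前可以先确认,上面示例中的10.100.3.1/24即据此得出:
[root@master231 ~]# kubectl get node worker234 -o jsonpath='{.spec.podCIDR}'
# 输出类似 10.100.3.0/24,则cni0应配置为10.100.3.1/24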

2)缩容

	1.驱逐已经调度worker234节点的所有Pod
[root@master231 01-nodescale]# kubectl drain worker234 --ignore-daemonsets

	2.实际上drain是为被驱逐的节点打了污点
[root@master231 01-nodescale]# kubectl describe nodes | grep Taints -A 2  # 除了标记scheduler不可调度外,还为节点配置了"node.kubernetes.io/unschedulable:NoSchedule"污点
Taints:             node-role.kubernetes.io/master:NoSchedule
                    school=supershy:NoSchedule
Unschedulable:      false
--
Taints:             school=supershy:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             school=supershy:NoSchedule
Unschedulable:      false
Lease:
--
Taints:             school=supershy:NoSchedule
					node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
[root@master231 01-nodescale]# 

	3.停止worker234节点上报状态
[root@worker234 ~]# systemctl disable --now kubelet
Removed symlink /etc/systemd/system/multi-user.target.wants/kubelet.service.
[root@worker234 ~]# 

	4.重置worker234节点的所有配置并重新安装操作系统给其他部门同事使用
[root@worker234 ~]# kubeadm reset -f

	5.master节点从etcd中移除worker234
[root@master231 01-nodescale]# kubectl delete nodes worker234 
node "worker234" deleted
[root@master231 01-nodescale]# 
[root@master231 01-nodescale]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   13d   v1.23.17
worker232   Ready    <none>                 13d   v1.23.17
worker233   Ready    <none>                 13d   v1.23.17
[root@master231 01-nodescale]# 

2. kubeadm实现证书升级

问题复现:
[root@worker233 ~]# tail -100f /var/log/messages  # 类似这样的报错就是证书过期导致的,需要解决后才能让worker节点成功加入集群。
...  
Feb 22 00:02:58 worker233 kubelet: E0222 00:02:58.231209   23952 kubelet.go:2469] "Error getting node" err="node \"worker233\" not found"
...
Feb 22 00:00:41 worker233 kubelet: E0222 00:00:41.311327   23952 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.231:6443/api/v1/nodes\": x509: certificate has expired or is not yet valid: current time 2025-02-22T00:00:41+08:00 is after 2025-01-21T06:53:56Z" node="worker233"

1)master节点证书升级方案

- kubeadm证书master升级方案
	0.修改静态Pod的kube-controller-manager资源清单
[root@master231 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml 
...
spec:
  containers:
  - command:
    - kube-controller-manager
	...
	# 所签名证书的有效期限。每个 CSR 可以通过设置 spec.expirationSeconds 来请求更短的证书。
    - --cluster-signing-duration=87600h0m0s

    # 启用cm自动签发CSR证书,可以不配置,默认就是启用的,但是建议配置上!害怕未来版本发生变化!
    - --feature-gates=RotateKubeletServerCertificate=true
	0.验证kube-controller-manager是否启动成功
[root@master231 ~]# kubectl get pod -n kube-system  kube-controller-manager-master231

	1.检查证书有效期
[root@master231 ~]# kubeadm certs check-expiration
	2.检查CA证书的有效期
[root@master231 ~]# openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -text -noout
	3.查看worker节点kubelet的证书
[root@worker233 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet.crt -text -noout
	4.为master节点办理证书自动续期1年
[root@master231 ~]# kubeadm certs renew all
	5.再次查看证书续期
[root@master231 ~]# date 
Mon Jan 22 14:52:33 CST 2024
[root@master231 ~]# 
[root@master231 ~]# kubeadm certs check-expiration
	6.再次观察master组件的相关证书,发现的确升级续期成功啦
[root@master231 ~]# 
[root@master231 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
	7.但是worker节点的证书,并没有成功续期
[root@worker233 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout

2)worker节点证书升级方案

- kubeadm证书worker升级方案
	1.查看worker节点的证书的有效期
[root@worker233 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout
	2.要求kubelet的配置文件中支持证书滚动,默认是启用的,无需配置。
[root@worker233 ~]# vim /var/lib/kubelet/config.yaml 
...
rotateCertificates: true
	3.在worker节点把系统时间修改为接近证书过期的时间(不要超过过期时间),然后重启kubelet
[root@worker233 ~]#  date -s "2025-1-5"  # 注意这个时间不要超过之前颁发证书的过期时间,最好控制在过期前2天以内。
Sun Jan  5 00:00:00 CST 2025
[root@worker233 ~]# 
[root@worker233 ~]# systemctl restart kubelet
	4.查看证书有效期
[root@worker233 ~]# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout

如果你修改的时间超过了证书的过期时间,可以尝试重启一下节点试试看,如果还不行,则可以考虑使用集群缩容,扩容方案重新加入节点即可。


	5.将时间同步回去
[root@worker233 ~]# ntpdate ntp.aliyun.com
22 Jan 15:34:29 ntpdate[15500]: step time server 203.107.6.88 offset -0.714176 sec
[root@worker233 ~]#  


温馨提示:
	生产环境中对于worker证书升级应该注意的事项:
		- 对证书有效期进行监控,很多开源组件都支持,比如zabbix,prometheus等;
		- 在重启kubelet节点时,应该注意滚动更新,不要批量重启,避免Pod大面积无法访问的情况,从而造成业务的损失,甚至生产故障;
		- 尽量在业务的低谷期做升级操作,影响最小;

3. jenkins

1)Jenkins集成k8s之模拟开发人员开发代码并编写Dockerfile

- Jenkins集成k8s之模拟开发人员开发代码并编写Dockerfile
	1.下载医疗软件包,模拟开发写的代码
[root@kaifaji yiliao]# wget http://192.168.11.253/Linux89/Kubernetes/day20-/softwares/jenkins/supershy-yiliao.zip


	2.解压开发人员的代码
[root@kaifaji yiliao]# unzip supershy-yiliao.zip 
[root@kaifaji yiliao]#
[root@kaifaji yiliao]# rm -f supershy-yiliao.zip 


	3.编写Dockerfile
[root@kaifaji yiliao]# cat Dockerfile 
FROM registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:apple

MAINTAINER JasonYin

LABEL school=supershy \
      class=linux89 \
      auther="Jason Yin"

COPY . /usr/share/nginx/html
[root@kaifaji yiliao]# 


	4.编写docker-compose配置文件
[root@kaifaji yiliao]# cat docker-compose.yaml 
version: "3.8"
services:
   yiliao:
     image: harbor.supershy.com/jenkins-project/yiliao:v1
     build:
       context: .
       dockerfile: Dockerfile
[root@kaifaji yiliao]# 



- Jenkins集成k8s之模拟开发人员将代码推送到gitee
	1.创建gitee账号并登录
https://gitee.com/

	2.创建测试项目
自行创建即可,推荐项目名称为"supershy-linux89-yiliao"

	3.Git 全局设置: (请设置自己的用户名,不要直接复制)
[root@kaifaji yiliao]# yum -y install git
[root@kaifaji yiliao]# git config --global user.name "jasonyin2020"
[root@kaifaji yiliao]# git config --global user.email "y1053419035@qq.com"

	4.初始化项目目录
[root@kaifaji ~]# mv yiliao supershy-linux89-yiliao && cd supershy-linux89-yiliao
[root@kaifaji supershy-linux89-yiliao]# git init 
Initialized empty Git repository in /root/supershy-linux89-yiliao/.git/
[root@kaifaji supershy-linux89-yiliao]# 

	5.提交代码到本地仓库
[root@kaifaji supershy-linux89-yiliao]# git add .
[root@kaifaji supershy-linux89-yiliao]# git commit -m "我知道第1次提交,...."
[root@kaifaji supershy-linux89-yiliao]# git remote add origin https://gitee.com/jasonyin2020/supershy-linux89-yiliao.git
[root@kaifaji supershy-linux89-yiliao]# git push -u origin "master"
Username for 'https://gitee.com': jasonyin2020   # 输入你的自己的用户名。
Password for 'https://jasonyin2020@gitee.com':   # 输入你自己的密码。

2)Jenkins集成k8s之Ubuntu系统部署Jenkins实战.

- Jenkins集成k8s之Ubuntu系统部署Jenkins实战.
参考链接:
	https://www.oracle.com/java/technologies/downloads/#java17
	https://pkg.jenkins.io/debian-stable/
	https://mirrors.jenkins-ci.org/debian-stable/
	https://mirrors.jenkins-ci.org/

	1.安装JDK环境
root@jenkins211:~# wget http://192.168.11.253/Linux89/Kubernetes/day20-/softwares/jenkins/jdk-17_linux-x64_bin.tar.gz
root@jenkins211:~# mkdir -pv /supershy/softwares
root@jenkins211:~# tar xf jdk-17_linux-x64_bin.tar.gz -C /supershy/softwares/
root@jenkins211:~# 
root@jenkins211:~# cat  /etc/profile.d/jdk.sh
#!/bin/bash

export JAVA_HOME=/supershy/softwares/jdk-17.0.8
export PATH=$PATH:$JAVA_HOME/bin
root@jenkins211:~# 
root@jenkins211:~# source  /etc/profile.d/jdk.sh
root@jenkins211:~# 
root@jenkins211:~# 
root@jenkins211:~# java --version
java 17.0.8 2023-07-18 LTS

	2.安装Jenkins环境
root@jenkins211:~# wget -O /usr/share/keyrings/jenkins-keyring.asc  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
 
root@jenkins211:~# echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]     https://pkg.jenkins.io/debian-stable binary/ | tee     /etc/apt/sources.list.d/jenkins.list > /dev/null
 
root@jenkins211:~# apt-get update

root@jenkins211:~# apt-get -y install fontconfig 


# java.lang.NullPointerException: Cannot load from short array because "sun.awt.FontConfiguration.head" is null
# 温馨提示,如果遇到上述问题,说明你没有安装fontconfig 软件包。


root@jenkins211:~# wget http://192.168.11.253/Linux89/Kubernetes/day20-/softwares/jenkins/jenkins_2.426.2_all.deb

root@jenkins211:~# dpkg -i jenkins_2.426.2_all.deb 


	3.修改Jenkins的启动脚本并重启服务
root@jenkins211:~# vim /lib/systemd/system/jenkins.service
...
User=root
Group=root
Environment="JAVA_HOME=/supershy/softwares/jdk-17.0.8"
...

	4.重启Jenkins
root@jenkins211:~# systemctl daemon-reload
root@jenkins211:~# systemctl restart jenkins


	5.访问Jenkins的WebUI
http://10.0.0.211:8080/

	6.设置密码
root@jenkins211:~# cat /var/lib/jenkins/secrets/initialAdminPassword
1f385080b35d416094ffc2bb4b5ba263
root@jenkins211:~# 

3)jenkins实现自动化代码拉取部署实战

	1.添加hosts解析
root@jenkins211:~# echo 10.0.0.250 harbor.supershy.com >> /etc/hosts


	2.拷贝harbor秘钥到Jenkins节点
[root@master231 ~]# scp -r /etc/docker/certs.d/ 10.0.0.211:/etc/docker/


	3.Jenkins登录harbor仓库
root@jenkins211:~# docker login -u admin -p 1 harbor.supershy.com

- Jenkins集成k8s之实现代码的部署
	1.安装kubectl工具
软件包链接:
	https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#server-binaries
    https://dl.k8s.io/v1.23.17/kubernetes-server-linux-amd64.tar.gz
    # 在这个压缩包中server目录下有kubectl文件,拷贝出来直接使用

root@jenkins211:~# 
root@jenkins211:~# mv  kubectl  /usr/local/bin/
root@jenkins211:~# 
root@jenkins211:~# chmod +x /usr/local/bin/kubectl 
root@jenkins211:~# 
root@jenkins211:~# ll /usr/local/bin/kubectl
-rwxr-xr-x 1 root root 45174784 Sep  4 10:27 /usr/local/bin/kubectl*
	
	2.拷贝kubeconfig文件
root@jenkins211:~# 
root@jenkins211:~# mkdir -pv /root/.kube
root@jenkins211:~# 
root@jenkins211:~# scp 10.0.0.231:/root/.kube/config   .kube/

	3.一键部署Jenkins环境
root@jenkins211:~# cat /supershy/manifests/yiliao/deploy-yiliao.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-yiliao
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: yiliao
  template:
    metadata:
      labels:
        apps: yiliao
    spec:
      tolerations:
      - operator: Exists
        key: node-role.kubernetes.io/master 
      - operator: Equal
        key: school
        value: supershy
        effect: NoSchedule
        # 注意,千万别直接无视污点,否则缩容的效果就看不到啦,生产环境中禁止使用。
      # - operator: Exists
      containers:
      - name: c1
        image: harbor.supershy.com/jenkins-project/yiliao:v1
        ports:
        - containerPort: 80
root@jenkins211:~# 
root@jenkins211:~# 
root@jenkins211:~# cat /supershy/manifests/yiliao/svc-yiliao.yaml 
apiVersion: v1
kind: Service
metadata:
  name: supershy-yiliao
spec:
  selector:
    apps: yiliao
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
root@jenkins211:~# 
root@jenkins211:~# cat /supershy/manifests/yiliao/ingress-yiliao.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: supershy-traefik-yiliao
  # annotations:
  #   kubernetes.io/ingress.class: traefik
spec:
  # 这里使用的是traefik环境,traefik的svc类型是LoadBalancer
  # 如果使用ingress-nginx,需要把上面svc的type改为NodePort或LoadBalancer
  ingressClassName: mytraefik
  rules:
  - host: yiliao.supershy.com
    http:
      paths:
      - backend:
          service:
             name: supershy-yiliao
             port:
               number: 80
        # 使用ingress-nginx这里需要改为ImplementationSpecific
        pathType: Prefix
        path: "/"
root@jenkins211:~# 

	4.编写Dockerfile推送镜到harbor
[root@kaifaji ~/supershy-linux89-yiliao]# ls
about.html  album.html  article_detail.html  article.html  comment.html  contact.html  css  Dockerfile  images  index.html  js  product_detail.html  product.html  README.md
[root@kaifaji ~/supershy-linux89-yiliao]# cat Dockerfile 
FROM registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:apple

MAINTAINER JasonYin

LABEL school=supershy \
      class=linux89 \
      auther="Jason Yin"

COPY . /usr/share/nginx/html



	4.Jenkins部署医疗项目shell
	4.1 jenkins中设置$version参数化构建变量,用于传递参数
自定义选项参数变量名称: version

v1 
v2
v3
v4
v5
...
	4.2 shell中添加命令
docker build -t harbor.supershy.com/jenkins-project/yiliao:${version:-v1} .
docker push harbor.supershy.com/jenkins-project/yiliao:${version:-v1}
sleep 5
#首次构建使用apply,更新版本用set image
kubectl apply -f /supershy/manifests/yiliao/
# kubectl set image deploy deploy-yiliao c1=harbor.supershy.com/jenkins-project/yiliao:$version

1. 代码版本更新

	4.Jenkins部署医疗项目shell
	4.1 jenkins中设置$version参数化构建变量,用于传递参数
自定义选项参数变量名称: version

v1 
v2
v3
v4
v5
...
	4.2 shell中添加命令
docker build -t harbor.supershy.com/jenkins-project/yiliao:${version:-v1} .
docker push harbor.supershy.com/jenkins-project/yiliao:${version:-v1}
sleep 5
#首次构建使用apply,更新版本用set image
# kubectl apply -f /supershy/manifests/yiliao/
kubectl set image deploy deploy-yiliao c1=harbor.supershy.com/jenkins-project/yiliao:$version

2. 代码版本回滚

- Jenkins集成k8s之代码的回滚
	1.添加选项参数
自定义选项参数变量名称: tag 

v1 
v2
v3
v4
v5
...


	2.添加shell
kubectl set image deploy deploy-yiliao c1=harbor.supershy.com/jenkins-project/yiliao:$tag

3. K8S集成Jenkins后续优化思路展望

K8S集成Jenkins后续优化思路展望:
	1.使用pipeline改写项目;
	2.如何判断第一次构建三种思路:
		思路一,使用Jenkins自带的版本ID,BUILD_ID
			http://10.0.0.211:8080/env-vars.html/
			
		思路二: 使用CMDB数据库,判断服务是否部署,若没有部署则apply,若部署了则set。
		
		思路三: 通过shell命令的返回值判断服务是否已经部署(整合后的构建脚本见本节末尾的示例)。
[root@master231 02-jenkins]# kubectl get deployments.apps deploy-yiliao2 &>/dev/null 
[root@master231 02-jenkins]# 
[root@master231 02-jenkins]# echo $? 
1
[root@master231 02-jenkins]# kubectl get deployments.apps deploy-yiliao &>/dev/null 
[root@master231 02-jenkins]# 
[root@master231 02-jenkins]# echo $? 
0
[root@master231 02-jenkins]# 

	3.实现钉钉告警等功能。
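
基于思路三,可以把上面两段构建shell整合成一个脚本(示意,镜像与资源清单路径沿用上文,首次构建自动apply,否则滚动更新):
docker build -t harbor.supershy.com/jenkins-project/yiliao:${version:-v1} .
docker push harbor.supershy.com/jenkins-project/yiliao:${version:-v1}
sleep 5
if kubectl get deployments.apps deploy-yiliao &>/dev/null; then
    # 已部署过,直接滚动更新镜像
    kubectl set image deploy deploy-yiliao c1=harbor.supershy.com/jenkins-project/yiliao:${version:-v1}
else
    # 首次部署,创建deploy/svc/ingress等资源
    kubectl apply -f /supershy/manifests/yiliao/
fi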

4. ELFK架构采集k8s日志

0)架构图

(架构图略:filebeat采集各节点Pod日志 -> elasticsearch -> kibana展示)

1)es+kibana单点部署

- k8s部署单节点的ES服务(生产环境不建议用k8s部署es集群;这里是学习环境,虚拟机资源有限,故使用单节点)
[root@master231 03-elasticstack]# cat deploy-es.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-es7
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: elasticsearch
  template:
    metadata:
      labels:
        apps: elasticsearch
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: es7-pvc
      tolerations:
      - key: school
        value: supershy
        effect: NoSchedule
        operator: Equal
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: es
        # 参考链接:
        #    https://www.elastic.co/guide/en/elasticsearch/reference/7.17/docker.html
        # image: docker.elastic.co/elasticsearch/elasticsearch:7.17.16
        image: harbor.supershy.com/elasticstack-project/elasticsearch:7.17.16
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: tcp
        env:
        - name: discovery.type
          value: single-node
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
[root@master231 03-elasticstack]# 
[root@master231 03-elasticstack]# cat svc-es.yaml 
apiVersion: v1
kind: Service
metadata:
  name: es7-svc
spec:
  type: ClusterIP
  selector:
    apps: elasticsearch
  ports:
    - protocol: TCP
      name: p1
      port: 9200
      targetPort: 9200
    - protocol: TCP
      name: p2
      port: 9300
      targetPort: 9300
[root@master231 03-elasticstack]# 
[root@master231 03-elasticstack]# 
[root@master231 03-elasticstack]# cat ing-es.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: es7-ingress
#  annotations:
#    kubernetes.io/ingress.class: traefik
spec:
  ingressClassName: mytraefik
  rules:
  - host: es.supershy.com
    http:
      paths:
      - backend:
          service:
            name: es7-svc
            port:
              number: 9200
        path: /
        pathType: ImplementationSpecific
[root@master231 03-elasticstack]# 
[root@master231 03-elasticstack]# cat pvc-es.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es7-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    limits:
       storage: 20Gi
    requests:
       storage: 10Gi
[root@master231 03-elasticstack]# 



- k8s部署kibana 
[root@master231 03-elasticstack]# cat deploy-kibana.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: kibana
  template:
    metadata:
      labels:
        apps: kibana
    spec:
      nodeName: worker233
      tolerations:
      - key: school
        value: supershy
        effect: NoSchedule
        operator: Equal
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      containers:
      - name: es
        # 参考链接:
        #    https://www.elastic.co/guide/en/kibana/7.17/docker.html
        # image: docker.elastic.co/kibana/kibana:7.17.16
        image: harbor.supershy.com/elasticstack-project/kibana:7.17.16
        env:
          - name: ELASTICSEARCH_HOSTS
            value: http://es7-svc.default.svc.supershy.com:9200
          - name: I18N_LOCALE
            value: zh-CN
        ports:
        - containerPort: 5601
          name: kibana
        resources:
          limits:
            cpu: 2
            memory: 3Gi
          requests:
            cpu: 0.5 
            memory: 500Mi
[root@master231 03-elasticstack]# 
[root@master231 /supershy/project/elfk]# cat svc-kibana.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kibana-svc
spec:
  type: ClusterIP
  selector:
    apps: kibana
  ports:
    - protocol: TCP
      name: k1
      port: 5601
      targetPort: 5601
[root@master231 /supershy/project/elfk]# 

[root@master231 03-elasticstack]# cat ing-kibana.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  #annotations:
  #  kubernetes.io/ingress.class: traefik
spec:
  ingressClassName: mytraefik
  rules:
  - host: kibana.supershy.com
    http:
      paths:
      - backend:
          service:
            name: kibana-svc
            port:
              number: 5601
        path: /
        pathType: ImplementationSpecific
[root@master231 03-elasticstack]# 

2)filebeat采集pod内服务日志

- 部署filebeat采集Pod日志实战
[root@master231 01-elastic-on-k8s]# cat deploy-filebeat-all-in-one.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        path: ${path.config}/inputs.d/*.yml
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false


    #output.logstash:
    #  hosts: ["10.0.0.234:9999"]
      
    output.elasticsearch:
      hosts: ['es7-svc:9200']
      # 不建议修改索引,因为索引名称更改成功后,pod的数据也将收集不到啦!但是7.10.2版本是可以采集到的。
      # 除非你明确知道自己不收集Pod日志且需要自定义索引名称的情况下,可以打开下面的注释哟~
      index: 'supershy-linux-elk-xixi-haha-%{+yyyy.MM.dd}'
    
    # 配置索引模板,向logstash/kafka中output注释掉索引模板
    setup.ilm.enabled: false
    setup.template.name: "supershy-linux-elk"
    setup.template.pattern: "supershy-linux-elk*"
    setup.template.overwrite: true
    setup.template.settings:
      index.number_of_shards: 3
      index.number_of_replicas: 0

---

# 注意,官方在filebeat 7.2就已经废弃docker类型,建议后期更换为container.
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true

---

apiVersion: apps/v1 
kind: DaemonSet
metadata:
  name: ds-filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
        operator: Exists
      - key: school
        value: supershy
        effect: NoSchedule
        operator: Equal
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        # image: elastic/filebeat:7.17.16
        image: harbor.supershy.com/elasticstack-project/filebeat:7.17.16
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          # 若容器运行时为containerd,容器日志路径会有所改变
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: default
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat

[root@master231 01-elastic-on-k8s]# 
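
温馨提示:
	上面filebeat-inputs中使用的docker输入类型已被官方废弃,可以参考下面的写法改为container类型(片段示意,日志路径以实际容器运行时为准,仍需挂载对应的宿主机目录):
  kubernetes.yml: |-
    - type: container
      paths:
        - /var/lib/docker/containers/*/*.log
      processors:
        - add_kubernetes_metadata:
            in_cluster: true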

5. k8s集群迁移kubesphere

0. 概述

-  K8S集群迁移kubesphere
	1 项目概述
基于命令行方式管理K8S集群有一定难度,尤其是资源清单编写起来很费劲、容易忘记字段,而且集群资源一多,查看和管理都不方便。


以图形化管理K8S集群的方案应运而生,典型代表有dashboard,rancher,kubesphere,kuboard等。

kubesphere底层基于kubeadm快速部署K8S集群,并提供了图形化管理K8S集群资源,且对K8S已有资源做了二次开发,对很多资源进行二次封装。

推荐阅读:
	https://www.kubesphere.io/zh/docs/v3.4/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/

1. K8S集群一键部署kubesphere实战

	2 K8S集群一键部署kubesphere实战
2.1 下载配置文件
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml


2.2 安装服务,这里注意apply顺序
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml


2.3 验证安装
[root@master231 04-kubesphere]# kubectl get pods -n kubesphere-system 
NAME                            READY   STATUS    RESTARTS   AGE
ks-installer-6676cdd45f-s7jh8   1/1     Running   0          2m43s
[root@master231 04-kubesphere]# 
[root@master231 04-kubesphere]# kubectl -n kubesphere-system logs -f ks-installer-6676cdd45f-s7jh8
...
**************************************************
Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
task monitoring status is successful  (4/4)
**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://10.0.0.231:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-01-23 16:55:43
#####################################################



2.4 登录认证

修改密码: Supershy@2024

2. 裸机部署

- 裸机部署kubesphere集群

推荐阅读:
	https://www.kubesphere.io/zh/docs/v3.4/installing-on-linux/on-premises/install-kubesphere-on-bare-metal/


1.环境准备
	1.准备主机环境
cat >> /etc/hosts <<EOF
10.0.0.166 node166
10.0.0.167 node167
10.0.0.168 node168
EOF

	2 安装依赖工具
curl  -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum -y install chrony openssl openssl-devel socat conntrack-tools wget

	3 配置时间同步
systemctl enable --now chronyd
timedatectl set-ntp true
timedatectl set-timezone Asia/Shanghai
chronyc activity -v

	4 禁用防火墙并关闭selinux
systemctl disable --now firewalld
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config



2.部署kubesphere
	2.1 下载kubekey软件包
wget https://github.com/kubesphere/kubekey/releases/download/v3.1.0-alpha.4/kubekey-v3.1.0-alpha.4-linux-amd64.tar.gz

# wget https://github.com/kubesphere/kubekey/releases/download/v3.1.0-alpha.7/kubekey-v3.1.0-alpha.7-linux-amd64.tar.gz


tar xf kubekey-v3.1.0-alpha.4-linux-amd64.tar.gz


	2.2 创建安装集群的模板文件
export KKZONE=cn
./kk create config --with-kubernetes v1.21.14 --with-kubesphere v3.4.0

 

	2.3 修改模板文件自定义集群配置
[root@node166 ~]#  vim config-sample.yaml 
...

apiVersion: kubekey.kubesphere.io/v1alpha2
...
spec:
  hosts:
  - {name: node166, address: 10.0.0.166, internalAddress: 10.0.0.166, user: root, password: "supershy"}
  - {name: node167, address: 10.0.0.167, internalAddress: 10.0.0.167, user: root, password: "supershy"}
  - {name: node168, address: 10.0.0.168, internalAddress: 10.0.0.168, user: root, password: "supershy"}
  roleGroups:
    etcd:
    - node166
    - node167
    - node168
    control-plane: 
    - node166
    worker:
    - node166
    - node167
    - node168
  ...
  kubernetes:
    ...
    clusterName: supershy.com
  network:
    plugin: calico
    ...
    kubePodsCIDR: 10.100.0.0/16
    kubeServiceCIDR: 10.200.0.0/16


	2.4 使用自定义的配置文件创建集群
[root@node166 ~]# ./kk create cluster -f config-sample.yaml


...
 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

18:35:36 CST [GreetingsModule] Greetings
18:35:36 CST message: [node168]
Greetings, KubeKey!
18:35:36 CST message: [node166]
Greetings, KubeKey!
18:35:36 CST message: [node167]
Greetings, KubeKey!
18:35:36 CST success: [node168]
18:35:36 CST success: [node166]
18:35:36 CST success: [node167]
18:35:36 CST [NodePreCheckModule] A pre-check on nodes
18:35:37 CST success: [node167]
18:35:37 CST success: [node168]
18:35:37 CST success: [node166]
18:35:37 CST [ConfirmModule] Display confirmation form
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name    | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| node166 | y    | y    | y       | y        | y     | y     |         | y         | y      |        |            |            |             |                  | CST 18:35:37 |
| node167 | y    | y    | y       | y        | y     | y     |         | y         | y      |        |            |            |             |                  | CST 18:35:37 |
| node168 | y    | y    | y       | y        | y     | y     |         | y         | y      |        |            |            |             |                  | CST 18:35:37 |
+---------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes

...
 
我的网速比较快(10M/s),大概15-20分钟就完成了。

...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://10.0.0.166:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-01-23 18:58:31
#####################################################
18:58:33 CST success: [node166]
18:58:33 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

	kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

[root@node166 ~]#  

6. 切换flannel工作模式

Flannel的工作模式:
	- udp:
		早期支持的一种工作模式,由于性能差,目前官方已弃用。
	
	- vxlan:
		将源数据报文进行封装为二层报文(需要借助物理网卡转发),进行跨主机转发。

	- host-gw:
		将容器网络的路由信息写到宿主机的路由表上。尽管效率高,但不支持跨网段。
		
	- directrouting:
		将vxlan和host-gw两种模式结合:同网段的节点间走host-gw直接路由,跨网段则回退为vxlan封装(如何确认当前Backend见下面的提示)。
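
温馨提示:
	切换前后都可以直接查看flannel的ConfigMap来确认当前生效的Backend类型。下面按官方清单默认的kube-flannel命名空间与kube-flannel-cfg名称示意(旧版清单可能部署在kube-system命名空间,请按实际环境调整):
[root@master231 cni]# kubectl -n kube-flannel get cm kube-flannel-cfg -o yaml | grep -A 3 Backend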
		
		
- 切换flannel的工作模式为"vxlan"
	1.修改配置文件
[root@master231 cni]# vim kube-flannel.yml
...
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }


	2.重新创建资源
[root@master231 cni]# kubectl delete -f kube-flannel.yml 
[root@master231 cni]# kubectl apply -f kube-flannel.yml 

	3.检查网络
[root@worker232 ~]# ip route 
default via 10.0.0.254 dev ens33 proto static metric 100 
10.0.0.0/24 dev ens33 proto kernel scope link src 10.0.0.232 metric 100 
10.100.0.0/24 via 10.100.0.0 dev flannel.1 onlink 
10.100.1.0/24 via 10.100.1.0 dev flannel.1 onlink 
10.100.2.0/24 dev cni0 proto kernel scope link src 10.100.2.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
[root@worker232 ~]# 


- 切换flannel的工作模式为"host-gw"
	1.修改配置文件
[root@master231 cni]# vim kube-flannel.yml
...
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }


	2.重新创建资源
[root@master231 cni]# kubectl delete -f kube-flannel.yml 
[root@master231 cni]# kubectl apply -f kube-flannel.yml 

	3.检查网络
[root@worker232 ~]# ip route 
default via 10.0.0.254 dev ens33 proto static metric 100 
10.0.0.0/24 dev ens33 proto kernel scope link src 10.0.0.232 metric 100 
10.100.0.0/24 via 10.0.0.231 dev ens33 
10.100.1.0/24 via 10.0.0.233 dev ens33 
10.100.2.0/24 dev cni0 proto kernel scope link src 10.100.2.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
[root@worker232 ~]# 
[root@worker232 ~]# 


- 切换flannel的工作模式为"DirectRouting"(推荐配置)
	1.修改配置文件
[root@master231 cni]# vim kube-flannel.yml
...
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true
      }
    }

	2.重新创建资源
[root@master231 cni]# kubectl delete -f kube-flannel.yml 
[root@master231 cni]# kubectl apply -f kube-flannel.yml 

	3.检查网络
[root@worker232 ~]# ip route 
default via 10.0.0.254 dev ens33 proto static metric 100 
10.0.0.0/24 dev ens33 proto kernel scope link src 10.0.0.232 metric 100 
10.100.0.0/24 via 10.0.0.231 dev ens33 
10.100.1.0/24 via 10.0.0.233 dev ens33 
10.100.2.0/24 dev cni0 proto kernel scope link src 10.100.2.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
[root@worker232 ~]# 

7. k8s二进制部署高可用集群(k8s-v1.28.3)

0)说明

部署参考笔记:
	https://www.cnblogs.com/supershy/p/17981419
目前生产环境部署kubernetes集群主要有两种方式:
	- kubeadm:
		kubeadm是一个K8S部署工具,提供kubeadm init和kubeadm join命令,用于快速部署kubernetes集群。
	- 二进制部署:
		从GitHub下载发行版的二进制包,手动部署每个组件,组成kubernetes集群。
		
		
除了上述介绍的两种方式部署外,还有其他部署方式的途径:
	- yum: 
		已废弃,目前支持的最新版本为2017年发行的1.5.2版本。
	- minikube:
		适合开发环境,能够快速在Windows或者Linux构建K8S集群。
		参考链接:
			https://minikube.sigs.k8s.io/docs/
	- rancher:
		在K8S基础上精简改进,推出了轻量级发行版K3S。
		参考链接:
			https://www.rancher.com/
	- KubeSphere:
		青云科技开源的容器平台,可基于kubekey快速部署并以图形化方式管理K8S集群。
		参考链接:
			https://kubesphere.com.cn
	- kuboard:
		也是对k8s进行二次开发的产品,新增了很多独有的功能。
		参考链接: 
			https://kuboard.cn/
			
	- kind:
		快速构建K8S集群,用于测试,学习非常方便。		
		
     - kubeasz:
                使用ansible部署,扩容,缩容kubernetes集群,安装步骤官方文档已经非常详细了。
                参考链接: 
                  https://github.com/easzlab/kubeasz/
			
	- 第三方云厂商:
		比如aws,阿里云,腾讯云,京东云等云厂商均有K8S的相关SAAS产品。

	- 更多的第三方部署工具:
		参考链接:
                    https://landscape.cncf.io/
	

1)二进制部署基本环境准备

1. 角色规划

| 主机名 | IP地址 | 角色划分 |
| --- | --- | --- |
| k8s-master01 | 10.0.0.241 | api-server,controller-manager,scheduler,etcd |
| k8s-master02 | 10.0.0.242 | api-server,controller-manager,scheduler,etcd |
| k8s-master03 | 10.0.0.243 | api-server,controller-manager,scheduler,etcd |
| k8s-worker04 | 10.0.0.244 | kubelet,kube-proxy |
| k8s-worker05 | 10.0.0.245 | kubelet,kube-proxy |
| apiserver-lb | 10.0.0.240 | apiserver的负载均衡器IP地址 |

2. 所有节点安装常用软件包

	1.所有节点CentOS 7安装yum源如下:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl  -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo



	2.所有节点安装常用的软件包
yum -y install bind-utils expect rsync wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate bash-completion

3. master01节点配置集群免密钥登陆

	1.设置主机名,各节点参考如下命令修改即可
hostnamectl set-hostname k8s-master01

	2.设置相应的主机名及hosts文件解析
cat >> /etc/hosts <<'EOF'
10.0.0.240 apiserver-lb
10.0.0.241 k8s-master01
10.0.0.242 k8s-master02
10.0.0.243 k8s-master03
10.0.0.244 k8s-worker04
10.0.0.245 k8s-worker05
EOF


	3.配置免密码登录其他节点
cat > password_free_login.sh <<'EOF'
#!/bin/bash
# label: 配置免密登录脚本

# 检查密钥对是否存在,如果存在则删除
if [ -f /root/.ssh/id_rsa ]; then
    rm -f /root/.ssh/id_rsa /root/.ssh/id_rsa.pub
fi

# 创建密钥对
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa -q

# 声明服务器密码,建议所有节点的密码一致
export mypasswd=666666

# 定义主机列表
k8s_host_list=(apiserver-lb k8s-master01 k8s-master02 k8s-master03 k8s-worker04 k8s-worker05)

# 安装expect,如果没有的话
if ! command -v expect &> /dev/null; then
    echo "Expect 工具未安装,正在安装..."
    yum install -y expect # 针对RHEL/CentOS,适当修改以适应你的发行版
fi

# 配置免密登录
for i in ${k8s_host_list[@]}; do
    expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
    expect {
        \"*yes/no*\" {send \"yes\r\"; exp_continue}
        \"*password*\" {send \"$mypasswd\r\"; exp_continue}
        \"*failed*\" {send_user \"Failed to connect to $i.\n\"; exit 1}
    }"
done
EOF
sh password_free_login.sh
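
	(可选)脚本执行完毕后,可以用一个简单的循环验证到各节点的免密登录是否全部打通。BatchMode=yes会禁止密码交互,未打通的节点会直接报错:
for i in apiserver-lb k8s-master01 k8s-master02 k8s-master03 k8s-worker04 k8s-worker05; do
    ssh -o BatchMode=yes root@$i hostname
done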



	4.编写同步脚本
cat > /usr/local/sbin/data_rsync.sh <<'EOF'
#!/bin/bash
# Author: Jason Yin

if  [ $# -ne 1 ];then
   echo "Usage: $0 /path/to/file(绝对路径)"
   exit
fi 

if [ ! -e $1 ];then
    echo "[ $1 ] dir or file not find!"
    exit
fi

fullpath=`dirname $1`

basename=`basename $1`

cd $fullpath

k8s_host_list=(k8s-master02 k8s-master03 k8s-worker04 k8s-worker05)

for host in ${k8s_host_list[@]};do
  tput setaf 2
    echo ===== rsyncing ${host}: $basename =====
    tput setaf 7
    rsync -az $basename  `whoami`@${host}:$fullpath
    if [ $? -eq 0 ];then
      echo "命令执行成功!"
    fi
done
EOF
chmod +x /usr/local/sbin/data_rsync.sh


	5.同步"/etc/hosts"文件到集群
data_rsync.sh /etc/hosts 

4. 所有节点linux基础环境优化

	1.所有节点关闭firewalld,selinux,NetworkManager,postfix
systemctl disable --now NetworkManager firewalld postfix
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config


	2.所有节点关闭swap分区,fstab注释swap
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
free -h


	3.所有节点同步时间
		- 手动同步时区和时间
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate ntp.aliyun.com

		- 定期任务同步(也可以使用"crontab -e"手动编辑,但我更推荐我下面的做法,可以非交互)
echo "*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com" > /var/spool/cron/root
crontab -l

	4.所有节点配置limit
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
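
温馨提示:
	limits.conf只对新登录的会话生效,重新登录后可用下面的命令粗略确认(输出应与上面配置的数值一致):
ulimit -n    # 查看打开文件数限制
ulimit -u    # 查看最大进程数限制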


	5.所有节点优化sshd服务
sed -i 's@#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -i 's@^GSSAPIAuthentication yes@GSSAPIAuthentication no@g' /etc/ssh/sshd_config

		- UseDNS选项:
	打开状态下,当客户端试图登录SSH服务器时,服务器端会先根据客户端的IP地址做DNS PTR反向查询得到客户端的主机名,再对该主机名做DNS正向A记录查询,验证其与原始IP地址是否一致,这是防止客户端欺骗的一种措施;但一般我们用的是动态IP,不会有PTR记录,打开这个选项只是白白浪费时间,不如将其关闭。

		- GSSAPIAuthentication:
	当这个参数开启(GSSAPIAuthentication yes)时,通过SSH登录服务器会变得很慢。这是由于服务器端启用了GSSAPI,登录时客户端需要对服务器端的IP地址做反向解析,如果该IP没有配置PTR记录,就容易卡在这一步。
	
	
	
	6.Linux内核调优
cat > /etc/sysctl.d/k8s.conf <<'EOF'
# 以下3个参数是containerd所依赖的内核参数
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system



	7.修改终端颜色
cat <<EOF >>  ~/.bashrc 
PS1='[\[\e[34;1m\]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\] \W\[\e[0m\]]# '
EOF
source ~/.bashrc

	8 修改ens33网卡名称为eth0【选做,建议修改】
        8.1 修改配置文件
vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"

        8.2 用grub2-mkconfig重新生成配置。
grub2-mkconfig -o /boot/grub2/grub.cfg
	
        8.3 修改网卡配置
mv /etc/sysconfig/network-scripts/ifcfg-{ens33,eth0}
sed -i 's#ens33#eth0#g' /etc/sysconfig/network-scripts/ifcfg-eth0
cat /etc/sysconfig/network-scripts/ifcfg-eth0 

	
    8.4 重启操作系统即可
reboot 

温馨提示:
	如果网络无法正常启动,则可以检查配置文件中是否已将ens33替换为eth0,尤其不要忘记写"DEVICE"字段哟。

5. 所有节点linux内核升级更新系统

    1.k8s-master01节点下载并安装内核软件包
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

	2.k8s-master01节点将下载的软件包同步到其他节点
data_rsync.sh kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
data_rsync.sh kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm

	3.所有节点执行安装升级Linux内核命令
yum -y localinstall kernel-ml*

	4.更改内核启动顺序
grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel

	5.所有节点更新软件版本,但不需要更新内核,因为我内核已经更新到了指定的版本
yum -y update --exclude=kernel* 

6. 所有节点安装ipvsadm实现kube-proxy的负载均衡

	1.所有节点安装ipvsadm等相关工具
yum -y install ipvsadm ipset sysstat conntrack libseccomp 

	2.所有节点创建要开机自动加载的模块配置文件
cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

	3.重启操作系统
reboot

	4.验证加载的模块
lsmod | grep --color=auto -e ip_vs -e nf_conntrack


温馨提示:
	Linux kernel 4.19+版本已经将之前的"nf_conntrack_ipv4"模块更名为"nf_conntrack"模块哟~

2)安装k8s相关基础组件

1. 所有节点安装containerd

	#1 所有节点安装containerd组件
yum -y install containerd.io


温馨提示:
	其实我们只需要安装containerd组件即可,但是安装docker也无妨,管理起来反而更方便。
    如果你想要安装docker的话,我推荐使用"docker-ce-20.10.24 docker-ce-cli-20.10.24"版本。



	#2 配置containerd需要的模块
#2.1 临时手动加载模块
modprobe -- overlay
modprobe -- br_netfilter
	
#2.2 开机自动加载所需的内核模块
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF



	#3 修改containerd的配置文件
#3.1 重新初始化containerd的配置文件
containerd config default | tee /etc/containerd/config.toml 

	#4.修改Cgroup的管理者为systemd组件
sed -ri 's#(SystemdCgroup = )false#\1true#' /etc/containerd/config.toml 
grep SystemdCgroup /etc/containerd/config.toml

	#5.修改pause的基础镜像名称
sed -i 's#registry.k8s.io/pause:3.6#registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7#' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml



	#6 所有节点启动containerd
#6.1 启动containerd服务
systemctl daemon-reload
systemctl enable --now containerd

#6.2 配置crictl客户端连接的运行时位置
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
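
#6.2.1 (可选)确认crictl能正常连接containerd,若连接正常,输出中的RuntimeName应为containerd
crictl version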

#6.3 查看containerd的版本
[root@k8s-master01 ~]# ctr version
Client:
  Version:  1.6.27
  Revision: a1496014c916f9e62104b33d1bb5bd03b0858e59
  Go version: go1.20.13

Server:
  Version:  1.6.27
  Revision: a1496014c916f9e62104b33d1bb5bd03b0858e59
  UUID: 4a5766bc-691f-49be-9182-b467ed31e330
[root@k8s-master01 ~]# 

2. 安装etcd组件(独立集群)

	# 1.下载etcd软件包
wget https://github.com/etcd-io/etcd/releases/download/v3.5.10/etcd-v3.5.10-linux-amd64.tar.gz


	# 2.解压etcd的二进制程序包到PATH环境变量路径
#2.1 解压软件包
tar -xf etcd-v3.5.10-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.10-linux-amd64/etcd{,ctl}

#2.2 查看etcd版本
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.10
API version: 3.5
[root@k8s-master01 ~]# 


	# 3.将软件包下发到所有节点
[root@k8s-master01 ~]# ETCD_NODES='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# for NODE in $ETCD_NODES; do echo $NODE; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done

3. 安装k8s组件

	# 1.下载k8s二进制软件版
wget https://dl.k8s.io/v1.28.3/kubernetes-server-linux-amd64.tar.gz

# wget https://dl.k8s.io/v1.28.6/kubernetes-server-linux-amd64.tar.gz


	# 2.解压K8S的二进制程序包到PATH环境变量路径
#2.1 解压软件包
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}


2.2 查看kubelet的版本
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.28.3
[root@k8s-master01 ~]# 


	3 将软件包下发到所有节点
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-worker04 k8s-worker05'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do echo $NODE; scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done

3)生成k8s和etcd证书

1. 安装cfssl证书管理工具

github下载地址:
	https://github.com/cloudflare/cfssl

温馨提示:
	生成K8S和etcd证书这一步骤很关键,我建议各位在做实验前先对K8S集群的所有节点拍一下快照,以便实验失败时方便回滚。
	cfssl工具可以自行在github下载,当然也可以使用我课堂上给大家下载好的软件包哟。

具体操作如下:
	1.解压压缩包
[root@k8s-master01 ~]# unzip supershy-cfssl.zip 

	2.重命名cfssl的版本号信息
[root@k8s-master01 ~]# rename _1.6.4_linux_amd64 "" *

	3.将cfssl证书拷贝到环境变量并授权执行权限
[root@k8s-master01 ~]# mv cfssl* /usr/local/bin/
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl*
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# ll /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 12054528 Aug 30 15:46 /usr/local/bin/cfssl
-rwxr-xr-x 1 root root  9560064 Aug 30 15:45 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x 1 root root  7643136 Aug 30 15:48 /usr/local/bin/cfssljson
[root@k8s-master01 ~]# 

2. 生成etcd证书

	1 k8s-master01节点创建etcd证书存储目录
[root@k8s-master01 ~]# mkdir -pv /jiaxing/certs/{etcd,pki}/ && cd /jiaxing/certs/pki/


	2 k8s-master01节点生成etcd证书的自建ca证书
2.1 生成证书的CSR文件: 证书签发请求文件,配置了一些域名,公司,单位
[root@k8s-master01 pki]# ll
total 4
-rw-r--r-- 1 root root 249 Jan 24 14:45 etcd-ca-csr.json
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# cat >etcd-ca-csr.json <<'EOF'
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
[root@k8s-master01 pki]# 


2.2 生成etcd CA证书和CA证书的key
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /jiaxing/certs/etcd/etcd-ca

2024/01/24 14:47:13 [INFO] generating a new CA key and certificate from CSR
2024/01/24 14:47:13 [INFO] generate received request
2024/01/24 14:47:13 [INFO] received CSR
2024/01/24 14:47:13 [INFO] generating key: rsa-2048
2024/01/24 14:47:13 [INFO] encoded CSR
2024/01/24 14:47:13 [INFO] signed certificate with serial number 628917585143372415551937887574457680757382454454
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /jiaxing/certs/etcd/
total 12
-rw-r--r-- 1 root root 1050 Jan 24 14:47 etcd-ca.csr
-rw------- 1 root root 1675 Jan 24 14:47 etcd-ca-key.pem
-rw-r--r-- 1 root root 1318 Jan 24 14:47 etcd-ca.pem
[root@k8s-master01 pki]# 


	3 k8s-master01节点基于自建ca证书颁发etcd证书
3.1 生成etcd证书的有效期为100年
[root@k8s-master01 pki]# cat >ca-config.json <<'EOF'
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
[root@k8s-master01 pki]# 


3.2 生成证书的CSR文件: 证书签发请求文件,配置了一些域名,公司,单位
[root@k8s-master01 pki]# cat >etcd-csr.json <<'EOF'
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
[root@k8s-master01 pki]# 


3.3 基于自建的etcd CA证书生成etcd的证书
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/jiaxing/certs/etcd/etcd-ca.pem \
  -ca-key=/jiaxing/certs/etcd/etcd-ca-key.pem \
  -config=ca-config.json \
  --hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.0.0.241,10.0.0.242,10.0.0.243 \
  --profile=kubernetes \
  etcd-csr.json  | cfssljson -bare /jiaxing/certs/etcd/etcd-server
  
[root@k8s-master01 pki]# ll /jiaxing/certs/etcd/
total 24
-rw-r--r-- 1 root root 1050 Jan 24 14:47 etcd-ca.csr
-rw------- 1 root root 1675 Jan 24 14:47 etcd-ca-key.pem
-rw-r--r-- 1 root root 1318 Jan 24 14:47 etcd-ca.pem
-rw-r--r-- 1 root root 1131 Jan 24 14:52 etcd-server.csr
-rw------- 1 root root 1675 Jan 24 14:52 etcd-server-key.pem
-rw-r--r-- 1 root root 1464 Jan 24 14:52 etcd-server.pem
[root@k8s-master01 pki]# 
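
温馨提示:
	证书签发后,建议用前面安装的cfssl-certinfo确认证书的sans字段里确实包含了三台master的主机名与IP,避免etcd节点互联时报证书校验错误:
[root@k8s-master01 pki]# cfssl-certinfo -cert /jiaxing/certs/etcd/etcd-server.pem | grep -A 10 sans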



	4 k8s-master01节点将etcd证书拷贝到其他两个master节点
MasterNodes='k8s-master02 k8s-master03'

for NODE in $MasterNodes; do
     echo $NODE; ssh $NODE "mkdir -pv /jiaxing/certs/etcd/"
     for FILE in etcd-ca-key.pem etcd-ca.pem etcd-server-key.pem etcd-server.pem; do
       scp /jiaxing/certs/etcd/${FILE} $NODE:/jiaxing/certs/etcd/${FILE}
     done
 done

3. 生成k8s组件相关证书

	1 所有节点创建k8s证书存储目录
[root@k8s-master01 pki]# mkdir -pv /jiaxing/certs/kubernetes/


	2 k8s-master01节点生成kubernetes自建ca证书
2.1 生成证书的CSR文件: 证书签发请求文件,配置了一些域名,公司,单位
[root@k8s-master01 pki]# cat >k8s-ca-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
[root@k8s-master01 pki]# 



2.2 生成kubernetes证书
[root@k8s-master01 pki]# cfssl gencert -initca k8s-ca-csr.json | cfssljson -bare /jiaxing/certs/kubernetes/k8s-ca
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/
total 12
-rw-r--r-- 1 root root 1070 Jan 24 15:00 k8s-ca.csr
-rw------- 1 root root 1675 Jan 24 15:00 k8s-ca-key.pem
-rw-r--r-- 1 root root 1363 Jan 24 15:00 k8s-ca.pem
[root@k8s-master01 pki]# 



	3 k8s-master01节点基于自建ca证书颁发apiserver相关证书
3.1 生成k8s证书的有效期为100年
[root@k8s-master01 pki]# cat >k8s-ca-config.json<<'EOF'
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
[root@k8s-master01 pki]# 

3.2 生成apiserver证书的CSR文件: 证书签发请求文件,配置了一些域名,公司,单位
[root@k8s-master01 pki]# cat >apiserver-csr.json <<'EOF'
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
[root@k8s-master01 pki]# 
	
3.3 基于自建ca证书生成apiServer的证书文件
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/jiaxing/certs/kubernetes/k8s-ca.pem \
  -ca-key=/jiaxing/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  --hostname=10.200.0.1,10.0.0.240,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.jiaxing,kubernetes.default.svc.jiaxing.com,10.0.0.241,10.0.0.242,10.0.0.243,10.0.0.244,10.0.0.245 \
  --profile=kubernetes \
   apiserver-csr.json  | cfssljson -bare /jiaxing/certs/kubernetes/apiserver
   
[root@k8s-master01 pki]#  ll /jiaxing/certs/kubernetes/apiserver*
-rw-r--r-- 1 root root 1310 Jan 24 15:06 /jiaxing/certs/kubernetes/apiserver.csr
-rw------- 1 root root 1679 Jan 24 15:06 /jiaxing/certs/kubernetes/apiserver-key.pem
-rw-r--r-- 1 root root 1704 Jan 24 15:06 /jiaxing/certs/kubernetes/apiserver.pem
[root@k8s-master01 pki]# 



  

温馨提示:
	"10.200.0.1"为咱们的svc网段的第一个地址,您需要根据自己的场景稍作修改。
	"10.0.0.240"是负载均衡器的VIP地址。
	"kubernetes,...,kubernetes.default.svc.supershy.com"对应的是apiServer解析的A记录。
	"10.0.0.241,...,10.0.0.245"对应的是K8S集群的地址。



4 生成第三方组件与apiServer通信的聚合证书
聚合证书的作用就是让第三方组件(比如metrics-server等)能够拿这个证书文件和apiServer进行通信。

4.1 生成聚合证书的用于自建ca的CSR文件
[root@k8s-master01 pki]# cat >front-proxy-ca-csr.json <<'EOF'
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
[root@k8s-master01 pki]# 


4.2 生成聚合证书的自建ca证书
[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /jiaxing/certs/kubernetes/front-proxy-ca
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/front-proxy-ca*
-rw-r--r-- 1 root root  891 Jan 24 15:09 /jiaxing/certs/kubernetes/front-proxy-ca.csr
-rw------- 1 root root 1679 Jan 24 15:09 /jiaxing/certs/kubernetes/front-proxy-ca-key.pem
-rw-r--r-- 1 root root 1094 Jan 24 15:09 /jiaxing/certs/kubernetes/front-proxy-ca.pem
[root@k8s-master01 pki]# 


4.3 生成聚合证书的用于客户端的CSR文件
[root@k8s-master01 pki]# cat >front-proxy-client-csr.json<<'EOF'
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
[root@k8s-master01 pki]# 


4.4 基于聚合证书的自建ca证书签发聚合证书的客户端证书
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/jiaxing/certs/kubernetes/front-proxy-ca.pem \
  -ca-key=/jiaxing/certs/kubernetes/front-proxy-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  front-proxy-client-csr.json | cfssljson -bare /jiaxing/certs/kubernetes/front-proxy-client
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/front-proxy-client*
-rw-r--r-- 1 root root  903 Jan 24 15:10 /jiaxing/certs/kubernetes/front-proxy-client.csr
-rw------- 1 root root 1679 Jan 24 15:10 /jiaxing/certs/kubernetes/front-proxy-client-key.pem
-rw-r--r-- 1 root root 1188 Jan 24 15:10 /jiaxing/certs/kubernetes/front-proxy-client.pem
[root@k8s-master01 pki]# 




	5 生成controller-manager证书及kubeconfig文件
5.1 生成controller-manager的CSR文件
[root@k8s-master01 pki]# cat >controller-manager-csr.json <<'EOF'
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
[root@k8s-master01 pki]# 


5.2 生成controller-manager证书文件
[root@k8s-master01 pki]#  cfssl gencert \
  -ca=/jiaxing/certs/kubernetes/k8s-ca.pem \
  -ca-key=/jiaxing/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  controller-manager-csr.json | cfssljson -bare /jiaxing/certs/kubernetes/controller-manager
  
[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/controller-manager*
-rw-r--r-- 1 root root 1082 Jan 24 15:13 /jiaxing/certs/kubernetes/controller-manager.csr
-rw------- 1 root root 1679 Jan 24 15:13 /jiaxing/certs/kubernetes/controller-manager-key.pem
-rw-r--r-- 1 root root 1501 Jan 24 15:13 /jiaxing/certs/kubernetes/controller-manager.pem
[root@k8s-master01 pki]# 


5.3 创建一个kubeconfig目录
[root@k8s-master01 pki]# mkdir -pv /jiaxing/certs/kubeconfig

5.4 设置一个集群 
[root@k8s-master01 pki]# kubectl config set-cluster jiaxing-k8s \
  --certificate-authority=/jiaxing/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-controller-manager.kubeconfig
  
5.5 设置一个用户项
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/jiaxing/certs/kubernetes/controller-manager.pem \
  --client-key=/jiaxing/certs/kubernetes/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-controller-manager.kubeconfig
  
5.6 设置一个上下文环境
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=jiaxing-k8s \
  --user=system:kube-controller-manager \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-controller-manager.kubeconfig
  
5.7 使用默认的上下文
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-controller-manager.kubeconfig
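
5.8 (可选)用kubectl config view确认kubeconfig中的cluster、user、context均已写入,后面scheduler与admin的kubeconfig同理
[root@k8s-master01 pki]# kubectl config view --kubeconfig=/jiaxing/certs/kubeconfig/kube-controller-manager.kubeconfig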
  



6 生成scheduler证书及kubeconfig文件
6.1 生成scheduler的CSR文件
[root@k8s-master01 pki]# cat >scheduler-csr.json <<'EOF'
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
[root@k8s-master01 pki]# 

6.2 生成scheduler证书文件
[root@k8s-master01 pki]#  cfssl gencert \
  -ca=/jiaxing/certs/kubernetes/k8s-ca.pem \
  -ca-key=/jiaxing/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /jiaxing/certs/kubernetes/scheduler

[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/scheduler*
-rw-r--r-- 1 root root 1058 Jan 24 15:21 /jiaxing/certs/kubernetes/scheduler.csr
-rw------- 1 root root 1679 Jan 24 15:21 /jiaxing/certs/kubernetes/scheduler-key.pem
-rw-r--r-- 1 root root 1476 Jan 24 15:21 /jiaxing/certs/kubernetes/scheduler.pem
[root@k8s-master01 pki]# 


6.3 设置一个集群
[root@k8s-master01 pki]# kubectl config set-cluster jiaxing-k8s \
  --certificate-authority=/jiaxing/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-scheduler.kubeconfig
  
  
6.4 设置一个用户项
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/jiaxing/certs/kubernetes/scheduler.pem \
  --client-key=/jiaxing/certs/kubernetes/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-scheduler.kubeconfig
  
6.5 设置一个上下文环境
[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=jiaxing-k8s \
  --user=system:kube-scheduler \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-scheduler.kubeconfig
  
 6.6 使用默认的上下文
[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-scheduler.kubeconfig




7 配置k8s集群管理员证书及kubeconfig文件
7.1 生成管理员的CSR文件
[root@k8s-master01 pki]# cat >admin-csr.json <<'EOF'
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
[root@k8s-master01 pki]# 

7.2 生成k8s集群管理员证书
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/jiaxing/certs/kubernetes/k8s-ca.pem \
  -ca-key=/jiaxing/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /jiaxing/certs/kubernetes/admin
  
[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/admin*
-rw-r--r-- 1 root root 1025 Jan 24 15:26 /jiaxing/certs/kubernetes/admin.csr
-rw------- 1 root root 1675 Jan 24 15:26 /jiaxing/certs/kubernetes/admin-key.pem
-rw-r--r-- 1 root root 1444 Jan 24 15:26 /jiaxing/certs/kubernetes/admin.pem
[root@k8s-master01 pki]# 

 

7.3 设置一个集群
[root@k8s-master01 pki]#  kubectl config set-cluster jiaxing-k8s \
  --certificate-authority=/jiaxing/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-admin.kubeconfig
  
  
7.4 设置一个用户项
[root@k8s-master01 pki]#  kubectl config set-credentials kube-admin \
  --client-certificate=/jiaxing/certs/kubernetes/admin.pem \
  --client-key=/jiaxing/certs/kubernetes/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-admin.kubeconfig
  
7.5 设置一个上下文环境
[root@k8s-master01 pki]#  kubectl config set-context kube-admin@kubernetes \
  --cluster=jiaxing-k8s \
  --user=kube-admin \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-admin.kubeconfig
  
7.6 使用默认的上下文
[root@k8s-master01 pki]#  kubectl config use-context kube-admin@kubernetes \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-admin.kubeconfig



	8.创建ServiceAccount
8.1 ServiceAccount是k8s的一种认证方式,创建ServiceAccount时会创建一个与之绑定的secret,secret中会生成一个token
[root@k8s-master01 pki]# openssl genrsa -out /jiaxing/certs/kubernetes/sa.key 2048


8.2 基于sa.key创建sa.pub
[root@k8s-master01 pki]# openssl rsa -in /jiaxing/certs/kubernetes/sa.key -pubout -out /jiaxing/certs/kubernetes/sa.pub
[root@k8s-master01 pki]# 
[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/sa*
-rw-r--r-- 1 root root 1675 Jan 24 15:32 /jiaxing/certs/kubernetes/sa.key
-rw-r--r-- 1 root root  451 Jan 24 15:32 /jiaxing/certs/kubernetes/sa.pub
[root@k8s-master01 pki]# 



	9 k8s-master01节点将k8s证书及kubeconfig文件拷贝到其他两个master节点
	1.k8s-master01节点将kubernetes相关证书及kubeconfig文件拷贝到其他两个master节点
[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do 
  	 echo $NODE; ssh $NODE "mkdir -pv /jiaxing/certs/{kubernetes,kubeconfig}"
	 for FILE in $(ls /jiaxing/certs/kubernetes); do 
		scp /jiaxing/certs/kubernetes/${FILE} $NODE:/jiaxing/certs/kubernetes/${FILE};
	 done; 
	 for FILE in kube-admin.kubeconfig  kube-controller-manager.kubeconfig  kube-scheduler.kubeconfig; do 
		scp /jiaxing/certs/kubeconfig/${FILE} $NODE:/jiaxing/certs/kubeconfig/${FILE};
	 done;
done


	2.其他两个节点验证文件数量是否正确
[root@k8s-master02 ~]# ls /jiaxing/certs/kubernetes  | wc -l
23
[root@k8s-master02 ~]# 

[root@k8s-master03 ~]# ls /jiaxing/certs/kubernetes  | wc -l
23
[root@k8s-master03 ~]# 

4)部署k8s集群高可用

1. 安装高可用组件haproxy+keepalived

	1 所有master节点安装高可用组件
温馨提示:
	- 对于高可用组件,其实我们也可以单独找两台虚拟机来部署,但我为了节省2台机器,就直接在master节点复用了。
	- 如果在云上安装K8S则无需安装高可用组件了,毕竟公有云大部分都是不支持keepalived的,可以直接使用云产品,比如阿里的"SLB",腾讯的"ELB"等SAAS产品;
	- 推荐使用ELB,SLB有回环的问题,也就是SLB代理的服务器不能反向访问SLB,但是腾讯云修复了这个问题;


具体实操:
	yum -y install keepalived haproxy 




	2 所有master节点配置haproxy
温馨提示:
	- haproxy的负载均衡器监听地址我配置是8443,你可以修改为其他端口,haproxy会用来反向代理各个master组件的地址;
	- 如果你真的修改请一定注意上面的证书配置的kubeconfig文件,也要一起修改,否则就会出现连接集群失败的问题;
	
	
具体实操:
2.1 备份配置文件
cp /etc/haproxy/haproxy.cfg{,`date +%F`}


2.2 所有节点的配置文件内容相同
cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-haproxy
  bind *:33305
  mode http
  option httplog
  monitor-uri /ayouok

frontend jiaxing-k8s
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend jiaxing-k8s

backend jiaxing-k8s
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01   10.0.0.241:6443  check
  server k8s-master02   10.0.0.242:6443  check
  server k8s-master03   10.0.0.243:6443  check
EOF
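
2.3 (可选)启动前可以先用haproxy自带的检查参数确认配置文件语法无误:
haproxy -c -f /etc/haproxy/haproxy.cfg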



	3 所有master节点配置keepalived
温馨提示:
	- 注意"interface"字段为你的物理网卡的名称,如果你的网卡是eth0,请将"ens33"修改为"eth0"哟;
	- 注意"mcast_src_ip"各master节点的配置均不相同,修改根据实际环境进行修改哟;
	- 注意"virtual_ipaddress"指定的是负载均衡器的VIP地址,这个地址也要和kubeconfig文件的Apiserver地址要一致哟;
	- 注意"script"字段的脚本用于检测后端的apiServer是否健康;
	
	
具体实操:
3.1 备份配置文件
cp /etc/keepalived/keepalived.conf{,`date +%F`}

3.2 "k8s-master01"节点创建配置文件
[root@k8s-master01 ~]# ifconfig 
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.241  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0c:29:32:73:ac  txqueuelen 1000  (Ethernet)
        RX packets 324292  bytes 234183010 (223.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 242256  bytes 31242156 (29.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

[root@k8s-master01 ~]# 
[root@k8s-master01 ~]#  cat > /etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.241
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.241
    nopreempt
    authentication {
        auth_type PASS
        auth_pass supershy_k8s
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF


3.3 "k8s-master02"节点创建配置文件
[root@k8s-master02 ~]# ifconfig 
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.242  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0c:29:cf:ad:0a  txqueuelen 1000  (Ethernet)
        RX packets 256743  bytes 42628265 (40.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 252589  bytes 34277384 (32.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

[root@k8s-master02 ~]# 
[root@k8s-master02 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.242
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.242
    nopreempt
    authentication {
        auth_type PASS
        auth_pass supershy_k8s
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF

3.4 "k8s-master03"节点创建配置文件
[root@k8s-master03 ~]# ifconfig 
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.243  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0c:29:5f:f7:4f  txqueuelen 1000  (Ethernet)
        RX packets 178577  bytes 34808750 (33.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 171025  bytes 26471309 (25.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

...

[root@k8s-master03 ~]# 
[root@k8s-master03 ~]# cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.243
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.243
    nopreempt
    authentication {
        auth_type PASS
        auth_pass supershy_k8s
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF

3.5 所有keepalived节点均需要创建健康检查脚本
cat > /etc/keepalived/check_port.sh  <<'EOF'
#!/bin/bash

CHK_PORT=$1

if [ -n "$CHK_PORT" ];then
  PORT_PROCESS=`ss -ntl|grep $CHK_PORT|wc -l`

  if [ $PORT_PROCESS -eq 0 ];then
     echo "Port $CHK_PORT Is Not Used,Haproxy not runing... will stop keepalived."
     /usr/bin/systemctl stop keepalived
  else
     echo "Haproxy is runing ..."
  fi

else
  echo "Check Port Cant Be Empty!"
  /usr/bin/systemctl stop keepalived

fi
EOF
chmod +x /etc/keepalived/check_port.sh
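
3.6 (可选)可以先手动执行一次健康检查脚本验证其逻辑:haproxy尚未启动时脚本会尝试停止keepalived,haproxy启动后再执行则会输出"Haproxy is running ...":
sh /etc/keepalived/check_port.sh 8443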


温馨提示:
	router_id:
		节点ip,master每个节点配置自己的IP
	mcast_src_ip:
		节点IP,master每个节点配置自己的IP
	virtual_ipaddress:
		虚拟IP,即VIP。
	interface:
		服务器的IP接口名称。需要根据的自己环境的实际情况而修改哟!





	4 启动keepalived服务并验证
4.1 所有节点启动keepalived服务
systemctl daemon-reload
systemctl enable --now keepalived
systemctl status keepalived


4.2 验证服务是否正常
[root@k8s-master03 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:5f:f7:4f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.243/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.240/32 scope global eth0
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
[root@k8s-master03 ~]# 
[root@k8s-master03 ~]# ping 10.0.0.240
PING 10.0.0.240 (10.0.0.240) 56(84) bytes of data.
64 bytes from 10.0.0.240: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 10.0.0.240: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 10.0.0.240: icmp_seq=3 ttl=64 time=0.023 ms
...


4.3 单独开一个终端尝试停止keepalived服务
[root@k8s-master03 ~]# systemctl stop keepalived


4.4 再次观察终端输出
[root@k8s-master03 ~]# ping 10.0.0.240
PING 10.0.0.240 (10.0.0.240) 56(84) bytes of data.
64 bytes from 10.0.0.240: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 10.0.0.240: icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from 10.0.0.240: icmp_seq=3 ttl=64 time=0.023 ms
...
64 bytes from 10.0.0.240: icmp_seq=36 ttl=64 time=0.037 ms
64 bytes from 10.0.0.240: icmp_seq=37 ttl=64 time=0.023 ms
From 10.0.0.242: icmp_seq=38 Redirect Host(New nexthop: 10.0.0.240)
From 10.0.0.242: icmp_seq=39 Redirect Host(New nexthop: 10.0.0.240)
64 bytes from 10.0.0.240: icmp_seq=40 ttl=64 time=1.81 ms
64 bytes from 10.0.0.240: icmp_seq=41 ttl=64 time=0.680 ms
64 bytes from 10.0.0.240: icmp_seq=42 ttl=64 time=0.751 ms


4.5 验证VIP是否漂移到其他节点,果不其然,真的漂移到其他master节点啦!
[root@k8s-master02 ~]# ip a
...
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:cf:ad:0a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.242/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.240/32 scope global ens33
       valid_lft forever preferred_lft forever
...
[root@k8s-master02 ~]# 



4.6 再次启动keepalived,启动成功后会发现VIP又自动漂移了回来(抢占式)。
[root@k8s-master03 ~]# systemctl start keepalived
[root@k8s-master03 ~]# 
[root@k8s-master03 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:71:27:66 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.243/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.240/32 scope global ens33
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
[root@k8s-master03 ~]# 




	5 启动haproxy服务并验证
5.1.所有master节点启动haproxy服务
systemctl enable --now haproxy 
systemctl status haproxy 

	2.基于telnet验证haproxy是否正常
[root@k8s-worker05 ~]# telnet 10.0.0.240 8443
Trying 10.0.0.240...
Connected to 10.0.0.240.
Escape character is '^]'.
Connection closed by foreign host.
[root@k8s-worker05 ~]#

	3.基于webUI进行验证
[root@k8s-worker05 ~]# curl http://10.0.0.240:33305/ayouok
<html><body><h1>200 OK</h1>
Service ready.
</body></html>
[root@k8s-worker05 ~]#

2. 启动etcd集群

	1 创建etcd集群各节点配置文件
1.1 k8s-master01节点的配置文件
[root@k8s-master01 ~]# mkdir -pv /jiaxing/{softwares,data}/etcd/
[root@k8s-master01 ~]# cat > /jiaxing/softwares/etcd/etcd.config.yml <<'EOF'
name: 'k8s-master01'
data-dir: /jiaxing/data/etcd
wal-dir: /jiaxing/data/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.241:2380'
listen-client-urls: 'https://10.0.0.241:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.241:2380'
advertise-client-urls: 'https://10.0.0.241:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.241:2380,k8s-master02=https://10.0.0.242:2380,k8s-master03=https://10.0.0.243:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/jiaxing/certs/etcd/etcd-server.pem'
  key-file: '/jiaxing/certs/etcd/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/jiaxing/certs/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/jiaxing/certs/etcd/etcd-server.pem'
  key-file: '/jiaxing/certs/etcd/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/jiaxing/certs/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF


	2.k8s-master02节点的配置文件
[root@k8s-master02 ~]# mkdir -pv /jiaxing/{softwares,data}/etcd/
[root@k8s-master02 ~]# cat > /jiaxing/softwares/etcd/etcd.config.yml << 'EOF'
name: 'k8s-master02'
data-dir: /jiaxing/data/etcd
wal-dir: /jiaxing/data/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.242:2380'
listen-client-urls: 'https://10.0.0.242:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.242:2380'
advertise-client-urls: 'https://10.0.0.242:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.241:2380,k8s-master02=https://10.0.0.242:2380,k8s-master03=https://10.0.0.243:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/jiaxing/certs/etcd/etcd-server.pem'
  key-file: '/jiaxing/certs/etcd/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/jiaxing/certs/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/jiaxing/certs/etcd/etcd-server.pem'
  key-file: '/jiaxing/certs/etcd/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/jiaxing/certs/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF


	3.k8s-master03节点的配置文件
[root@k8s-master03 ~]# mkdir -pv /jiaxing/{softwares,data}/etcd/
[root@k8s-master03 ~]# cat > /jiaxing/softwares/etcd/etcd.config.yml << 'EOF'
name: 'k8s-master03'
data-dir: /jiaxing/data/etcd
wal-dir: /jiaxing/data/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.0.0.243:2380'
listen-client-urls: 'https://10.0.0.243:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.0.0.243:2380'
advertise-client-urls: 'https://10.0.0.243:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://10.0.0.241:2380,k8s-master02=https://10.0.0.242:2380,k8s-master03=https://10.0.0.243:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/jiaxing/certs/etcd/etcd-server.pem'
  key-file: '/jiaxing/certs/etcd/etcd-server-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/jiaxing/certs/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/jiaxing/certs/etcd/etcd-server.pem'
  key-file: '/jiaxing/certs/etcd/etcd-server-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/jiaxing/certs/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF






	2 所有master节点编写etcd的systemd启动单元文件
cat > /usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/jiaxing/softwares/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF




	3 启动etcd集群
3.1 启动服务
systemctl daemon-reload && systemctl enable --now etcd
systemctl status etcd

3.2 查看etcd集群状态
etcdctl --endpoints="10.0.0.241:2379,10.0.0.242:2379,10.0.0.243:2379" --cacert=/jiaxing/certs/etcd/etcd-ca.pem --cert=/jiaxing/certs/etcd/etcd-server.pem --key=/jiaxing/certs/etcd/etcd-server-key.pem  endpoint status --write-out=table
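
3.3 (可选)也可以顺手检查一下各endpoint的健康状态:
etcdctl --endpoints="10.0.0.241:2379,10.0.0.242:2379,10.0.0.243:2379" --cacert=/jiaxing/certs/etcd/etcd-ca.pem --cert=/jiaxing/certs/etcd/etcd-server.pem --key=/jiaxing/certs/etcd/etcd-server-key.pem endpoint health --write-out=table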

3. etcd集群的扩缩容

	1.为etcdctl添加别名
[root@k8s-master01 ~]# vim ~/.bashrc 
...
alias etcdctl='etcdctl --endpoints="10.0.0.241:2379,10.0.0.242:2379,10.0.0.243:2379" --cacert=/jiaxing/certs/etcd/etcd-ca.pem --cert=/jiaxing/certs/etcd/etcd-server.pem --key=/jiaxing/certs/etcd/etcd-server-key.pem'

[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# source ~/.bashrc 
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl  endpoint status
10.0.0.241:2379, 566d563f3c9274ed, 3.5.10, 20 kB, true, false, 3, 12, 12, 
10.0.0.242:2379, b83b69ba7d246b29, 3.5.10, 20 kB, true, false, 3, 12, 12, 
10.0.0.243:2379, 47b70f9ecb1f200, 3.5.10, 20 kB, false, false, 3, 12, 12, 
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl  endpoint status  -w table
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.0.0.241:2379 | 566d563f3c9274ed |  3.5.10 |   20 kB |     false |      false |         3 |         11 |                 11 |        |
| 10.0.0.242:2379 | b83b69ba7d246b29 |  3.5.10 |   20 kB |      true |      false |         3 |         11 |                 11 |        |
| 10.0.0.243:2379 |  47b70f9ecb1f200 |  3.5.10 |   20 kB |     false |      false |         3 |         11 |                 11 |        |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 ~]# 


	2.查看集群的成员信息
[root@k8s-master01 ~]# etcdctl member list -w table
+------------------+---------+--------------+-------------------------+-------------------------+------------+
|        ID        | STATUS  |     NAME     |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
+------------------+---------+--------------+-------------------------+-------------------------+------------+
|  47b70f9ecb1f200 | started | k8s-master03 | https://10.0.0.243:2380 | https://10.0.0.243:2379 |      false |
| 566d563f3c9274ed | started | k8s-master01 | https://10.0.0.241:2380 | https://10.0.0.241:2379 |      false |
| b83b69ba7d246b29 | started | k8s-master02 | https://10.0.0.242:2380 | https://10.0.0.242:2379 |      false |
+------------------+---------+--------------+-------------------------+-------------------------+------------+
[root@k8s-master01 ~]# 



	3.剔除etcd成员,删除节点
[root@k8s-master01 ~]# etcdctl member remove 566d563f3c9274ed
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl member list -w table
+------------------+---------+--------------+-------------------------+-------------------------+------------+
|        ID        | STATUS  |     NAME     |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
+------------------+---------+--------------+-------------------------+-------------------------+------------+
|  47b70f9ecb1f200 | started | k8s-master03 | https://10.0.0.243:2380 | https://10.0.0.243:2379 |      false |
| b83b69ba7d246b29 | started | k8s-master02 | https://10.0.0.242:2380 | https://10.0.0.242:2379 |      false |
+------------------+---------+--------------+-------------------------+-------------------------+------------+
[root@k8s-master01 ~]# 

	4.将节点加入集群,新加入的节点etcd数据必须是空,否则加入失败
[root@k8s-master01 ~]# etcdctl member add k8s-master01 --peer-urls="https://10.0.0.241:2380"
Member 776956f0dde3cc14 added to cluster 1127739f1ffb7fd9

ETCD_NAME="k8s-master01"
ETCD_INITIAL_CLUSTER="k8s-master03=https://10.0.0.243:2380,k8s-master01=https://10.0.0.241:2380,k8s-master02=https://10.0.0.242:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.241:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl member list -w table
+------------------+-----------+--------------+-------------------------+-------------------------+------------+
|        ID        |  STATUS   |     NAME     |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
+------------------+-----------+--------------+-------------------------+-------------------------+------------+
|  47b70f9ecb1f200 |   started | k8s-master03 | https://10.0.0.243:2380 | https://10.0.0.243:2379 |      false |
| 776956f0dde3cc14 | unstarted |              | https://10.0.0.241:2380 |                         |      false |
| b83b69ba7d246b29 |   started | k8s-master02 | https://10.0.0.242:2380 | https://10.0.0.242:2379 |      false |
+------------------+-----------+--------------+-------------------------+-------------------------+------------+
[root@k8s-master01 ~]# 


	5.修改待加入集群节点的状态
[root@k8s-master01 ~]# vim /jiaxing/softwares/etcd/etcd.config.yml 
...	
initial-cluster-state: 'existing'


	6.启动待加入集群的节点,要求数据目录为空。
[root@k8s-master01 ~]# rm -rf /jiaxing/data/etcd/*
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# systemctl restart etcd
[root@k8s-master01 ~]# 


	7.验证etcd集群状态
[root@k8s-master01 ~]# etcdctl member list -w table
+------------------+---------+--------------+-------------------------+-------------------------+------------+
|        ID        | STATUS  |     NAME     |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
+------------------+---------+--------------+-------------------------+-------------------------+------------+
|  47b70f9ecb1f200 | started | k8s-master03 | https://10.0.0.243:2380 | https://10.0.0.243:2379 |      false |
| 776956f0dde3cc14 | started | k8s-master01 | https://10.0.0.241:2380 | https://10.0.0.241:2379 |      false |
| b83b69ba7d246b29 | started | k8s-master02 | https://10.0.0.242:2380 | https://10.0.0.242:2379 |      false |
+------------------+---------+--------------+-------------------------+-------------------------+------------+
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl  endpoint status -w table
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT     |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 10.0.0.241:2379 | 776956f0dde3cc14 |  3.5.10 |   20 kB |     false |      false |         3 |         14 |                 14 |        |
| 10.0.0.242:2379 | b83b69ba7d246b29 |  3.5.10 |   20 kB |      true |      false |         3 |         14 |                 14 |        |
| 10.0.0.243:2379 |  47b70f9ecb1f200 |  3.5.10 |   20 kB |     false |      false |         3 |         14 |                 14 |        |
+-----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master01 ~]# 
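
	8.可选:除member list和endpoint status外,也可以用"endpoint health"快速确认各成员的健康状态(以下命令沿用上文etcdctl已配置好的endpoints与证书环境,仅作验证示例,输出以实际为准)
etcdctl endpoint health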

4. etcd的基础操作

	1.创建数据
[root@k8s-master01 ~]# etcdctl put school supershy
OK
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl put class linux89
OK
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl put /supershy/classroom jiaoshi03
OK
[root@k8s-master01 ~]# 

	
	2.查看数据
[root@k8s-master01 ~]# etcdctl get school
school
supershy
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get class
class
linux89
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get /supershy/classroom
/supershy/classroom
jiaoshi03
[root@k8s-master01 ~]# 

	 
	3.修改数据
[root@k8s-master01 ~]# etcdctl get class
class
linux89
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl put class linux90
OK
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get class
class
linux90
[root@k8s-master01 ~]# 

 
	4.删除数据
[root@k8s-master01 ~]# etcdctl get class
class
linux90
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl del class
1
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get class
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# 

	5.基于范围查看数据
[root@k8s-master01 ~]# etcdctl put /kubernetes/master/apiserver 6443
OK
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl put /kubernetes/master/etcd 2379
OK
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl put /kubernetes/worker/kubelet pods
OK
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl put /kubernetes/worker/kube-proxy services
OK
[root@k8s-master01 ~]# 


	6.基于key的前缀匹配查看,也可以理解为递归查看
[root@k8s-master01 ~]# etcdctl get /kubernetes
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get --prefix /kubernetes
/kubernetes/master/apiserver
6443
/kubernetes/master/etcd
2379
/kubernetes/worker/kube-proxy
services
/kubernetes/worker/kubelet
pods
[root@k8s-master01 ~]# 

	7.基于key的前缀匹配删除,可以理解为递归删除数据
[root@k8s-master01 ~]# etcdctl get --prefix /kubernetes
/kubernetes/master/apiserver
6443
/kubernetes/master/etcd
2379
/kubernetes/worker/kube-proxy
services
/kubernetes/worker/kubelet
pods
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl del --prefix  /kubernetes
4
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get --prefix /kubernetes
[root@k8s-master01 ~]# 



	8.查看etcd的所有数据
[root@k8s-master01 ~]# etcdctl get --prefix ""
/haha
123
/supershy/class/linux89
aaaaaaaaaaaaaaa
/supershy/classroom
jiaoshi03
/xixi
456
jiaoshi03-xixi
sdasdsadsasda
school
supershy
[root@k8s-master01 ~]# 


	9.其他操作
[root@k8s-master01 ~]# etcdctl get --prefix "" --keys-only    # 只查看key
/haha

/supershy/class/linux89

/supershy/classroom

/xixi

jiaoshi03-xixi

school

[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get --prefix "" --print-value-only      # 只查看value
123
aaaaaaaaaaaaaaa
jiaoshi03
456
sdasdsadsasda
supershy
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# etcdctl get --prefix "" --limit=2      # 只查看2条数据。
/haha
123
/supershy/class/linux89
aaaaaaaaaaaaaaa
[root@k8s-master01 ~]# 
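
	10.可选:watch与lease示例。除增删改查外,etcdctl还支持监听key变化和带TTL的租约,下面是一个简单演示(key名称仅为示例,可自行替换):
# 终端1:监听指定前缀下key的变化,另开终端对该前缀put/del数据即可看到事件
etcdctl watch --prefix /kubernetes

# 终端2:创建一个30秒的租约,并写入一个绑定该租约的key,租约到期后key会被自动删除
etcdctl lease grant 30
etcdctl put /supershy/tmpkey tmpvalue --lease=<上一条命令返回的LeaseID>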

5. 部署api-server组件

	1 k8s-master01节点启动ApiServer
温馨提示:
	- "--advertise-address"是对应的master节点的IP地址;
	- "--service-cluster-ip-range"对应的是svc的网段
	- "--service-node-port-range"对应的是svc的NodePort端口范围;
	- "--etcd-servers"指定的是etcd集群地址


推荐阅读:
	https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/


具体实操:
1.1 创建k8s-master01节点的配置文件
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --allow-privileged=true \
      --advertise-address=10.0.0.241 \
      --service-cluster-ip-range=10.200.0.0/16  \
      --service-node-port-range=3000-50000  \
      --etcd-servers=https://10.0.0.241:2379,https://10.0.0.242:2379,https://10.0.0.243:2379 \
      --etcd-cafile=/jiaxing/certs/etcd/etcd-ca.pem  \
      --etcd-certfile=/jiaxing/certs/etcd/etcd-server.pem  \
      --etcd-keyfile=/jiaxing/certs/etcd/etcd-server-key.pem  \
      --client-ca-file=/jiaxing/certs/kubernetes/k8s-ca.pem  \
      --tls-cert-file=/jiaxing/certs/kubernetes/apiserver.pem  \
      --tls-private-key-file=/jiaxing/certs/kubernetes/apiserver-key.pem  \
      --kubelet-client-certificate=/jiaxing/certs/kubernetes/apiserver.pem  \
      --kubelet-client-key=/jiaxing/certs/kubernetes/apiserver-key.pem  \
      --service-account-key-file=/jiaxing/certs/kubernetes/sa.pub  \
      --service-account-signing-key-file=/jiaxing/certs/kubernetes/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.jiaxing.com \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/jiaxing/certs/kubernetes/front-proxy-ca.pem  \
      --proxy-client-cert-file=/jiaxing/certs/kubernetes/front-proxy-client.pem  \
      --proxy-client-key-file=/jiaxing/certs/kubernetes/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF


1.2 启动服务
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver




	2 k8s-master02节点启动ApiServer
温馨提示:
	- "--advertise-address"是对应的master节点的IP地址;
	- "--service-cluster-ip-range"对应的是svc的网段
	- "--service-node-port-range"对应的是svc的NodePort端口范围;
	- "--etcd-servers"指定的是etcd集群地址


具体实操:
2.1 创建k8s-master02节点的配置文件
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Jason Yin's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.0.0.242 \
      --service-cluster-ip-range=10.200.0.0/16  \
      --service-node-port-range=3000-50000  \
      --etcd-servers=https://10.0.0.241:2379,https://10.0.0.242:2379,https://10.0.0.243:2379 \
      --etcd-cafile=/jiaxing/certs/etcd/etcd-ca.pem  \
      --etcd-certfile=/jiaxing/certs/etcd/etcd-server.pem  \
      --etcd-keyfile=/jiaxing/certs/etcd/etcd-server-key.pem  \
      --client-ca-file=/jiaxing/certs/kubernetes/k8s-ca.pem  \
      --tls-cert-file=/jiaxing/certs/kubernetes/apiserver.pem  \
      --tls-private-key-file=/jiaxing/certs/kubernetes/apiserver-key.pem  \
      --kubelet-client-certificate=/jiaxing/certs/kubernetes/apiserver.pem  \
      --kubelet-client-key=/jiaxing/certs/kubernetes/apiserver-key.pem  \
      --service-account-key-file=/jiaxing/certs/kubernetes/sa.pub  \
      --service-account-signing-key-file=/jiaxing/certs/kubernetes/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.jiaxing.com \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/jiaxing/certs/kubernetes/front-proxy-ca.pem  \
      --proxy-client-cert-file=/jiaxing/certs/kubernetes/front-proxy-client.pem  \
      --proxy-client-key-file=/jiaxing/certs/kubernetes/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF


2.2 启动服务
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver



	3 k8s-master03节点启动ApiServer
温馨提示:
	- "--advertise-address"是对应的master节点的IP地址;
	- "--service-cluster-ip-range"对应的是svc的网段
	- "--service-node-port-range"对应的是svc的NodePort端口范围;
	- "--etcd-servers"指定的是etcd集群地址


具体实操:
3.1 创建k8s-master03节点的配置文件
cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'
[Unit]
Description=Jason Yin's Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.0.0.243 \
      --service-cluster-ip-range=10.200.0.0/16  \
      --service-node-port-range=3000-50000  \
      --etcd-servers=https://10.0.0.241:2379,https://10.0.0.242:2379,https://10.0.0.243:2379 \
      --etcd-cafile=/jiaxing/certs/etcd/etcd-ca.pem  \
      --etcd-certfile=/jiaxing/certs/etcd/etcd-server.pem  \
      --etcd-keyfile=/jiaxing/certs/etcd/etcd-server-key.pem  \
      --client-ca-file=/jiaxing/certs/kubernetes/k8s-ca.pem  \
      --tls-cert-file=/jiaxing/certs/kubernetes/apiserver.pem  \
      --tls-private-key-file=/jiaxing/certs/kubernetes/apiserver-key.pem  \
      --kubelet-client-certificate=/jiaxing/certs/kubernetes/apiserver.pem  \
      --kubelet-client-key=/jiaxing/certs/kubernetes/apiserver-key.pem  \
      --service-account-key-file=/jiaxing/certs/kubernetes/sa.pub  \
      --service-account-signing-key-file=/jiaxing/certs/kubernetes/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.jiaxing.com \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/jiaxing/certs/kubernetes/front-proxy-ca.pem  \
      --proxy-client-cert-file=/jiaxing/certs/kubernetes/front-proxy-client.pem  \
      --proxy-client-key-file=/jiaxing/certs/kubernetes/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF


3.2 启动服务
systemctl daemon-reload && systemctl enable --now kube-apiserver
systemctl status kube-apiserver
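
	4 可选:验证ApiServer可用性
以下命令仅作验证示例:/healthz、/version默认可匿名访问(依赖apiserver内置的system:public-info-viewer RBAC规则,若关闭了匿名认证则需携带客户端证书);也可以从etcd侧确认apiserver已写入/registry数据。
curl -k https://10.0.0.241:6443/healthz
curl -k https://10.0.0.242:6443/version
etcdctl get --prefix /registry --keys-only --limit=5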

6. 部署controller manager组件

	1 所有节点创建配置文件
温馨提示:
	- "--cluster-cidr"是Pod的网段地址,我们可以自行修改。


所有节点的controller-manager组件配置文件相同: (前提是证书文件存放的位置也要相同哟!)
cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --root-ca-file=/jiaxing/certs/kubernetes/k8s-ca.pem \
      --cluster-signing-cert-file=/jiaxing/certs/kubernetes/k8s-ca.pem \
      --cluster-signing-key-file=/jiaxing/certs/kubernetes/k8s-ca-key.pem \
      --service-account-private-key-file=/jiaxing/certs/kubernetes/sa.key \
      --kubeconfig=/jiaxing/certs/kubeconfig/kube-controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=10.100.0.0/16 \
      --requestheader-client-ca-file=/jiaxing/certs/kubernetes/front-proxy-ca.pem \
      --node-cidr-mask-size=24
      
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF




	2.启动controller-manager服务
systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl  status kube-controller-manager


	3.检查controller manager组件是否正常,注意,scheduler还没有部署,因此报错无法连接。
[root@k8s-master01 ~]# kubectl get cs --kubeconfig=/jiaxing/certs/kubeconfig/kube-admin.kubeconfig 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused   
controller-manager   Healthy     ok                                                                                             
etcd-0               Healthy     ok                                                                                             
[root@k8s-master01 ~]# 

7. 部署scheduler组件

	1 所有节点创建配置文件
温馨提示:
	所有节点的scheduler组件配置文件相同: (前提是证书文件存放的位置也要相同哟!)
	
cat > /usr/lib/systemd/system/kube-scheduler.service <<'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --leader-elect=true \
      --kubeconfig=/jiaxing/certs/kubeconfig/kube-scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF



	2 启动scheduler服务
systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl  status kube-scheduler


	3.检查scheduler组件是否正常
[root@k8s-master01 ~]# kubectl get cs --kubeconfig=/jiaxing/certs/kubeconfig/kube-admin.kubeconfig 
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok        
etcd-0               Healthy   ok        
[root@k8s-master01 ~]# 
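
	4.可选:也可以直接确认controller-manager(默认监听10257)与scheduler(默认监听10259)的端口是否已处于监听状态(仅作验证示例):
ss -ntlp | egrep '10257|10259'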

8. 创建bootstrapping自动颁发证书配置

	1 k8s-master01节点创建bootstrap-kubelet.kubeconfig文件
温馨提示:
	- "--server"指向的是负载均衡器的VIP地址,由负载均衡器对master节点进行反向代理哟。
	- "--token"也可以自定义,但也要同时修改"bootstrap"的Secret的"token-id""token-secret"对应值哟;
	
	1.设置集群
kubectl config set-cluster jiaxing-k8s \
  --certificate-authority=/jiaxing/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/jiaxing/certs/kubeconfig/bootstrap-kubelet.kubeconfig

	2.创建用户--注意token格式限制:6位字符.16位字符
kubectl config set-credentials tls-bootstrap-token-user  \
  --token=yindao.jasonsupershy  \
  --kubeconfig=/jiaxing/certs/kubeconfig/bootstrap-kubelet.kubeconfig


	3.将集群和用户进行绑定
kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=jiaxing-k8s \
  --user=tls-bootstrap-token-user \
  --kubeconfig=/jiaxing/certs/kubeconfig/bootstrap-kubelet.kubeconfig


	4.配置默认的上下文
kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/jiaxing/certs/kubeconfig/bootstrap-kubelet.kubeconfig
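
	5.可选:验证生成的kubeconfig,确认server地址、用户与上下文是否符合预期(仅作示例):
kubectl config view --kubeconfig=/jiaxing/certs/kubeconfig/bootstrap-kubelet.kubeconfig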




	2 所有master节点拷贝管理证书
温馨提示:
	下面的操作我以k8s-master01为案例来操作的,实际上你可以使用所有的master节点完成下面的操作哟~

2.1 所有master都拷贝管理员的证书文件
[root@k8s-master01 ~]#  mkdir -p ~/.kube
[root@k8s-master01 ~]#  cp /jiaxing/certs/kubeconfig/kube-admin.kubeconfig /root/.kube/config

2.2 查看master组件,该组件官方在1.19+版本开始弃用,但是在1.28依旧没有移除哟~
[root@k8s-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok        
controller-manager   Healthy   ok        
etcd-0               Healthy   ok        
[root@k8s-master01 ~]# 

2.3 查看集群状态,如果未来cs组件移除了也没关系,我们可以使用"cluster-info"子命令查看集群状态
[root@k8s-master01 ~]# kubectl cluster-info 
Kubernetes control plane is running at https://10.0.0.240:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master01 ~]# 




	3 创建bootstrap-secret授权
3.1 创建配bootstrap-secret文件用于授权
[root@k8s-master01 ~]# cat > bootstrap-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-yindao
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: yindao
  token-secret: jasonsupershy
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF


3.2 应用bootstrap-secret配置文件
[root@k8s-master01 ~]# kubectl apply -f bootstrap-secret.yaml 
secret/bootstrap-token-yindao created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
[root@k8s-master01 ~]# 
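
3.3 可选:验证授权资源是否创建成功(资源名称与上面的清单保持一致,仅作示例):
kubectl -n kube-system get secret bootstrap-token-yindao
kubectl get clusterrolebinding kubelet-bootstrap node-autoapprove-bootstrap node-autoapprove-certificate-rotation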

9. 部署kubelet服务并查看csr资源

	1 复制证书
1.1 k8s-master01节点分发证书到其他节点
cd /jiaxing/certs/
for NODE in k8s-master02 k8s-master03 k8s-worker04 k8s-worker05; do
     echo $NODE
     ssh $NODE "mkdir -p /jiaxing/certs/kube{config,rnetes}"
     for FILE in k8s-ca.pem k8s-ca-key.pem front-proxy-ca.pem; do
       scp kubernetes/$FILE $NODE:/jiaxing/certs/kubernetes/${FILE}
	 done
     scp kubeconfig/bootstrap-kubelet.kubeconfig $NODE:/jiaxing/certs/kubeconfig/
done


1.2 worker节点进行验证
[root@k8s-worker05 ~]# ll /jiaxing/ -R
/jiaxing/:
total 0
drwxr-xr-x 4 root root 42 Jan 24 18:06 certs

/jiaxing/certs:
total 0
drwxr-xr-x 2 root root 42 Jan 24 18:06 kubeconfig
drwxr-xr-x 2 root root 72 Jan 24 18:06 kubernetes

/jiaxing/certs/kubeconfig:
total 4
-rw------- 1 root root 2243 Jan 24 18:06 bootstrap-kubelet.kubeconfig

/jiaxing/certs/kubernetes:
total 12
-rw-r--r-- 1 root root 1094 Jan 24 18:06 front-proxy-ca.pem
-rw------- 1 root root 1675 Jan 24 18:06 k8s-ca-key.pem
-rw-r--r-- 1 root root 1363 Jan 24 18:06 k8s-ca.pem
[root@k8s-worker05 ~]# 



	2 启动kubelet服务
温馨提示:
	- 在"10-kubelet.con"文件中使用"--kubeconfig"指定的"kubelet.kubeconfig"文件并不存在,这个证书文件后期会自动生成;
	- 对于"clusterDNS"是NDS地址,我们可以自定义,比如"10.200.0.254";
	- “clusterDomain”对应的是域名信息,要和我们设计的集群保持一致,比如"jiaxing.com";
	- "10-kubelet.conf"文件中的"ExecStart="需要写2次,否则可能无法启动kubelet;
	
	
具体实操:
2.1 所有节点创建工作目录
mkdir -pv /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/


2.2 所有节点创建kubelet的配置文件
cat > /etc/kubernetes/kubelet-conf.yml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /jiaxing/certs/kubernetes/k8s-ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.200.0.254
clusterDomain: jiaxing.com
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF
	
	
	
	
2.3 所有节点配置kubelet service
cat >  /usr/lib/systemd/system/kubelet.service <<'EOF'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF


2.4 所有节点配置kubelet service的配置文件
cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<'EOF'
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/jiaxing/certs/kubeconfig/bootstrap-kubelet.kubeconfig --kubeconfig=/jiaxing/certs/kubeconfig/kubelet.kubeconfig"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_SYSTEM_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
EOF


2.5 启动所有节点kubelet
systemctl daemon-reload
systemctl enable --now kubelet
systemctl status kubelet


2.6.在所有master节点上查看nodes信息。
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   <none>   4m45s   v1.28.3
k8s-master02   NotReady   <none>   9s      v1.28.3
k8s-master03   NotReady   <none>   6s      v1.28.3
k8s-worker04   NotReady   <none>   3s      v1.28.3
k8s-worker05   NotReady   <none>   1s      v1.28.3
[root@k8s-master01 ~]# 


2.7 可以查看到有相应的csr用户客户端的证书请求
[root@k8s-master01 ~]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-4vkl8   14s     kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-926nn   16s     kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-jvb87   11s     kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-nfpht   9s      kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
csr-rbtxk   4m52s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:yindao   <none>              Approved,Issued
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl describe csr csr-rbtxk
Name:               csr-rbtxk
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 24 Jan 2024 18:33:30 +0800
Requesting User:    system:bootstrap:yindao
Signer:             kubernetes.io/kube-apiserver-client-kubelet
Status:             Approved,Issued
Subject:
         Common Name:    system:node:k8s-master01
         Serial Number:  
         Organization:   system:nodes
Events:  <none>
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl describe csr csr-nfpht
Name:               csr-nfpht
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 24 Jan 2024 18:38:13 +0800
Requesting User:    system:bootstrap:yindao
Signer:             kubernetes.io/kube-apiserver-client-kubelet
Status:             Approved,Issued
Subject:
         Common Name:    system:node:k8s-worker05
         Serial Number:  
         Organization:   system:nodes
Events:  <none>
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl describe csr csr-jvb87
Name:               csr-jvb87
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 24 Jan 2024 18:38:11 +0800
Requesting User:    system:bootstrap:yindao
Signer:             kubernetes.io/kube-apiserver-client-kubelet
Status:             Approved,Issued
Subject:
         Common Name:    system:node:k8s-worker04
         Serial Number:  
         Organization:   system:nodes
Events:  <none>
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl describe csr csr-926nn
Name:               csr-926nn
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 24 Jan 2024 18:38:06 +0800
Requesting User:    system:bootstrap:yindao
Signer:             kubernetes.io/kube-apiserver-client-kubelet
Status:             Approved,Issued
Subject:
         Common Name:    system:node:k8s-master02
         Serial Number:  
         Organization:   system:nodes
Events:  <none>
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl describe csr csr-4vkl8
Name:               csr-4vkl8
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 24 Jan 2024 18:38:08 +0800
Requesting User:    system:bootstrap:yindao
Signer:             kubernetes.io/kube-apiserver-client-kubelet
Status:             Approved,Issued
Subject:
         Common Name:    system:node:k8s-master03
         Serial Number:  
         Organization:   system:nodes
Events:  <none>
[root@k8s-master01 ~]# 
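
温馨提示:
	如果前面的"node-autoapprove-bootstrap"等ClusterRoleBinding没有提前创建,csr会一直处于Pending状态,此时可以手动批准(以下为示例,csr名称以实际输出为准):
kubectl get csr
kubectl certificate approve <csr名称>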

10. 部署kube-proxy服务

	1.生成kube-proxy的csr文件
[root@k8s-master01 pki]# cat >kube-proxy-csr.json <<'EOF'
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
[root@k8s-master01 pki]# 

	2.创建kube-proxy需要的证书文件
[root@k8s-master01 pki]# cfssl gencert \
  -ca=/jiaxing/certs/kubernetes/k8s-ca.pem \
  -ca-key=/jiaxing/certs/kubernetes/k8s-ca-key.pem \
  -config=k8s-ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare /jiaxing/certs/kubernetes/kube-proxy


[root@k8s-master01 pki]# ll /jiaxing/certs/kubernetes/kube-proxy*
-rw-r--r-- 1 root root 1045 Jan 24 18:44 /jiaxing/certs/kubernetes/kube-proxy.csr
-rw------- 1 root root 1679 Jan 24 18:44 /jiaxing/certs/kubernetes/kube-proxy-key.pem
-rw-r--r-- 1 root root 1464 Jan 24 18:44 /jiaxing/certs/kubernetes/kube-proxy.pem
[root@k8s-master01 pki]# 



	3.设置集群
[root@k8s-master01 pki]# kubectl config set-cluster jiaxing-k8s \
  --certificate-authority=/jiaxing/certs/kubernetes/k8s-ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.240:8443 \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-proxy.kubeconfig
  
	4.设置一个用户项
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-proxy \
  --client-certificate=/jiaxing/certs/kubernetes/kube-proxy.pem \
  --client-key=/jiaxing/certs/kubernetes/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-proxy.kubeconfig
  
	5.设置一个上下文环境
[root@k8s-master01 pki]# kubectl config set-context kube-proxy@kubernetes \
  --cluster=jiaxing-k8s \
  --user=system:kube-proxy \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-proxy.kubeconfig
  
	6.使用默认的上下文
[root@k8s-master01 pki]# kubectl config use-context kube-proxy@kubernetes \
  --kubeconfig=/jiaxing/certs/kubeconfig/kube-proxy.kubeconfig

	7.将kube-proxy的systemd Service文件发送到其他节点
for NODE in k8s-master02 k8s-master03 k8s-worker04 k8s-worker05; do
     echo $NODE
     scp /jiaxing/certs/kubeconfig/kube-proxy.kubeconfig $NODE:/jiaxing/certs/kubeconfig/
done


	8.所有节点创建kube-proxy.conf配置文件,
cat > /etc/kubernetes/kube-proxy.yml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
metricsBindAddress: 127.0.0.1:10249
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /jiaxing/certs/kubeconfig/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.100.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

 
	9.所有节点使用systemd管理kube-proxy
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yml \
  --v=2 
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
 
 
	10.所有节点启动kube-proxy
systemctl daemon-reload && systemctl enable --now kube-proxy
systemctl status kube-proxy
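
	11.可选:验证kube-proxy工作状态(仅作示例;查看ipvs规则需要节点上已安装ipvsadm):
journalctl -u kube-proxy --no-pager | tail -n 20
ipvsadm -Ln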

11. 部署flannel的cni插件并验证

参考链接:
	https://github.com/flannel-io/flannel
    https://gitee.com/jasonyin2020/cloud-computing-stack/blob/linux89/linux89/manifests/22-cni/flannel/kube-flannel.yml#

	1.下载flannel所需的二进制文件
[root@k8s-master01 ~]#  wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz

	2.解压flannel所需的程序包
[root@k8s-master01 ~]# mkdir -p /opt/cni/bin
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# tar -C /opt/cni/bin -xzf cni-plugins-linux-amd64-v1.2.0.tgz
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# ll /opt/cni/bin/
total 68936
-rwxr-xr-x 1 root root  3859475 Jan 17  2023 bandwidth
-rwxr-xr-x 1 root root  4299004 Jan 17  2023 bridge
-rwxr-xr-x 1 root root 10167415 Jan 17  2023 dhcp
-rwxr-xr-x 1 root root  3986082 Jan 17  2023 dummy
-rwxr-xr-x 1 root root  4385098 Jan 17  2023 firewall
-rwxr-xr-x 1 root root  3870731 Jan 17  2023 host-device
-rwxr-xr-x 1 root root  3287319 Jan 17  2023 host-local
-rwxr-xr-x 1 root root  3999593 Jan 17  2023 ipvlan
-rwxr-xr-x 1 root root  3353028 Jan 17  2023 loopback
-rwxr-xr-x 1 root root  4029261 Jan 17  2023 macvlan
-rwxr-xr-x 1 root root  3746163 Jan 17  2023 portmap
-rwxr-xr-x 1 root root  4161070 Jan 17  2023 ptp
-rwxr-xr-x 1 root root  3550152 Jan 17  2023 sbr
-rwxr-xr-x 1 root root  2845685 Jan 17  2023 static
-rwxr-xr-x 1 root root  3437180 Jan 17  2023 tuning
-rwxr-xr-x 1 root root  3993252 Jan 17  2023 vlan
-rwxr-xr-x 1 root root  3586502 Jan 17  2023 vrf
[root@k8s-master01 ~]# 


	3.将软件包同步到集群其他节点
[root@k8s-master01 ~]# data_rsync.sh /opt/cni/bin/
===== rsyncing k8s-master02: bin =====
命令执行成功!
===== rsyncing k8s-master03: bin =====
命令执行成功!
===== rsyncing k8s-worker04: bin =====
命令执行成功!
===== rsyncing k8s-worker05: bin =====
命令执行成功!
[root@k8s-master01 ~]# 

	4.修改flannel官方的资源清单
[root@k8s-master01 ~]# cat >kube-flannel.yml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.24.0
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.24.0
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
EOF
[root@k8s-master01 ~]# 



	5.创建资源清单部署flannel程序
[root@k8s-master01 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master01 ~]# 


	6.观察flannel组件是否正常运行
[root@k8s-master01 ~]# kubectl get pods -A -o wide
NAMESPACE      NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-2b6dg   1/1     Running   0          2m59s   10.0.0.244   k8s-worker04   <none>           <none>
kube-flannel   kube-flannel-ds-4zjdd   1/1     Running   0          2m59s   10.0.0.245   k8s-worker05   <none>           <none>
kube-flannel   kube-flannel-ds-b2d96   1/1     Running   0          2m59s   10.0.0.242   k8s-master02   <none>           <none>
kube-flannel   kube-flannel-ds-s48rw   1/1     Running   0          2m59s   10.0.0.241   k8s-master01   <none>           <none>
kube-flannel   kube-flannel-ds-tz49n   1/1     Running   0          2m59s   10.0.0.243   k8s-master03   <none>           <none>
[root@k8s-master01 ~]# 
	
	7.部署服务测试网络的可用性
[root@k8s-master01 ~]# cat >deploy-apple.yaml<<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-apple
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: apple
  template:
    metadata:
      labels:
        apps: apple
    spec:
      containers:
      - name: apple
        image: registry.cn-hangzhou.aliyuncs.com/supershy-k8s/apps:apple
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-apple
spec:
  type: NodePort
  selector:
    apps: apple
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 8080
EOF
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl apply -f deploy-apple.yaml
deployment.apps/deployment-apple created
service/svc-apple created
[root@k8s-master01 ~]# 
[root@k8s-master03 ~]# kubectl get po -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
deployment-apple-9df4b458-cjqcv   1/1     Running   0          67s   10.100.0.2   k8s-master03   <none>           <none>
deployment-apple-9df4b458-n7j7k   1/1     Running   0          67s   10.100.2.2   k8s-worker05   <none>           <none>
deployment-apple-9df4b458-xgd2w   1/1     Running   0          67s   10.100.4.2   k8s-master02   <none>           <none>


	8.访问测试
http://10.0.0.245:8080/
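
	9.可选:通过Service进一步验证(NodePort与ClusterIP以实际输出为准,仅作示例):
kubectl get svc svc-apple
curl -s http://10.0.0.245:8080/ | head
curl -s http://$(kubectl get svc svc-apple -o jsonpath='{.spec.clusterIP}')/ | head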



- 部署flannel时可能出现节点NotReady的故障,到NotReady节点手动拉取镜像,发现拉取失败
  - 重启containerd服务后再次拉取,镜像拉取成功
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   Ready      <none>   36m   v1.28.3
k8s-master02   Ready      <none>   34m   v1.28.3
k8s-master03   NotReady   <none>   36m   v1.28.3
k8s-worker04   Ready      <none>   36m   v1.28.3
k8s-worker05   Ready      <none>   36m   v1.28.3

[root@k8s-master03 ~]# ctr i pull docker.io/flannel/flannel-cni-plugin:v1.2.0
docker.io/flannel/flannel-cni-plugin:v1.2.0:                                      resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:ca6779c6ad63b77af8a00151cefc08578241197b9a6fe144b0e55484bc52b852:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:14c2d8f4af0d9044db96d8024e671c889aff4d1917296a709217aa9b463e50c5: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:a55d1bad692b776e7c632739dfbeffab2984ef399e1fa633e0751b1662ea8bb4:   downloading    |--------------------------------------|    0.0 B/1.1 KiB    
layer-sha256:25e19981c69bdbd46b89f0a1cf4f825351143eff95f34061a9d9846a98100235:    downloading    |--------------------------------------|    0.0 B/1023.1 KiB 
layer-sha256:72cfd02ff4d01b1f319eed108b53120dea0185b916d2abeb4e6121879cbf7a65:    downloading    |--------------------------------------|    0.0 B/2.7 MiB    
elapsed: 8.8 s                                                                    total:  2.7 Ki (310.0 B/s)                                       
ctr: failed to copy: httpReadSeeker: failed open: failed to do request: Get "https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/25/25e19981c69bdbd46b89f0a1cf4f825351143eff95f34061a9d9846a98100235/data?verify=1725792352-FYy84uNAqjsmSxaR1iK%2F2pu%2FSpA%3D": read tcp 10.0.0.243:33736->104.16.98.215:443: read: connection reset by peer

[root@k8s-master03 ~]# 

12. 给master节点打污点

	1.测试发现部署的Pod会被调度到master节点
[root@k8s-master03 ~]# kubectl get po -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
deployment-apple-9df4b458-cjqcv   1/1     Running   0          67s   10.100.0.2   k8s-master03   <none>           <none>
deployment-apple-9df4b458-n7j7k   1/1     Running   0          67s   10.100.2.2   k8s-worker05   <none>           <none>
deployment-apple-9df4b458-xgd2w   1/1     Running   0          67s   10.100.4.2   k8s-master02   <none>           <none>

	2.给master节点打taints
[root@k8s-master03 ~]# kubectl taint nodes k8s-master01 k8s-master02 k8s-master03 node-role.kubernetes.io/master:NoSchedule
node/k8s-master01 tainted
node/k8s-master02 tainted
node/k8s-master03 tainted
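
	3.可选:验证污点是否生效(仅作示例):
kubectl describe node k8s-master01 | grep -i taints
kubectl get node k8s-master01 -o jsonpath='{.spec.taints}'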

5)关于二进制部署的坑

    1. apiserver和etcd连接不通
[root@k8s-master01 ~]# systemctl status kube-apiserver.service 
● kube-apiserver.service - Jason Yin's Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2024-10-02 17:03:06 CST; 26min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 7367 (kube-apiserver)
    Tasks: 8
   Memory: 219.3M
   CGroup: /system.slice/kube-apiserver.service
           └─7367 /usr/local/bin/kube-apiserver --v=2 --bind-address=0.0.0.0 --secure-port=6443 --advertise-address=10.0.0.241 --service-cluster-ip-range=10.200.0.0/16 --service-node-port-range=3000-50000 -...

Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: "Metadata": null
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: }. Err: connection error: desc = "transport: authentication handshake failed: tls: failed to verify certificate: x509: certificate signed by u...te \"etcd\")"
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: W1002 17:29:59.519293    7367 logging.go:59] [core] [Channel #409 SubChannel #412] grpc: addrConn.createTransport failed to connect to {
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: "Addr": "10.0.0.243:2379",
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: "ServerName": "10.0.0.243",
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: "Attributes": null,
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: "BalancerAttributes": null,
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: "Type": 0,
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: "Metadata": null
Oct 02 17:29:59 k8s-master01 kube-apiserver[7367]: }. Err: connection error: desc = "transport: authentication handshake failed: tls: failed to verify certificate: x509: certificate signed by u...te \"etcd\")"
Hint: Some lines were ellipsized, use -l to show in full.

原因:重复部署etcd时旧的数据目录没有清空,需要删除etcd的数据目录后重新部署。
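
处理示例(仅适用于重新部署/实验环境,会清空etcd数据;数据目录以各节点etcd.config.yml中data-dir的实际配置为准,此处沿用前文的/supershy/data/etcd):
systemctl stop etcd
rm -rf /supershy/data/etcd/*
systemctl start etcd
# 各etcd节点恢复正常后,再重启kube-apiserver
systemctl restart kube-apiserver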

8. 基于kubeadm部署k8s高可用集群

1)角色规划

| 主机名 | IP地址 | 角色划分 |
| --- | --- | --- |
| k8s-master01 | 10.0.0.241 | api-server,controller manager,scheduler,etcd |
| k8s-master02 | 10.0.0.242 | api-server,controller manager,scheduler,etcd |
| k8s-master03 | 10.0.0.243 | api-server,controller manager,scheduler,etcd |
| k8s-worker04 | 10.0.0.244 | kubelet,kube-proxy |
| k8s-worker05 | 10.0.0.245 | kubelet,kube-proxy |
| apiserver-lb | 10.0.0.240 | apiserver的负载均衡器IP地址 |

2) 基础环境配置+containerd环境部署

	0.设置主机名,各节点参考如下命令修改即可
hostnamectl set-hostname k8s-master01

	1.设置相应的主机名及hosts文件解析
cat >> /etc/hosts <<'EOF'
10.0.0.240 apiserver-lb
10.0.0.241 k8s-master01
10.0.0.242 k8s-master02
10.0.0.243 k8s-master03
10.0.0.244 k8s-worker04
10.0.0.245 k8s-worker05
EOF

	2.禁用不必要的服务
		2.1 禁用防火墙,网络管理,邮箱
systemctl disable  --now firewalld NetworkManager postfix

		2.2 禁用selinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config 
grep ^SELINUX= /etc/selinux/config

		2.3 禁用swap分区
swapoff -a && sysctl -w vm.swappiness=0 
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
grep swap /etc/fstab


	3.Linux基础优化
		3.1 修改sshd服务优化
sed -ri  's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config
grep ^UseDNS /etc/ssh/sshd_config 
grep ^GSSAPIAuthentication  /etc/ssh/sshd_config


		3.2 修改文件打开数量的限制(退出当前会话立即生效)
cat > /etc/security/limits.d/k8s.conf <<'EOF'
*       soft    nofile     65535
*       hard    nofile    131070
EOF
ulimit -Sn
ulimit -Hn



		3.3 修改终端颜色
cat <<EOF >>  ~/.bashrc 
PS1='[\[\e[34;1m\]\u@\[\e[0m\]\[\e[32;1m\]\H\[\e[0m\]\[\e[31;1m\] \W\[\e[0m\]]# '
EOF
source ~/.bashrc 


		3.4 所有节点配置模块自动加载,此步骤不做的话(kubeadm init时会直接失败!)
modprobe br_netfilter
modprobe ip_conntrack
cat >>/etc/rc.sysinit<<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules
echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/ip_conntrack.modules
lsmod | grep br_netfilter


		3.5 基于chronyd守护进程实现集群时间同步:
			3.5.1 手动同步时区和时间
\cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 

			3.5.2 安装服务chrony
yum -y install ntpdate chrony

			3.5.3 修改配置文件
vim /etc/chrony.conf 
...
server ntp.aliyun.com iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp3.aliyun.com iburst
server ntp4.aliyun.com iburst
server ntp5.aliyun.com iburst
	
			3.5.4启动服务
systemctl enable --now chronyd  

			3.5.5 查看服务状态
systemctl status chronyd
chronyc activity -v


	4.配置软件源并安装集群常用软件
		4.1 配置阿里源
curl -s -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl  -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo


		4.2 配置K8S软件源
cat  > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

		4.3 安装集群常用软件 
yum -y install expect wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate chrony bind-utils rsync unzip git


		4.4 下载配置文件及脚本
git clone https://gitee.com/jasonyin2020/supershy-linux-Cloud_Native

[root@k8s-master01 ~]# wget http://192.168.15.253/Kubernetes/day26-/softwares/kubeadm-ha.zip
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# unzip kubeadm-ha.zip 



	5.k8s-master01配置免密钥登录集群并配置同步脚本
k8s-master01节点免密钥登录集群节点,安装过程中生成配置文件和证书均在k8s-master01上操作,集群管理也在k8s-master01上操作。

阿里云或者AWS上需要单独一台kubectl服务器。密钥配置如下:

		5.1 配置批量免密钥登录
cat > password_free_login.sh <<'EOF'
#!/bin/bash
# auther: Jason Yin

# 创建密钥对
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q

# 声明你服务器密码,建议所有节点的密码均一致,否则该脚本需要再次进行优化
export mypasswd=supershy

# 定义主机列表
k8s_host_list=(k8s-master01 k8s-master02 k8s-master03 k8s-worker04 k8s-worker05)

# 配置免密登录,利用expect工具免交互输入
for i in ${k8s_host_list[@]};do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
  expect {
    \"*yes/no*\" {send \"yes\r\"; exp_continue}
    \"*password*\" {send \"$mypasswd\r\"; exp_continue}
  }"
done
EOF
sh password_free_login.sh
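
验证免密登录是否生效(可选示例,BatchMode下若仍提示输入密码则说明免密未配置成功):
for host in k8s-master02 k8s-master03 k8s-worker04 k8s-worker05; do
  ssh -o BatchMode=yes $host hostname
done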


		5.2 编写同步脚本
cat > /usr/local/sbin/data_rsync.sh <<'EOF'
#!/bin/bash
# Auther: Jason Yin

if  [ $# -ne 1 ];then
   echo "Usage: $0 /path/to/file(绝对路径)"
   exit
fi 

if [ ! -e $1 ];then
    echo "[ $1 ] dir or file not find!"
    exit
fi

fullpath=`dirname $1`

basename=`basename $1`

cd $fullpath

k8s_host_list=(k8s-master02 k8s-master03 k8s-worker04 k8s-worker05)

for host in ${k8s_host_list[@]};do
  tput setaf 2
    echo ===== rsyncing ${host}: $basename =====
    tput setaf 7
    rsync -az $basename  `whoami`@${host}:$fullpath
    if [ $? -eq 0 ];then
      echo "命令执行成功!"
    fi
done
EOF
chmod +x /usr/local/sbin/data_rsync.sh


		5.3 同步数据到其他节点
[root@k8s-master01 ~]# data_rsync.sh /root/supershy-linux-Cloud_Native/


也可以直接执行
wget http://192.168.15.253/Kubernetes/day26-/softwares/kubeadm-ha.zip

unzip kubeadm-ha.zip 	


	6.CentOS7所有节点升级内核版本(4.18+)
		6.1 CentOS7需要升级内核版本为4.18+
yum -y localinstall  kernel/4.19.12/*.rpm

		6.2 更改内核的启动顺序
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg

		6.3 检查默认的内核版本
grubby --default-kernel

		6.4 重启操作系统
reboot
		
		6.5 检查当前正在使用的内核版本
uname -r

 
	
	7.所有节点安装ipvsadm以实现kube-proxy的负载均衡
		7.1 安装ipvsadm等相关工具
yum -y install ipvsadm ipset sysstat conntrack libseccomp 

		7.2 创建要开机自动加载的模块配置文件
cat > /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

		7.3 将"systemd-modules-load"服务设置为开机自启动
systemctl enable --now systemd-modules-load && systemctl status systemd-modules-load


		7.4 启动模块
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

	
	8.所有节点修改Linux内核参数调优
		8.1 所有节点修改Linux内核参数调优
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.bridge.bridge-nf-call-arptables = 1 
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5 
EOF
sysctl --system
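
可选:验证部分关键参数是否生效(bridge相关参数需要br_netfilter模块已加载,见前文3.4,仅作示例):
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.swappiness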


		8.2 重启虚拟机
reboot


		8.3 拍快照
如果所有节点都可以正常重启,说明我们的配置是正确的!接下来就是拍快照。






二.单独部署containerd
	1.Kubernetes容器运行时弃用Docker转型Containerd
推荐阅读:
	https://i4t.com/5435.html
	
	
	2.所有节点部署containerd服务
		2.1 升级libseccomp版本
在CentOS7中通过yum安装的libseccomp版本是2.3,不满足最新版containerd的需求。

综上所述,在安装containerd前,我们需要先将libseccomp升级到2.4以上的版本,我这里部署2.5.1版本。


		2.2 卸载旧版本的libseccomp
rpm -e libseccomp-2.3.1-4.el7.x86_64 --nodeps

		2.3 下载libseccomp-2.5.1版本的软件包
wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm

		2.4 安装libseccomp-2.5.1软件包
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm 



# rpm -ivh /root/supershy-linux-Cloud_Native/containerd/libseccomp-2.5.1-1.el8.x86_64.rpm

		2.5 检查安装的版本,如下图所示,安装成功啦
rpm -qa | grep libseccomp


	3.安装containerd组件
		3.1 下载containerd工具包
wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

		3.2 解压软件包(此处我们直接让它给我们对应的目录给替换掉)
tar zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /  


	4.配置containerd
		4.1 创建配置文件目录
mkdir -pv /etc/containerd 

		4.2 生成默认配置文件
containerd config default > /etc/containerd/config.toml



	5.替换默认pause镜像地址
sed -i 's/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/abcdocker/' /etc/containerd/config.toml 
grep sandbox_image /etc/containerd/config.toml


	6.配置systemd作为容器的cgroup driver
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml



	7.配置containerd开机自启动
		7.1 启动containerd服务并配置开机自启动
systemctl enable --now containerd 

		7.2 查看containerd状态
systemctl status containerd 

		7.3 查看containerd的版本
ctr version
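
		7.4 可选:验证镜像拉取功能(镜像仓库与标签以实际可用为准,仅作示例;注意ctr默认操作default命名空间,k8s实际使用k8s.io命名空间,此处只验证网络与拉取功能):
ctr i pull registry.aliyuncs.com/google_containers/pause:3.9
ctr i ls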

3)部署nginx负载均衡+keepalived高可用

三.负载均衡配置
	1.所有master节点编译安装nginx,后续需要使用到upstream模块
		1.1 所有的master节点创建运行nginx的用户
useradd nginx -s /sbin/nologin -M


		1.2 安装依赖
yum -y install pcre pcre-devel openssl openssl-devel gcc gcc-c++ automake autoconf libtool make


		1.3 下载nginx软件包
wget http://nginx.org/download/nginx-1.21.6.tar.gz

		1.4 解压软件包
tar xf nginx-1.21.6.tar.gz

# tar xf supershy-linux-Cloud_Native/nginx+keepalived/nginx-1.21.6.tar.gz 


		1.5 配置nginx
cd nginx-1.21.6
./configure --prefix=/usr/local/nginx/ \
            --with-pcre \
            --with-http_ssl_module \
            --with-http_stub_status_module \
            --with-stream \
            --with-http_gzip_static_module

		1.6 编译并安装nginx
make -j 2 &&  make install 

		1.7 使用systemctl管理,并设置开机启动
cat >/usr/lib/systemd/system/nginx.service <<'EOF'
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target sshd-keygen.service

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
EOF

		1.8 检查nginx服务是否启动
systemctl status nginx 
ps -ef|grep nginx


		1.9 同步nginx软件包和脚本到集群的其他master节点
data_rsync.sh /usr/local/nginx/
data_rsync.sh /usr/lib/systemd/system/nginx.service



	2.所有master节点配置nginx
		2.1 编辑nginx配置文件
cat > /usr/local/nginx/conf/nginx.conf <<'EOF'
user nginx nginx;
worker_processes auto;

events {
    worker_connections  20240;
    use epoll;
}

error_log /var/log/nginx_error.log info;

stream {
    upstream kube-servers {
        hash $remote_addr consistent;
        
        server k8s-master01:6443 weight=5 max_fails=1 fail_timeout=3s;
        server k8s-master02:6443 weight=5 max_fails=1 fail_timeout=3s;
        server k8s-master03:6443 weight=5 max_fails=1 fail_timeout=3s;
    }

    server {
        listen 8443 reuseport;
        proxy_connect_timeout 3s;
        proxy_timeout 3000s;
        proxy_pass kube-servers;
    }
}
EOF


		2.2 同步nginx的配置文件到其他master节点
data_rsync.sh /usr/local/nginx/conf/nginx.conf


		2.3 所有节点启动nginx服务
systemctl enable --now nginx 
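
		2.4 可选:验证配置语法与8443端口监听(仅作示例;此时后端apiserver尚未部署,只需确认nginx本身正常即可):
/usr/local/nginx/sbin/nginx -t
ss -ntl | grep 8443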


	3.部署keepalived
		3.1 安装keepalived组件
yum  -y install  keepalived

		3.2 修改keepalived的配置文件(根据实际环境,interface eth0可能需要修改为interface ens33)
			3.2.1 编写配置文件,各个master节点需要修改router_id和mcast_src_ip的值即可。
				3.2.1.1 k8s-master01节点
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.241
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.241
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF


				3.2.1.2 k8s-master02 node
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.242
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.242
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF


				3.2.1.3 k8s-master03 node
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
   router_id 10.0.0.243
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 8443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    mcast_src_ip 10.0.0.243
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.0.0.240
    }
}
EOF



		3.2.2 Write the health check script on each node
cat > /etc/keepalived/check_port.sh  <<'EOF'
#!/bin/bash
# author: JasonYin
# If the given port is not listening, stop keepalived so the VIP fails over to another node.

CHK_PORT=$1

if [ -n "$CHK_PORT" ];then
  PORT_PROCESS=$(ss -ntl | grep -c ":${CHK_PORT} ")

  if [ "$PORT_PROCESS" -eq 0 ];then
     echo "Port $CHK_PORT is not in use, nginx is not running... stopping keepalived."
     /usr/bin/systemctl stop keepalived
  else
     echo "Nginx is running ..."
  fi

else
  echo "The port to check can't be empty!"
  /usr/bin/systemctl stop keepalived

fi
EOF
chmod +x /etc/keepalived/check_port.sh
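
The script can be exercised by hand before keepalived starts using it (8443 is the nginx listener configured above); it should report that nginx is running while the load balancer is up:
bash /etc/keepalived/check_port.sh 8443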

		3.2.3 Start keepalived
systemctl enable --now keepalived
 
		
		3.2.4 Test keepalived
ip a  # check which node currently holds the VIP
systemctl stop nginx  # stop the load balancer service and observe whether the VIP fails over to another node
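
A minimal sketch for watching the failover (it assumes the eth0 interface and the 10.0.0.240 VIP from the configuration above): run the watch on a standby node while stopping nginx on the node that currently holds the VIP, then restore the original node afterwards.
watch -n 1 'ip -4 addr show eth0 | grep 10.0.0.240'   # run on a standby node; the VIP should appear here
systemctl start nginx keepalived                      # run on the original node afterwards (check_port.sh stopped its keepalived)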


 
Tips:
	router_id:
		The node's IP; each master node uses its own IP.
	mcast_src_ip:
		The node's IP; each master node uses its own IP.
	virtual_ipaddress:
		The virtual IP, i.e. the VIP.
	interface:
		The name of the network interface to bind to.
	virtual_router_id:
		Valid values are 0-255; it can be thought of as a group ID, and only instances with the same ID are treated as one group.
		If each keepalived instance is given a different ID, each instance will end up holding its own VIP.

4) Install the k8s components

IV. Initialize the K8S cluster with kubeadm
	1. Install kubeadm
		1.1 Install kubelet, kubeadm, and kubectl on all nodes
yum install -y kubelet-1.28.2 kubeadm-1.28.2 kubectl-1.28.2

		1.2 Enable kubelet at boot on all nodes
systemctl enable --now kubelet 
systemctl status kubelet

		1.3 Check the kubectl version
kubectl version --client --output=yaml	
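
Optionally, pin the installed versions so a routine yum update does not upgrade them by accident (a sketch, assuming the yum versionlock plugin is available in your repositories):
yum -y install yum-plugin-versionlock
yum versionlock add kubelet kubeadm kubectl
yum versionlock list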


	2. Prepare the kubeadm configuration file
		2.1 On the k8s-master01 node, print the default init configuration
kubeadm config print init-defaults > kubeadm-init.yaml

		2.2 Customize the default configuration as needed
[root@k8s-master01 supershy-linux-Cloud_Native]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: jiajia.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # IP address of k8s-master01
  advertiseAddress: 10.0.0.241
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  # hostname
  name: k8s-master01
  taints: null

---

apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# imageRepository: k8s.gcr.io
  # use an external etcd cluster
  # external:
    # endpoints:
    # - "10.100.0.1:2379"
    # - "10.100.0.2:2379"
    # caFile: "/etcd/kubernetes/pki/etcd/etcd-ca.crt"
    # certFile: "/etcd/kubernetes/pki/etcd/etcd.crt"
    # keyFile: "/etcd/kubernetes/pki/etcd/etcd.key"
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
# specify the highly available apiserver endpoint; here it is the locally deployed load balancer (VIP:8443) in front of the apiservers
controlPlaneEndpoint: 10.0.0.240:8443
networking:
  dnsDomain: jiaxing.com
  serviceSubnet: 10.200.0.0/16
  podSubnet: 10.100.0.0/16
scheduler: {}

---

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# kube-proxy mode
mode: ipvs

---

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.200.0.254
# clusterDomain: cluster.local
clusterDomain: jiaxing.com
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
# configure the cgroup driver
cgroupDriver: systemd
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
[root@k8s-master01 supershy-linux-Cloud_Native]# 


		2.3 Check the configuration file for errors with a dry run
kubeadm init --config kubeadm-init.yaml --dry-run 
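
Optionally, pre-pull the control plane images before running init; this makes the actual initialization faster and surfaces registry or mirror problems early:
kubeadm config images pull --config kubeadm-init.yaml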

 

Reference:
	https://kubernetes.io/zh-cn/docs/reference/config-api/kubeadm-config.v1beta3/
	
	
	
	3. Initialize the cluster from the kubeadm configuration file
[root@k8s-master01 supershy-linux-Cloud_Native]#  kubeadm init --config kubeadm-init.yaml  --upload-certs
...

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join api-server:8443 --token oldboy.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:f1c52a63da2c3ed3b494f05b0cd2a19301ac6c81cdb62cc99668f3c991080a61 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join api-server:8443 --token oldboy.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:f1c52a63da2c3ed3b494f05b0cd2a19301ac6c81cdb62cc99668f3c991080a61 
[root@k8s-master01 supershy-linux-Cloud_Native]# 


Tip:
	If you were too quick and did not pass the "--upload-certs" flag, you will need to upload the certificates manually later.
	That is easy; just follow section "V.1".
	



	4. Copy the kubeconfig for kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config


	5. View the kubeadm-config configmap resource
kubectl -n kube-system get cm kubeadm-config -o yaml


	6. View the node resources
[root@k8s-master01 supershy-linux-Cloud_Native]# kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
k8s-master01   NotReady    control-plane   8m1s   v1.28.1
[root@k8s-master01 supershy-linux-Cloud_Native]# 



V. Join additional masters and workers to the cluster
	1. Upload the control plane certificate files to the k8s cluster as a Secret resource.
[root@k8s-master01 ~]# kubeadm init phase upload-certs --upload-certs 
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
58aea750d020038fc8e682af195803d4c8f842c7eaaa49ef77fa354d1fc77db3
[root@k8s-master01 ~]# 

	Note: the value "58aea750d020038fc8e682af195803d4c8f842c7eaaa49ef77fa354d1fc77db3" obtained here is important and will be used below.
	
	
	2. Join all the other master nodes to the cluster; this requires the token information above, and the certificate key must be supplied with the "--certificate-key" option. (Replace the values with those from your own environment!)
kubeadm join api-server:8443 --token oldboy.0123456789abcdef --discovery-token-ca-cert-hash sha256:f1c52a63da2c3ed3b494f05b0cd2a19301ac6c81cdb62cc99668f3c991080a61 --control-plane  --certificate-key 58aea750d020038fc8e682af195803d4c8f842c7eaaa49ef77fa354d1fc77db3
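
If the bootstrap token has expired (the ttl above is 24h), a fresh join command and certificate key can be generated on an existing master:
kubeadm token create --print-join-command        # prints a new worker join command
kubeadm init phase upload-certs --upload-certs   # prints a new key for the --certificate-key option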


	3. Configure the kubeconfig file on all master nodes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


	4. View all nodes.
[root@k8s-master01 ~]# kubectl get nodes
NAME    STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   14m     v1.28.2
k8s77   Ready    control-plane   4m52s   v1.28.2
k8s88   Ready    control-plane   96s     v1.28.2
[root@k8s-master01 ~]# 


[root@k8s77 ~]# kubectl get nodes
NAME    STATUS   ROLES           AGE    VERSION
k8s-master01   Ready    control-plane   14m    v1.28.2
k8s77   Ready    control-plane   5m4s   v1.28.2
k8s88   Ready    control-plane   108s   v1.28.2
[root@k8s77 ~]# 

	
[root@k8s88 ~]# kubectl get nodes
NAME    STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   14m     v1.28.2
k8s77   Ready    control-plane   5m15s   v1.28.2
k8s88   Ready    control-plane   119s    v1.28.2
[root@k8s88 ~]# 


	
	5. If there are additional worker nodes, run the following command on each of them to join the cluster
kubeadm join api-server:8443 --token oldboy.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:f1c52a63da2c3ed3b494f05b0cd2a19301ac6c81cdb62cc99668f3c991080a61 
	
	
	
VI. Deploy the network plugin
	1. Download the network plugin manifest
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml


	2. Modify the network plugin manifest (the Network field must match the podSubnet configured above)
[root@k8s-master01 flannel]# vim kube-flannel.yml 

apiVersion: v1
data:
  ...
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "Directrouting": true
      }
    }


	3. Apply the flannel manifest
[root@k8s-master01 flannel]# kubectl apply -f kube-flannel.yml 
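
After applying the manifest, wait for the flannel pods to become Ready and for the nodes to turn Ready (recent flannel manifests run the pods in the kube-flannel namespace; older ones used kube-system):
kubectl get pods -n kube-flannel -o wide
kubectl get nodes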


	4. Configure kubectl bash auto-completion
yum -y install bash-completion

kubectl completion bash > ~/.kube/completion.bash.inc
echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile
source $HOME/.bash_profile


	5. Deploy a test Deployment
[root@k8s-master01 ~]# cat 01-deploy-matchLabels-nginx.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: nginx
        # image: harbor.supershy.com/supershy-web/nginx:1.20.1-alpine
        image: nginx:1.20.1-alpine
        ports:
        - containerPort: 80
[root@k8s-master01 ~]# 
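
Apply the manifest and check that the pods are spread across the nodes and receive addresses from the podSubnet:
kubectl apply -f 01-deploy-matchLabels-nginx.yaml
kubectl get deploy,pods -o wide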





Tip: when deploying K8S 1.28.2, I ran into the flannel.1 and cni0 interfaces being on different subnets.
	The container error message was:
		 "cni0" already has an IP address different from 10.88.0.1/16


	Neither of the two approaches below solved it for me.
The following can be tried, but it seems the cni0 interface is not recreated after being deleted:
ifconfig cni0 down
ifconfig flannel.1 down
ip link delete cni0
ip link delete flannel.1 



The following approach does not solve it either:
ip link delete cni0
ip link add cni0 type bridge
ip addr add 10.100.0.1/24 dev cni0
ifconfig cni0 up


ip link delete cni0
ip link add cni0 type bridge
ip addr add 10.100.1.1/24 dev cni0
ifconfig cni0 up

ip link delete cni0
ip link add cni0 type bridge
ip addr add 10.100.2.1/24 dev cni0
ifconfig cni0 up 



In summary, two options are worth trying: switch to the calico network plugin, and if that still does not work, fall back to downgrading the version.


If you did not run into the problem described above, there is no need to continue with the steps below, unless you really want to try out the calico CNI plugin.


	
	
Note: if your K8S version is 1.23.17, the latest calico version that supports it is "3.25".
https://docs.tigera.io/calico/3.25/getting-started/kubernetes/requirements


However, my K8S cluster is 1.28, so the version I used is "3.26"; the specific steps are as follows:

Reference:
	https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart

5) Install the calico network plugin in the k8s cluster

Install the calico plugin in the k8s cluster:
	1. Install the tigera-operator component
[root@k8s-master01 calico]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
[root@k8s-master01 calico]# kubectl create -f tigera-operator.yaml


	2. Modify the custom Pod IP address pool
[root@k8s-master01 calico]# wget  https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml


	3. Create the calico resources
[root@k8s-master01 calico]# grep cidr custom-resources.yaml 
      cidr: 10.100.0.0/16
[root@k8s-master01 calico]# 
[root@k8s-master01 calico]# kubectl create -f custom-resources.yaml
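
The operator then rolls out the calico components; progress can be followed until all pods in the calico-system namespace are Running and the nodes report Ready:
watch kubectl get pods -n calico-system
kubectl get nodes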
	
	
	4. Create a resource manifest and verify the deployment
[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE    NOMINATED NODE   READINESS GATES
deploy-nginx-7948b47f4f-2z5t5   1/1     Running   0          5s    10.100.59.4      k8s-master01   <none>           <none>
deploy-nginx-7948b47f4f-nmgcq   1/1     Running   0          5s    10.100.16.4      k8s88   <none>           <none>
deploy-nginx-7948b47f4f-pchjq   1/1     Running   0          5s    10.100.112.196   k8s77   <none>           <none>
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# curl -I 10.100.112.196 
HTTP/1.1 200 OK
Server: nginx/1.20.1
Date: Thu, 09 Nov 2023 07:26:14 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 May 2021 13:41:16 GMT
Connection: keep-alive
ETag: "60acfe7c-264"
Accept-Ranges: bytes

[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# curl -I 10.100.16.4
HTTP/1.1 200 OK
Server: nginx/1.20.1
Date: Thu, 09 Nov 2023 07:26:18 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 May 2021 13:41:16 GMT
Connection: keep-alive
ETag: "60acfe7c-264"
Accept-Ranges: bytes

[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# curl -I 10.100.59.4
HTTP/1.1 200 OK
Server: nginx/1.20.1
Date: Thu, 09 Nov 2023 07:26:24 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 May 2021 13:41:16 GMT
Connection: keep-alive
ETag: "60acfe7c-264"
Accept-Ranges: bytes

[root@k8s-master01 ~]# 

9. K8s version upgrade

	# K8s version upgrade procedure
	1. Check whether workloads are running on the node, then evict (drain) its pods; see the sketch after the commands below.
	2. delete node: remove the node from the cluster.
	3. Replace the k8s components under /usr/local/bin; also check whether any startup parameters need to change (systemctl cat kubelet).
	# replacement commands
wget https://dl.k8s.io/v1.29.8/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
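
A minimal sketch of steps 1 and 2 above (the node name k8s-node01 is only a placeholder; substitute the node being upgraded):
kubectl drain k8s-node01 --ignore-daemonsets --delete-emptydir-data   # evict the node's pods
kubectl delete node k8s-node01                                        # remove the node from the cluster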