Deploying Kubernetes 1.23 with kubeadm (for learning)

Note: this walkthrough uses VMware Workstation Pro 17 virtual machines.
It builds a minimal single-master learning environment: one master node and two worker nodes.

1. Prepare the environment on all K8s nodes

Use Xshell's "send to all sessions" feature so the commands run on every node, including the Harbor registry host.

Prepare the virtual machine operating system environment.
Reference:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

2. Disable the swap partition

Disable it temporarily:

swapoff -a && sysctl -w vm.swappiness=0

Disable it persistently via the config file:

sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  
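
To verify that swap is really off, free should report 0 for swap and no active swap entries should remain in fstab:

free -h | grep -i swap
grep -v '^#' /etc/fstab | grep swap || echo "no active swap entries"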

3. Make sure the MAC address and product_uuid are unique on every node

ifconfig  eth0  | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid 

Note:

Hardware devices usually have unique addresses, but some virtual machines may end up with duplicated ones.
Kubernetes uses these values to uniquely identify the nodes in a cluster. If they are not unique on every node, the installation may fail.
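
If you have SSH access to the nodes, a small sketch to compare the values side by side (the IPs and the eth0 interface name are from this setup; adjust them to yours):

for h in 192.168.52.231 192.168.52.232 192.168.52.233; do
  echo "== $h =="
  ssh $h "cat /sys/class/dmi/id/product_uuid; ip link show eth0 | awk '/ether/{print \$2}'"
done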

4. Check that the nodes can reach each other over the network

You can test this with the ping command.
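
For example, from any one node (the IPs come from the /etc/hosts entries in step 10; substitute your own):

for h in 192.168.52.231 192.168.52.232 192.168.52.233 192.168.52.250; do
  ping -c 2 -W 1 $h > /dev/null && echo "$h reachable" || echo "$h UNREACHABLE"
done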

5. Allow iptables to see bridged traffic

cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


sysctl --system
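
To confirm everything took effect (modules-load.d only loads the module at boot, so you may need to load it manually the first time):

modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables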

6. Check whether the required ports are already in use

ss -ntl
Reference: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/
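
For example, to look only at the ports kubeadm cares about (port numbers taken from the reference above):

ss -ntlp | grep -E ':(6443|2379|2380|10250|10257|10259)' || echo "required ports are free"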

7. Check the Docker environment

Install docker and docker-compose; you can also follow the official Docker documentation.
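
One extra check that often matters with kubeadm 1.23: the kubelet defaults to the systemd cgroup driver, while Docker defaults to cgroupfs. A hedged sketch of aligning them (skip it if docker info already reports systemd, and merge rather than overwrite if you already have a daemon.json):

docker info 2>/dev/null | grep -i 'cgroup driver'
# If it prints "cgroupfs", switch Docker to systemd (this example overwrites /etc/docker/daemon.json):
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker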

8. Disable the firewall (internal/lab network only)

systemctl disable --now firewalld

9. Disable SELinux

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config 
grep ^SELINUX= /etc/selinux/config
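
The sed above only takes effect after a reboot; to stop enforcing right away you can also run:

setenforce 0     # permissive for the current boot
getenforce       # should print Permissive (or Disabled after a reboot)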

10. Configure /etc/hosts name resolution

cat >> /etc/hosts <<'EOF'
192.168.52.231 master231
192.168.52.232 worker232
192.168.52.233 worker233
192.168.52.250 harbor.lzh.com
EOF
cat /etc/hosts
# Note: configure the same name resolution on your Windows host as well
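
A quick check that the names resolve as expected:

getent hosts master231 worker232 worker233 harbor.lzh.com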

11. Create the custom certificate directory on all nodes

mkdir -pv /etc/docker/certs.d/harbor.lzh.com

12. Double-check that the required ports are free (same check as step 6)

https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/

13. Install Harbor

13.1. Download the Harbor package

Other versions are available on GitHub; here I reuse a copy downloaded earlier.

13.2. Extract the Harbor package

[root@harbor250 ~]# tar xf harbor.tar.gz -C /test-harbor/softwares/

13.3. Install Harbor

[root@harbor250 ~]# cd /test-harbor/softwares/harbor/
[root@harbor250 harbor]# 
[root@harbor250 harbor]# ./install.sh 

13.4. Copy the client certificates to every K8s node

[root@harbor250 harbor]# scp certs/custom/client/* master231:/etc/docker/certs.d/harbor.lzh.com/  
[root@harbor250 harbor]# 
[root@harbor250 harbor]# scp certs/custom/client/* worker232:/etc/docker/certs.d/harbor.lzh.com/
[root@harbor250 harbor]# 
[root@harbor250 harbor]# scp certs/custom/client/* worker233:/etc/docker/certs.d/harbor.lzh.com/

13.5. Pick any K8s node and test that Harbor can be reached

[root@master231 ~]# docker login -u admin -p 1 harbor.lzh.com
.....

Login Succeeded
[root@master231 ~]# 
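
Passing the password on the command line leaves it in the shell history; a slightly safer variant of the same login is:

[root@master231 ~]# echo '1' | docker login -u admin --password-stdin harbor.lzh.com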

14. Install kubeadm, kubelet, and kubectl on all nodes

14.1. Configure the package source (Aliyun mirror)

cat  > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
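
Refresh the metadata and make sure the repo is visible:

yum makecache
yum repolist | grep -i kubernetes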

14.2. Check the available kubeadm versions (when installing K8s, keep all component versions identical!)

yum -y list kubeadm --showduplicates | sort -r
yum list kubelet.x86_64 --showduplicates | sort -r
yum list kubectl.x86_64 --showduplicates | sort -r

14.3. Install the kubeadm, kubelet, and kubectl packages (on all nodes)

yum -y install kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0
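
Confirm that all three components report the same 1.23.17 version:

kubeadm version -o short
kubelet --version
kubectl version --client --short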

14.4. Start the kubelet service

(It is normal for the service to fail at this point: it keeps restarting because its config file does not exist yet, and it recovers once the cluster is initialized. This step can be skipped, but enabling it at boot is recommended.)
systemctl enable --now kubelet
systemctl status kubelet

Reference:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/

15. Initialize the master node

15.1. Initialize the master node with kubeadm

[root@master231 ~]# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.52.231 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16

(registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images can be used as an alternative --image-repository.)

--kubernetes-version=v1.23.17: install Kubernetes v1.23.17.
--image-repository registry.aliyuncs.com/google_containers: pull the Google container images from Aliyun's mirror.
--pod-network-cidr=10.100.0.0/16: use 10.100.0.0/16 as the Pod network range.
--service-cidr=10.200.0.0/16: use 10.200.0.0/16 as the Service address range.
--service-dns-domain=lzh.com: set the Service DNS domain to lzh.com (optional, not used above; the default is "cluster.local").
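
Optionally, you can pre-pull the control-plane images before running kubeadm init (same repository and version as above); this speeds up init and surfaces registry problems early:

[root@master231 ~]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.17
[root@master231 ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.17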

15.2. Copy the admin kubeconfig used to manage the Kubernetes cluster

[root@master231 ~]# mkdir -p $HOME/.kube
[root@master231 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master231 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
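
Verify that kubectl can talk to the cluster with the new kubeconfig:

[root@master231 ~]# kubectl cluster-info
[root@master231 ~]# kubectl get nodes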

15.3. Check the control-plane (master) components

[root@master231 ~]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
[root@master231 ~]# 
[root@master231 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
[root@master231 ~]# 

16. Join all worker nodes to the K8s cluster

16.1. Join the cluster (note that your TOKEN will differ)

[root@worker232 ~]# docker load -i worker-node.tar.gz    # Optional: preload the worker images; with a good network you can skip this and let the node pull them directly.
[root@worker232 ~]# kubeadm join 192.168.52.231:6443 --token uzg2le.ll3ay03eyfdupiw9 \
> --discovery-token-ca-cert-hash sha256:3e5b1a7125e6a1139478f702a9762f1857fb70bcdb52e335c46c3081ce7dc2b9

Run the same join command on worker233. The token and ca-cert hash are printed at the end of kubeadm init and are unique to your cluster.
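
The default token expires after 24 hours; if you need to join another node later, you can print a fresh join command on the master:

[root@master231 ~]# kubeadm token create --print-join-command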

16.2. Check the cluster nodes from the master (NotReady is expected until the CNI plugin is installed in step 18)

[root@master231 ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
master231   NotReady   control-plane,master   22m     v1.23.17
worker232   NotReady   <none>                 6m14s   v1.23.17
worker233   NotReady   <none>                 6m10s   v1.23.17
[root@master231 ~]# 
[root@master231 ~]# 
[root@master231 ~]# kubectl get no
NAME        STATUS     ROLES                  AGE     VERSION
master231   NotReady   control-plane,master   22m     v1.23.17
worker232   NotReady   <none>                 6m15s   v1.23.17
worker233   NotReady   <none>                 6m11s   v1.23.17
[root@master231 ~]# 

17. Configure kubectl auto-completion

[root@master231 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc 
[root@master231 ~]# 
[root@master231 ~]# kubectl 
alpha          auth           cordon         diff           get            patch          run            version
annotate       autoscale      cp             drain          help           plugin         scale          wait
api-resources  certificate    create         edit           kustomize      port-forward   set            
api-versions   cluster-info   debug          exec           label          proxy          taint          
apply          completion     delete         explain        logs           replace        top            
attach         config         describe       expose         options        rollout        uncordon       
[root@master231 ~]# kubectl 

18. Install the CNI plugin

18.1. Download the plugin manifest

[root@master231 ~]# mkdir -p /manifests/cni
[root@master231 ~]# 
[root@master231 ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml -O /manifests/cni/kube-flannel.yml

18.2. Edit the manifest

[root@master231 ~]# vim /manifests/cni/kube-flannel.yml
...
Change
        "Network": "10.244.0.0/16",
to:
        "Network": "10.100.0.0/16",

The value must match your --pod-network-cidr; adjust it to your own environment, it does not need to be the same as mine.
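
If you prefer a non-interactive edit, a one-line sketch of the same change (assuming the file still contains the default 10.244.0.0/16):

[root@master231 ~]# sed -i 's#"Network": "10.244.0.0/16"#"Network": "10.100.0.0/16"#' /manifests/cni/kube-flannel.yml
[root@master231 ~]# grep '"Network"' /manifests/cni/kube-flannel.yml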

18.3. Install the flannel plugin

[root@master231 ~]# kubectl apply -f /manifests/cni/kube-flannel.yml 

18.4. Verify that the network plugin was deployed successfully

[root@master231 ~]# kubectl -n kube-flannel get pods
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-btrqw   1/1     Running   0          5m14s
kube-flannel-ds-krq6g   1/1     Running   0          5m14s
kube-flannel-ds-mh2q7   1/1     Running   0          5m14s
[root@master231 ~]# 
[root@master231 ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   91m   v1.23.17
worker232   Ready    <none>                 75m   v1.23.17
worker233   Ready    <none>                 74m   v1.23.17
[root@master231 ~]# 
[root@master231 ~]# kubectl -n kube-flannel get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
kube-flannel-ds-btrqw   1/1     Running   0          18m   192.168.52.231   master231   <none>           <none>
kube-flannel-ds-krq6g   1/1     Running   0          18m   192.168.52.232   worker232   <none>           <none>
kube-flannel-ds-mh2q7   1/1     Running   0          18m   192.168.52.233   worker233   <none>           <none>
[root@master231 ~]# 

18.5. Push a test image to Harbor

[root@master231 ~]# docker login -u admin -p 1 harbor.lzh.com
[root@master231 ~]# 
[root@master231 ~]# docker tag alpine harbor.lzh.com/test-linux/alpine
[root@master231 ~]# 
[root@master231 ~]# docker push harbor.lzh.com/test-linux/alpine
Using default tag: latest
The push refers to repository [harbor.lzh.com/test-linux/alpine]
8d3ac3489996: Pushed 
latest: digest: sha256:e7d88de73db3d3fd9b2d63aa7f447a10fd0220b7cbf39803c803f2af9ba256b3 size: 528
[root@master231 ~]# 

18.6. Run test Pods

[root@master231 ~]# mkdir /manifests/pod
[root@master231 ~]# 

[root@master231 ~]# cat /manifests/pod/01-flannel-test.yaml
# API version of this resource
apiVersion: v1
# Resource type
kind: Pod
# Metadata for the resource
metadata:
  # Name of the Pod
  name: pod-c1
# Desired state of the resource
spec:
  # Run the Pod on the worker232 node
  nodeName: worker232
  # Containers to run inside the Pod
  containers:
    # Container name
  - name: c1
    # Image name
    image: harbor.lzh.com/test-linux/alpine:latest
    # Like Dockerfile's ENTRYPOINT: the command the container runs
    command: ["tail","-f","/etc/hosts"]

---

apiVersion: v1
kind: Pod
metadata:
  name: pod-c2
spec:
  nodeName: worker233
  containers:
  - name: c2
    image: harbor.lzh.com/test-linux/alpine:latest
    command: ["sleep","3600"]
[root@master231 ~]# 


[root@master231 ~]# kubectl apply -f /manifests/pod/01-flannel-test.yaml
pod/pod-c1 created
pod/pod-c2 created
[root@master231 ~]# 


[root@master231 ~]# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
pod-c1   1/1     Running   0          8s    10.100.1.2   worker232   <none>           <none>
pod-c2   1/1     Running   0          8s    10.100.2.2   worker233   <none>           <none>
[root@master231 ~]# 
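
As a final check that the flannel overlay really carries traffic across nodes, you can ping one Pod from the other, using the Pod IPs printed by kubectl get pods -o wide (10.100.2.2 here; substitute your own):

[root@master231 ~]# kubectl exec pod-c1 -- ping -c 3 10.100.2.2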