Offline Single-Node Deployment of Kubernetes 1.29 on AlmaLinux 8.9: Practical Notes

Part 1: Preparation

1. Operating system installation

Perform a minimal installation of AlmaLinux 8.9. If the wrong time zone was selected during installation, set it to China Standard Time (UTC+8) with: timedatectl set-timezone Asia/Shanghai
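To confirm the change took effect, the timedatectl status output can be checked (a minimal verification sketch):

# Show the currently configured time zone
timedatectl | grep "Time zone"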

2. Prepare the offline installation packages

1) Download the dependencies needed before installing Docker and containerd

yum install -y yum-utils device-mapper-persistent-data lvm2 --downloadonly --downloaddir=./docker-before

2) Download the Docker packages

curl https://download.docker.com/linux/centos/docker-ce.repo > /etc/yum.repos.d/docker-ce.repo

yum makecache

yum clean all && yum makecache

yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin containerd --downloadonly --downloaddir=./docker

3) Download the Kubernetes packages (Alibaba Cloud Kubernetes mirror: https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.73281b11wcUG5N)

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/

enabled=1

gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/repodata/repomd.xml.key

EOF

 

yum makecache

yum clean all && yum makecache

 

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes --nogpgcheck --downloadonly --downloaddir=./k8s
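All three download directories (./docker-before, ./docker, ./k8s) have to be moved to the offline host later, so it can be convenient to bundle them now. A minimal sketch; the archive name is arbitrary:

# Bundle the downloaded RPM directories for transfer to the offline host
tar czf k8s-offline-rpms.tar.gz docker-before docker k8s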

 

4) Prepare the Docker images needed to initialize k8s, including the pause image required by containerd

# On an Internet-connected machine with k8s installed, run the following command to list the Docker images required for k8s initialization

kubeadm config images list

# The output is as follows

registry.k8s.io/kube-apiserver:v1.29.2

registry.k8s.io/kube-controller-manager:v1.29.2

registry.k8s.io/kube-scheduler:v1.29.2

registry.k8s.io/kube-proxy:v1.29.2

registry.k8s.io/coredns/coredns:v1.11.1

registry.k8s.io/pause:3.9

registry.k8s.io/etcd:3.5.10-0

 

# Pull the images with Docker; the images required to initialize k8s 1.29.2 are listed below

Note: Docker may not be running yet, in which case pulling fails with: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Fix:

systemctl start docker

systemctl enable docker

 

# Networks inside mainland China cannot pull directly from registry.k8s.io, so here we use the Alibaba Cloud registry to pull the images listed above

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.2

docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.2

docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.2

docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.29.2

docker pull registry.aliyuncs.com/google_containers/pause:3.9

# required by containerd

docker pull registry.aliyuncs.com/google_containers/etcd:3.5.10-0

docker pull registry.aliyuncs.com/google_containers/coredns:v1.11.1

 

docker images

# Re-tag the registry.aliyuncs.com images as registry.k8s.io

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.2            registry.k8s.io/kube-apiserver:v1.29.2

docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.2            registry.k8s.io/kube-scheduler:v1.29.2

docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.2   registry.k8s.io/kube-controller-manager:v1.29.2

docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.29.2                registry.k8s.io/kube-proxy:v1.29.2

docker tag registry.aliyuncs.com/google_containers/pause:3.9                         registry.k8s.io/pause:3.9

# required by containerd

docker tag registry.aliyuncs.com/google_containers/etcd:3.5.10-0                      registry.k8s.io/etcd:3.5.10-0

# note that the target name here is coredns/coredns:v1.11.1

docker tag registry.aliyuncs.com/google_containers/coredns:v1.11.1                    registry.k8s.io/coredns/coredns:v1.11.1

 

 

# Save the images to disk

docker save -o kube-apiserver-v1.29.2.tar            registry.k8s.io/kube-apiserver:v1.29.2

docker save -o kube-controller-manager-v1.29.2.tar   registry.k8s.io/kube-controller-manager:v1.29.2

docker save -o kube-scheduler-v1.29.2.tar            registry.k8s.io/kube-scheduler:v1.29.2

docker save -o kube-proxy-v1.29.2.tar                registry.k8s.io/kube-proxy:v1.29.2

docker save -o pause-3.9.tar                         registry.k8s.io/pause:3.9

# required by containerd

docker save -o etcd-3.5.10-0.tar                      registry.k8s.io/etcd:3.5.10-0

docker save -o coredns-v1.11.1.tar                    registry.k8s.io/coredns/coredns:v1.11.1

 

# Copy the saved images to the offline system where k8s is installed and waiting to be initialized
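Any file-transfer method works. A minimal sketch using scp, assuming the offline host is reachable over SSH at 192.168.80.60 (an example address, adjust to your environment) and collecting the tarballs into the init-images directory used in the import step later:

# Collect the saved image tarballs and copy them to the offline host
mkdir -p init-images && mv *.tar init-images/
scp -r init-images root@192.168.80.60:/root/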

 

5) Prepare the Docker images for the Calico network plugin

 

This guide uses Calico v3.27.2; the calico.yaml manifest can be downloaded from the Calico project (a download sketch follows below).

 

To use a different Calico version, check the calico/node, calico/cni, and calico/kube-controllers image versions referenced in calico.yaml and download the corresponding Docker images.

 

Different Calico releases support different Kubernetes versions; check the Calico/Kubernetes compatibility matrix before choosing a version.
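A hedged sketch for fetching the manifest and checking which image versions it references; the raw.githubusercontent.com URL assumes Calico's usual release layout and should be verified for the version you choose:

# Download the calico.yaml manifest (URL layout is an assumption; verify for your version)
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml
# List the container images referenced by the manifest
grep "image:" calico.yaml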

docker pull docker.io/calico/node:v3.27.2

docker pull docker.io/calico/cni:v3.27.2

docker pull docker.io/calico/kube-controllers:v3.27.2

 

docker images

 

docker save -o node-v3.27.2.tar                    docker.io/calico/node:v3.27.2

docker save -o cni-v3.27.2.tar                     docker.io/calico/cni:v3.27.2

docker save -o kube-controllers-v3.27.2.tar        docker.io/calico/kube-controllers:v3.27.2

 

Part 2: Installation

1. Install the dependencies needed by Docker and containerd

cd ./docker-before

yum -y localinstall *.rpm

cd ..

2. Install Docker and containerd

cd ./docker

yum -y localinstall *.rpm

# yum -y install *.rpm

cd ..

 

# Starting docker also starts containerd

# sudo systemctl status containerd.service --no-pager

systemctl stop containerd.service

 

cp /etc/containerd/config.toml /etc/containerd/config.toml.bak

containerd config default > $HOME/config.toml

cp $HOME/config.toml /etc/containerd/config.toml

 

# Because this is an offline install and the Docker images were prepared in advance, the only change needed here is to set the pause image version in /etc/containerd/config.toml to 3.9

sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.k8s.io/pause:3.9"#g' /etc/containerd/config.toml

# Make sure that 'cri' does not appear in disabled_plugins in /etc/containerd/config.toml

sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml
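A quick read-only check that the two edits landed and that 'cri' is not disabled:

# Verify the sandbox image, the cgroup driver, and the disabled_plugins list
grep "sandbox_image" /etc/containerd/config.toml
grep "SystemdCgroup" /etc/containerd/config.toml
grep "disabled_plugins" /etc/containerd/config.toml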

 

systemctl enable --now containerd.service

systemctl start docker.service

systemctl enable docker.service

systemctl enable docker.socket

systemctl list-unit-files | grep docker

 

tee /etc/docker/daemon.json <<-'EOF'

{

  "registry-mirrors": ["https://hnkfbj7x.mirror.aliyuncs.com"],

    "exec-opts": ["native.cgroupdriver=systemd"]

}

EOF

 

systemctl daemon-reload

systemctl restart docker

docker info

 

systemctl status docker.service --no-pager

 

systemctl status containerd.service --no-pager

3. Install Kubernetes

# Set the required sysctl parameters; they persist across reboots

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables  = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.ipv4.ip_forward                 = 1

EOF

 

# Apply the sysctl parameters without rebooting

sudo sysctl --system

 

cd k8s

yum -y localinstall *.rpm

cd ..

 

systemctl daemon-reload

systemctl restart kubelet

systemctl enable kubelet

 

4. Import the Docker images required for k8s initialization

cd init-images

 

# Note that the namespace is set to k8s.io here

ctr -n=k8s.io image import kube-apiserver-v1.29.2.tar

ctr -n=k8s.io image import kube-controller-manager-v1.29.2.tar

ctr -n=k8s.io image import kube-scheduler-v1.29.2.tar

ctr -n=k8s.io image import kube-proxy-v1.29.2.tar

ctr -n=k8s.io image import pause-3.9.tar

ctr -n=k8s.io image import etcd-3.5.10-0.tar

ctr -n=k8s.io image import coredns-v1.11.1.tar

 

ctr -n=k8s.io images list

ctr i list

 

cd ..

 

5. Point the hostname to the local IP

Set the hostname permanently:
echo 'your-hostname' > /etc/hostname

Edit the hosts file:
vim /etc/hosts

On the control-plane node, add an entry mapping the IP to the hostname (a concrete example follows below):
<IP of this machine>          <hostname of this machine>
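For example, with a hypothetical hostname k8s-master and the node IP 192.168.80.60 used elsewhere in this guide:

# Set the hostname permanently (equivalent to writing /etc/hostname)
hostnamectl set-hostname k8s-master
# Map the local IP to the hostname in /etc/hosts
echo "192.168.80.60 k8s-master" >> /etc/hosts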

6. Disable the firewall, or open the required ports

# Disable the firewall

systemctl stop firewalld.service

systemctl disable firewalld.service

# Alternatively, keep the firewall enabled and open the required ports

# Control-plane node

firewall-cmd --zone=public --add-port=6443/tcp --permanent # Kubernetes API server        used by: all

firewall-cmd --zone=public --add-port=2379/tcp --permanent # etcd server client API        used by: kube-apiserver, etcd

firewall-cmd --zone=public --add-port=2380/tcp --permanent # etcd server client API        used by: kube-apiserver, etcd

firewall-cmd --zone=public --add-port=10250/tcp --permanent # Kubelet API        used by: self, control plane

firewall-cmd --zone=public --add-port=10259/tcp --permanent # kube-scheduler        used by: self

firewall-cmd --zone=public --add-port=10257/tcp --permanent # kube-controller-manager        used by: self

firewall-cmd --zone=trusted --add-source=192.168.80.60 --permanent # trust the IPs of the cluster nodes

firewall-cmd --zone=trusted --add-source=192.168.80.16 --permanent # trust the IPs of the cluster nodes

firewall-cmd --add-masquerade --permanent # enable masquerading (port forwarding)

firewall-cmd --reload

firewall-cmd --list-all

firewall-cmd --list-all --zone=trusted

 

# Worker nodes

firewall-cmd --zone=public --add-port=10250/tcp --permanent # Kubelet API        used by: self, control plane

firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent # NodePort Services        used by: all

firewall-cmd --zone=trusted --add-source=192.168.80.60 --permanent # trust the IPs of the cluster nodes

firewall-cmd --zone=trusted --add-source=192.168.80.16 --permanent # trust the IPs of the cluster nodes

firewall-cmd --add-masquerade --permanent # enable masquerading (port forwarding)

firewall-cmd --reload

firewall-cmd --list-all

firewall-cmd --list-all --zone=trusted

 

7. Disable swap

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab

 

8. Initialize Kubernetes

# Because the imported images were re-tagged with their original names, there is no need to add --image-repository=registry.aliyuncs.com/google_containers here

# If the virtual machine has only one network interface, the IP can be omitted

kubeadm init

# Alternatively, initialize with the advertise address specified

# kubeadm init  --apiserver-advertise-address=192.168.XX.XX

 

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

kubectl cluster-info

 

# If initialization fails, it can be reset with: kubeadm reset

 

# On success, output similar to the following appears:

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Alternatively, if you are the root user, you can run:

 

  export KUBECONFIG=/etc/kubernetes/admin.conf

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join 192.168.31.200:6443 --token s4hn80.3ua31ao9bj9sbdid \

--discovery-token-ca-cert-hash sha256:191634ff43cbfbc9b623009293b1acfc8b14df84aaac16733bf5900cef742aa5

 

9. Initialize the network: download the calico.yaml file and copy it to the machine

cd calico

ctr -n=k8s.io image import node-v3.27.2.tar

ctr -n=k8s.io image import cni-v3.27.2.tar

ctr -n=k8s.io image import kube-controllers-v3.27.2.tar

cd ..
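A quick way to confirm the Calico images are now visible to containerd in the k8s.io namespace:

# List the imported Calico images
ctr -n=k8s.io images list | grep calico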

 

# Add a DNS entry

vim /etc/resolv.conf

 

# If there is no DNS server, any placeholder address will do

nameserver 192.168.10.1

 

kubectl apply -f calico.yaml
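The Calico pods take a short while to start; they can be watched until calico-node and calico-kube-controllers reach Running (press Ctrl-C to stop watching):

# Watch the kube-system pods come up
kubectl get pods -n kube-system -w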

 

10. Check the cluster

kubectl get pods --all-namespaces -o wide

 

kubectl get nodes -o wide

 
