Installing Docker-based Kubernetes 1.30.1 (the current latest release) on Ubuntu 22.04
I have written deployment guides for other distributions and versions before, but the installation procedure changed slightly after v1.28, so here is an updated walkthrough.
Base environment
Hostname | Specs | Role | OS | IP | Installed components |
---|---|---|---|---|---|
master | 4 cores / 4 GB | master | Ubuntu 22.04 | 192.168.140.75 | apiserver, controller-manager, scheduler, kubelet, etcd, kube-proxy, container runtime, calico |
node1 | 4 cores / 4 GB | worker | Ubuntu 22.04 | 192.168.140.76 | kube-proxy, calico, coredns, container runtime, kubelet |
System initialization
Configure local name resolution
Run on every host in the cluster
cat >> /etc/hosts << EOF
192.168.140.75 master
192.168.140.76 node1
EOF
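A quick way to confirm the entries are picked up (getent reads /etc/hosts):
getent hosts master node1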
Configure SSH trust between hosts
Run on every host in the cluster
ssh-keygen
ssh-copy-id master
ssh-copy-id node1
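To verify that passwordless login works, a remote command should run without a password prompt, for example from master:
ssh node1 hostname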
Disable the system swap partition
Run on every host in the cluster
sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab && grep swap /etc/fstab && swapoff -a && free -h
Set kernel parameters
Run on every host in the cluster
cat >> /etc/sysctl.conf <<EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
cat >> /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
# Load the module
modprobe br_netfilter
# Apply the settings
sysctl -p
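To verify the settings took effect, the module should be listed and both sysctls should read 1:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward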
Install Docker
Run on every host in the cluster
apt update
apt install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt install docker-ce docker-ce-cli containerd.io docker-compose -y
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",
"https://hub-mirror.c.163.com",
"https://reg-mirror.qiniu.com",
"https://registry.docker-cn.com"
],
"exec-opts": ["native.cgroupdriver=systemd"],
"data-root": "/data/docker",
"log-driver": "json-file",
"log-opts": {
"max-size": "20m",
"max-file": "5"
}
}
EOF
systemctl restart docker.service
systemctl enable docker.service
docker info
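In the docker info output, confirm the cgroup driver and data root match daemon.json; a quick filter:
docker info | grep -E 'Cgroup Driver|Docker Root Dir'
# Expected: Cgroup Driver: systemd, Docker Root Dir: /data/docker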
Install the latest kubeadm, kubelet, and kubectl
Run on every host in the cluster
Configure the package repository
The new Kubernetes package repositories are split by minor version; this example configures v1.30. For another version, replace the version string wherever it appears.
For example, to install 1.29, replace v1.30 with v1.29 in the configuration below.
apt-get update && apt-get install -y apt-transport-https
# /etc/apt/keyrings exists by default on Ubuntu 22.04; on older releases create it first
mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key |
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" |
tee /etc/apt/sources.list.d/kubernetes.list
Install kubelet, kubeadm, and kubectl
apt-get update
apt-get install -y kubelet kubeadm kubectl
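Optionally, pin the three packages so a routine apt upgrade does not move them to a new version (the upstream docs recommend this):
apt-mark hold kubelet kubeadm kubectl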
Enable kubelet at boot
kubelet cannot be started yet because the cluster has not been set up; for now, just enable it to start on boot.
systemctl enable kubelet
cri-dockerd
Run on every host in the cluster
Kubernetes removed dockershim support in v1.24, and Docker Engine does not natively implement the CRI spec, so the two can no longer be integrated directly. To bridge the gap, Mirantis and Docker jointly created the cri-dockerd project: a shim that gives Docker Engine a CRI-compliant interface, letting Kubernetes drive Docker through the CRI.
Project page: https://github.com/Mirantis/cri-dockerd
The cri-dockerd project publishes prebuilt packages (.deb and .rpm) on its releases page; the .deb filenames carry an Ubuntu codename, one per major release:
- 22.04:jammy
- 20.04:focal
- 18.04:bionic
- 16.04:xenial
- 14.04:trusty
We are on 22.04, so we pick jammy.
Install cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd_0.3.14.3-0.ubuntu-jammy_amd64.deb
dpkg -i ./cri-dockerd_0.3.14.3-0.ubuntu-jammy_amd64.deb
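Confirm the binary is installed:
cri-dockerd --version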
Configure cri-dockerd
From within China, the cri-dockerd service cannot pull the pause image and related images from k8s.gcr.io, so it fails to start pods; it must be pointed at a domestic mirror.
Edit the systemd unit file to set the domestic mirror:
sed -ri 's@^(.*fd://).*$@\1 --pod-infra-container-image registry.aliyuncs.com/google_containers/pause@' /usr/lib/systemd/system/cri-docker.service
# Reload systemd, then restart and enable the service
systemctl daemon-reload && systemctl restart cri-docker && systemctl enable cri-docker
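After the edit, the ExecStart line in /usr/lib/systemd/system/cri-docker.service should look roughly like this (exact flags vary slightly between cri-dockerd releases):
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image registry.aliyuncs.com/google_containers/pause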
Initialize the cluster on the master node
Run only on the master host
Generate the default init configuration
kubeadm config print init-defaults > kubeadm.yaml
Edit the configuration file
vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Change to this master's IP
  advertiseAddress: 192.168.140.75
  bindPort: 6443
nodeRegistration:
  # Change to the cri-dockerd socket
  criSocket: unix:///run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  # Change to this master's hostname
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    # Change etcd's data directory
    dataDir: /data/etcd
# Change to a mirror registry for faster pulls
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# Change to the exact version number being installed
kubernetesVersion: 1.30.1
# For multiple master nodes, add this entry pointing at the load-balancer address; here it simply points at the master node
controlPlaneEndpoint: "master:6443"
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # Add the pod network CIDR
  podSubnet: 10.244.0.0/16
scheduler: {}
# Append the two sections below at the end
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
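Because kube-proxy is set to IPVS mode above, the IPVS kernel modules and userspace tools should be available on every host. A minimal sketch (standard module names; this step is not in the original post):
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the modules now and install the userspace tools
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe "$m"; done
apt install -y ipset ipvsadm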
Initialize the cluster
kubeadm init --config=kubeadm.yaml
Output similar to the following means initialization succeeded:
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.502122 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.140.75:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0deaa9ceed7266c28c5f5241ed9efea77c798055ebcc7a27dc03f6c97323c8a0
As the output instructs, create the kubeconfig directory and copy the admin config:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the worker node
Run only on the worker hosts
kubeadm join 192.168.140.75:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:0deaa9ceed7266c28c5f5241ed9efea77c798055ebcc7a27dc03f6c97323c8a0 \
--cri-socket unix:///run/cri-dockerd.sock
Note: be sure to append --cri-socket unix:///run/cri-dockerd.sock to specify the container runtime; with both Docker's containerd socket and cri-dockerd installed, kubeadm would otherwise detect multiple CRI endpoints and refuse to proceed.
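If the bootstrap token has expired (the default TTL is 24h), print a fresh join command on the master, then append the --cri-socket flag to it yourself:
kubeadm token create --print-join-command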
Once the join completes, check the nodes:
kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 9m26s v1.30.1
node1 NotReady <none> 5m41s v1.30.1
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
etcd-0 Healthy
controller-manager Healthy ok
scheduler Healthy ok
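As the warning says, ComponentStatus is deprecated; the apiserver's readiness endpoint is the modern equivalent:
kubectl get --raw='/readyz?verbose'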
Install the Calico pod network
The worker node has joined, but its status is NotReady: a pod network still needs to be installed.
Run on one master node only
Download the manifest
The manifest can be downloaded online from: https://docs.projectcalico.org/manifests/calico.yaml
wget https://docs.projectcalico.org/manifests/calico.yaml
Edit the manifest
Search for the DaemonSet and locate its containers section.
If the hosts have more than one NIC, specify which interface Calico should use (the two added lines below):
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
  value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
# Specify the NIC here: add the two lines below
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens160"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
  value: "Never"
Change the CIDR to the pod subnet set at initialization. It corresponds to either of:
- the --pod-network-cidr=10.244.0.0/16 command-line flag
- podSubnet: 10.244.0.0/16 in kubeadm.yaml
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
#
# This entry is commented out by default; uncomment it and change 192.168.0.0/16 to 10.244.0.0/16
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
Install Calico
kubectl apply -f calico.yaml
Watch the status of all the pods with:
watch kubectl get pods --all-namespaces -o wide
Check node status
Once all the pods are running normally, the nodes should report Ready:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 73m v1.30.1
node1 Ready <none> 69m v1.30.1
Test that cluster networking and DNS work
kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
10.96.0.10 is the clusterIP of CoreDNS, which confirms CoreDNS is working. Internal Service names are resolved through CoreDNS.
Note: use the pinned busybox:1.28 image, not the latest tag; nslookup in newer busybox images fails to resolve the DNS server and addresses correctly.
This article is from cnblogs (博客园), by 厚礼蝎. When reposting, please credit the original: https://www.cnblogs.com/guangdelw/p/18222715