Deploying an Enterprise-Grade Kubernetes Cluster on Ubuntu --- K8S Cluster Deployment

1. Download the public signing key for the Kubernetes package repository

# If the /etc/apt/keyrings directory does not exist, create it before running the curl command.
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg  

Add the Kubernetes apt repository:

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

View the resulting repository configuration.
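For example, you can print the repository file written by the tee command above:

cat /etc/apt/sources.list.d/kubernetes.list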

Update the apt package index:

sudo apt-get update

2. Install the K8S cluster software

Before installing, you can check the candidate package versions:

apt-cache policy kubeadm

You can also inspect the package along with its dependency information:

apt-cache showpkg kubeadm

# List the available package versions
apt-cache madison kubeadm

Default installation (latest available version):

sudo apt-get install -y kubelet kubeadm kubectl
# Or install a specific version
sudo apt-get install -y kubelet=1.29.0-1.1 kubeadm=1.29.0-1.1 kubectl=1.29.0-1.1
# Pin the versions to prevent unintended automatic upgrades
sudo apt-mark hold kubelet kubeadm kubectl
# Unpin the versions to allow upgrades again
sudo apt-mark unhold kubelet kubeadm kubectl
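To confirm the pins took effect, apt-mark can list the held packages:

# Show which packages are currently held back from upgrades
apt-mark showhold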

3. Initialize the K8S cluster

3.1 Cluster configuration file

Check the kubeadm version:

kubeadm version

Generate the deployment configuration file:

kubeadm config print init-defaults > kubeadm-config.yaml

Edit the kubeadm-config.yaml file:

vim kubeadm-config.yaml

Make the following changes: change advertiseAddress from "1.2.3.4" to "192.168.113.131"; change name from "node" to "k8s-master01"; and under networking, add a podSubnet entry (10.244.0.0/16, as shown in the final file below).

Append the following to the end of the file:

---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd

The final file looks like this:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    token: abcdef.0123456789abcdef
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
localAPIEndpoint:
  advertiseAddress: 192.168.113.131
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.29.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
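Before initializing, the file can be sanity-checked; recent kubeadm releases (v1.26 and later, so availability in your exact version is an assumption) provide a validate subcommand:

# Optional: validate the configuration file against the kubeadm API schemas
sudo kubeadm config validate --config kubeadm-config.yaml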

 

List the images required for this version:

sudo kubeadm config images list

# Pull the images
kubeadm config images pull

If the pull fails, you can use:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
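Alternatively, pulling with the configuration file created above makes kubeadm honor the imageRepository mirror set there (--config is a standard flag of this subcommand):

# Pull images using the repository configured in kubeadm-config.yaml
sudo kubeadm config images pull --config kubeadm-config.yaml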

# List the pulled images
crictl images

3.2 Initialize the K8S cluster using the deployment configuration file

# Initialize the Kubernetes cluster
sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

Error 1: Initialization failed with an error.

Check that the apiVersion values in the configuration file are correct (for example, kubelet.config.k8s.io/v1beta1, not the typo v1betal).

Once corrected, initialization produces the normal output shown at the end of this section.

Error 2: kubeadm initialization of the K8S cluster failed with "Initial timeout of 40s passed."

Solution:

# Check the status of kubelet
systemctl status kubelet

# Pull the pause image from a domestic (China) mirror
sudo crictl pull registry.aliyuncs.com/google_containers/pause:3.9

Retag the image:

# Tag the pulled image as registry.k8s.io/pause:3.9
sudo ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9
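Alternatively (a sketch: sandbox_image is the CRI plugin's setting for the pause image in containerd's configuration), you can point containerd directly at the mirror instead of retagging:

# Point the CRI sandbox image at the domestic mirror in /etc/containerd/config.toml
sudo sed -i 's#sandbox_image = .*#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sudo systemctl restart containerd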

# After confirming the configuration is correct and the image pull succeeded, restart the kubelet service
sudo systemctl restart kubelet
# Then check the kubelet logs to make sure there are no new errors
sudo journalctl -u kubelet -f

# Check the kubelet status
systemctl status kubelet

Error 3: Port already in use

Solution:

# Clean up the existing Kubernetes configuration and state

sudo kubeadm reset -f
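After the reset completes, re-run the initialization command from above:

sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log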

Output of a successful initialization:

root@k8s-master01:~# sudo kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.29.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0829 09:34:49.113130  165932 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.113.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.113.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.113.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.008360 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4ab1cd611fef51370d980ddfb123163c3c9d2f6eacb58699725877b2a8efa085
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.113.131:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:b9bc0ddc89b05fb76d7d9192f9949145c050c005ae8a1afcb812a76e9a9f6811

3.3 Prepare the kubectl configuration file

Perform this step on the k8s-master node only.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
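To confirm kubectl can reach the new control plane, a quick sanity check using a standard kubectl subcommand:

# Verify connectivity to the API server
kubectl cluster-info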

3.4 Join the worker nodes to the cluster

Node k8s-worker01:

 kubeadm join 192.168.113.131:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b9bc0ddc89b05fb76d7d9192f9949145c050c005ae8a1afcb812a76e9a9f6811

Node k8s-worker01 failed to join with an error.

Solution:

sudo vim /etc/containerd/config.toml

In the output, locate the [plugins."io.containerd.grpc.v1.cri"] section and make sure it contains the following:

[plugins."io.containerd.grpc.v1.cri"]
  systemd_cgroup = true

According to the error message, systemd_cgroup can only be used with the io.containerd.runtime.v1.linux runtime. Therefore, check the runtime settings in the configuration file and adjust them as follows:

  • Locate the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes] section.
  • Make sure the runtime in use is io.containerd.runtime.v1.linux, for example (see the sketch after this snippet for a more common alternative):
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
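On containerd 1.6 and later, a more common alternative (a sketch, not the author's original fix; it assumes the stock config layout where runc uses the v2 runtime) is to regenerate the default configuration and enable SystemdCgroup there:

# Regenerate the default containerd configuration
containerd config default | sudo tee /etc/containerd/config.toml
# Enable the systemd cgroup driver for the runc v2 runtime
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml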

Restart the containerd service

After saving the configuration file, restart containerd:

sudo systemctl restart containerd

Retry joining the worker node to the cluster.

Node k8s-worker02:

 kubeadm join 192.168.113.131:6443 --token abcdef.0123456789abcdef \
     --discovery-token-ca-cert-hash sha256:b9bc0ddc89b05fb76d7d9192f9949145c050c005ae8a1afcb812a76e9a9f6811

Node k8s-worker02 failed to join with an error.

Solution:

sudo vim /etc/sysctl.conf

Find or add the following line:

net.ipv4.ip_forward = 1

Save the file and exit the editor, then apply the new settings:

sudo sysctl -p
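Equivalently (a non-interactive sketch using the standard /etc/sysctl.d/ drop-in mechanism; the file name k8s.conf is arbitrary):

# Persist IP forwarding via a drop-in file and reload all sysctl settings
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system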

Join the cluster again:

kubeadm join 192.168.113.131:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b9bc0ddc89b05fb76d7d9192f9949145c050c005ae8a1afcb812a76e9a9f6811

3.5 Verify that the K8S cluster nodes are available

kubectl get nodes

kubectl get pods -n kube-system
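Note that the CoreDNS pods will remain Pending until a pod network add-on is deployed, as the init output above advises. A minimal sketch using Flannel (chosen because its default pod CIDR matches the podSubnet 10.244.0.0/16 configured earlier; the manifest URL is the one Flannel publishes):

# Deploy the Flannel pod network add-on
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml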

 

 

Reference: https://huangzhongde.cn/istio/Chapter2.html
