Kubernetes Cluster Deployment: A Roundup of Issues

1. Error installing the network plugin

error unable to recognize "calico.yaml": no matches for kind "DaemonSet" in version "extensions/v1"

Cause: version mismatch — the manifest asks for kind DaemonSet from an API group/version the cluster does not serve (DaemonSet is served from apps/v1 on current clusters, and the old extensions group was removed).

Fix: use a Calico manifest that matches the cluster version.

Docs: https://projectcalico.docs.tigera.io/archive/v3.21/getting-started/kubernetes/self-managed-onprem/onpremises

Download: curl https://docs.projectcalico.org/archive/v3.21/manifests/calico.yaml -O

Install: kubectl apply -f calico.yaml

2. Cluster initialization fails

Symptom:

[root@master ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720
EOF
sysctl --system
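Sysctl drop-ins like the one above are easy to bloat with duplicated keys, which makes it unclear which value actually wins. A quick sketch to flag duplicates before loading a file, run here against a throwaway copy (point $conf at /etc/sysctl.d/k8s.conf for real use):

```shell
# Sketch: list any sysctl key that appears more than once in a drop-in.
conf=$(mktemp)
cat > "$conf" << 'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
EOF
# Normalize away spaces around '=', keep the key, report repeats.
cut -d= -f1 "$conf" | tr -d ' ' | sort | uniq -d
# → net.bridge.bridge-nf-call-iptables
```

An empty result means every key is set exactly once.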

3. Certificate signed by unknown authority

Symptom: on a VM-hosted Kubernetes cluster, this error appears after the server reboots:

[root@master ~]# kubectl get node
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Fix:

Configure your user account for kubectl access to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, as root, set the environment variable:

export KUBECONFIG=/etc/kubernetes/admin.conf

4. Unable to determine the runtime API version

Symptom 1:

[root@master ~]# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"

Symptom 2:

[root@node1 ~]# crictl ps
I1128 17:01:24.056664   24303 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/run/containerd/containerd.sock" URL="unix:///run/containerd/containerd.sock"
I1128 17:01:24.061791   24303 util_unix.go:104] "Using this endpoint is deprecated, please consider using full URL format" endpoint="/run/containerd/containerd.sock" URL="unix:///run/containerd/containerd.sock"

Fix: point crictl explicitly at the containerd socket:

crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
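The two `crictl config` calls persist the endpoints to /etc/crictl.yaml; the resulting file should look roughly like this (a sketch — your copy may carry extra keys such as `timeout` or `debug`):

```yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
```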

Verify:

[root@master ~]# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
e7878752e4ae6       b6e6ee0788f20       30 minutes ago      Running             calico-node               1                   b5934818396e3       calico-node-f6xfz
f589de8f6f6ca       58a9a0c6d96f2       30 minutes ago      Running             kube-proxy                1                   001b4ae7fedf5       kube-proxy-grx5m
2cf2f96c4d4fc       bef2cf3115095       30 minutes ago      Running             kube-scheduler            3                   7c294db1f3e0c       kube-scheduler-master
054ad592ffac9       a8a176a5d5d69       30 minutes ago      Running             etcd                      2                   a8c6650e2382e       etcd-master
a8becb1e17be7       1a54c86c03a67       30 minutes ago      Running             kube-controller-manager   2                   070b5d24d34d0       kube-controller-manager-master
065ef061923aa       4d2edfd10d3e3       30 minutes ago      Running             kube-apiserver            2                   f5c40d1e9cc76       kube-apiserver-master

5. kube-apiserver fails to start (binary deployment)

Symptom 1: unknown flag

[root@k8s-master1 k8s-work]# journalctl -xe
-- Unit kube-apiserver.service has begun starting up.
12月 15 11:19:39 k8s-master1 kube-apiserver[7982]: Error: unknown flag: --insecure-port
12月 15 11:19:39 k8s-master1 systemd[1]: kube-apiserver.service: main process exited, code=exited, status=1/FAILURE
12月 15 11:19:39 k8s-master1 systemd[1]: Failed to start Kubernetes API Server.

Fix:

The --insecure-port flag (the insecure serving port) was removed as of Kubernetes v1.24, so simply delete it from the kube-apiserver configuration file.

Symptom 2:

[root@k8s-master1 k8s-work]# journalctl -xe
-- Unit kube-apiserver.service has begun starting up.
12月 15 11:35:46 k8s-master1 kube-apiserver[9169]: Error: unknown flag: --enable-swagger-ui
12月 15 11:35:46 k8s-master1 systemd[1]: kube-apiserver.service: main process exited, code=exited, status=1/FAILURE
12月 15 11:35:46 k8s-master1 systemd[1]: Failed to start Kubernetes API Server.

Fix:

The --enable-swagger-ui flag was likewise removed as of Kubernetes v1.24; delete it from the kube-apiserver configuration file as well.
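Rather than hand-editing, the dead flags can be stripped in one pass. A minimal sketch, run against a throwaway file so you can diff before touching the real unit file (the contents below are illustrative, not your actual config):

```shell
# Sketch: delete whole lines carrying flags removed from kube-apiserver.
conf=$(mktemp)
cat > "$conf" << 'EOF'
KUBE_APISERVER_OPTS=" \
  --insecure-port=0 \
  --enable-swagger-ui=true \
  --secure-port=6443"
EOF
# Each option sits on its own continuation line, so deleting the line is safe.
sed -i -e '/--insecure-port/d' -e '/--enable-swagger-ui/d' "$conf"
cat "$conf"
```

Because every option occupies its own backslash-continued line, removing a line leaves the remaining string intact.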

With both flags removed, restart kube-apiserver; it now starts successfully and shows as running:

[root@k8s-master1 k8s-work]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 四 2022-12-15 11:37:05 CST; 15s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 9249 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─9249 /usr/local/bin/kube-apiserver --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --anonymous-aut...

12月 15 11:37:07 k8s-master1 kube-apiserver[9249]: I1215 11:37:07.835066    9249 storage_rbac.go:226] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io...nt-approver
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: I1215 11:37:20.579468    9249 trace.go:205] Trace[1975007498]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.25.5 ...
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: Trace[1975007498]: ---"About to Get from storage" 0ms (11:37:19.642)
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: Trace[1975007498]: ---"About to write a response" 936ms (11:37:20.579)
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: Trace[1975007498]: ---"Writing http response done" 0ms (11:37:20.579)
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: Trace[1975007498]: [936.380017ms] [936.380017ms] END
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: I1215 11:37:20.579595    9249 httplog.go:131] "HTTP" verb="GET" URI="/api/v1/namespaces/default" latency="937.219331ms" userAgent="kub...
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: I1215 11:37:20.600486    9249 httplog.go:131] "HTTP" verb="GET" URI="/api/v1/namespaces/default/services/kubernetes" latency="20.39133...
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: I1215 11:37:20.633958    9249 httplog.go:131] "HTTP" verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="3.13709...
12月 15 11:37:20 k8s-master1 kube-apiserver[9249]: I1215 11:37:20.640702    9249 httplog.go:131] "HTTP" verb="GET" URI="/apis/discovery.k8s.io/v1/namespaces/default/endpointslices/kuber...
Hint: Some lines were ellipsized, use -l to show in full.

6. kube-controller-manager fails to start: certificate signing duration (default 8760h0m0s)

Symptom:

kube-controller-manager[10968]: Error: unknown flag: --experimental-cluster-signing-duration

Cause:

The deprecated kube-controller-manager flag --experimental-cluster-signing-duration has been removed. Switch to --cluster-signing-duration, which has been available since v1.19.

Fix:

In the configuration file, replace --experimental-cluster-signing-duration=87600h with --cluster-signing-duration=87600h.
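The rename can also be scripted; a one-line sed sketch, shown against a throwaway file (point it at your real kube-controller-manager config):

```shell
# Sketch: rename the removed flag to its v1.19+ replacement in place.
conf=$(mktemp)
echo '--experimental-cluster-signing-duration=87600h \' > "$conf"
sed -i 's/--experimental-cluster-signing-duration/--cluster-signing-duration/' "$conf"
cat "$conf"   # → --cluster-signing-duration=87600h \
```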

7. --horizontal-pod-autoscaler-use-rest-clients

What is Horizontal Pod Autoscaling?

With Horizontal Pod Autoscaling, Kubernetes can automatically scale replication controllers, deployments, and replica sets based on observed CPU utilization (or, with alpha support, on application-provided metrics).

The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource plus a controller. The resource determines the controller's behavior; the controller periodically adjusts the replica count of a replication controller or deployment so that the observed CPU utilization matches the user-specified target.

Horizontal pod autoscaling

Horizontal pod autoscaling is the automatic scaling of the number of pod replicas managed by a controller. It is performed by the horizontal controller, which is enabled and configured by creating a HorizontalPodAutoscaler (HPA) resource. The controller periodically checks pod metrics, computes the replica count required to meet the target values configured in the HPA resource, and adjusts the replicas field on the target resource (Deployment, ReplicaSet, ReplicationController, StatefulSet, and the like).

Autoscaling proceeds in three steps:

  • Obtain metrics for all pods managed by the scaled resource object.
  • Compute the number of pods needed to bring the metric to (or close to) the configured target value.
  • Update the replicas field of the scaled resource.
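The steps above are driven entirely by an HPA resource. A minimal manifest sketch (the Deployment name `web` and the targets are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```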

How the Autoscaler's metrics collection changed

Before Kubernetes 1.6, the HPA collected metrics directly from Heapster. As of 1.8, if the controller manager is started with --horizontal-pod-autoscaler-use-rest-clients=true, the Autoscaler pulls metrics through the aggregated resource metrics API instead. That became the default behavior in 1.9.

So from 1.9 onward, the flag no longer needs to be configured!

A complete Kubernetes guide: https://www.kancloud.cn/chriswenwu/g_k8s/1006475

8. Error binding the IP address

Symptom:

kube-scheduler[12792]: Error: unknown flag: --address

Fix:

Change --address=127.0.0.1 to --bind-address=127.0.0.1, as shown below:

cat > kube-scheduler.conf << "EOF"
KUBE_SCHEDULER_OPTS=" \
--bind-address=127.0.0.1 \

9. kubelet fails to start (Kubernetes v1.25.5)

Symptom 1:

"command failed" err="failed to parse kubelet flag: unknown flag: --cni-conf-dir"

Symptom 2:

"command failed" err="failed to parse kubelet flag: unknown flag: --cni-bin-dir"

Symptom 3:

"command failed" err="failed to parse kubelet flag: unknown flag: --network-plugin-dir"

Fix:

1. Symptoms 1 and 2

  • --cni-bin-dir string <warning: alpha feature> The full path of the directory to search for CNI plugin binaries. Default: /opt/cni/bin
  • --cni-conf-dir string <warning: alpha feature> The full path of the directory to search for CNI config files. Default: /etc/cni/net.d
  • These flags have been removed from the kubelet, so simply delete them from the configuration.

2. Symptom 3

  • This flag named the network plugin invoked for various events in the kubelet/pod lifecycle.
  • It too is no longer needed; delete it from the configuration.
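To see at a glance which of the removed flags a kubelet options file still carries, a quick scan sketch (the sample file below is illustrative — substitute your real config):

```shell
# Sketch: report removed kubelet flags still present in an options file.
conf=$(mktemp)
echo 'KUBELET_OPTS="--cni-bin-dir=/opt/cni/bin --config=/etc/kubernetes/kubelet.yaml"' > "$conf"
for flag in --cni-conf-dir --cni-bin-dir --network-plugin-dir; do
  grep -o -- "$flag" "$conf" || true
done   # prints only --cni-bin-dir for this sample file
```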

10. Pod pause image missing on a containerd-based Kubernetes node

Symptom:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.8": failed to pull image "k8s.gcr.io/pause:3.8": failed to pull and unpack image "k8s.gcr.io/pause:3.8": failed to resolve reference "k8s.gcr.io/pause:3.8": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.8": dial tcp 142.251.170.82:443: i/o timeout

Fix: pull the pause image from a domestic mirror, then re-tag it as k8s.gcr.io/pause:3.8:

$ crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
$ ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 k8s.gcr.io/pause:3.8

With that, the Kubernetes pod error "failed to get sandbox image "k8s.gcr.io/pause:3.8"" is resolved.
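Re-tagging fixes the node only until the image is garbage-collected. A more durable alternative (a sketch — it assumes containerd's default config at /etc/containerd/config.toml, and is demonstrated here on a throwaway copy) is to point containerd's `sandbox_image` setting at the mirror:

```shell
# Sketch: rewrite sandbox_image in a copy of containerd's config.toml.
# For real use: conf=/etc/containerd/config.toml, then restart containerd
# with `systemctl restart containerd`.
conf=$(mktemp)
echo '    sandbox_image = "k8s.gcr.io/pause:3.8"' > "$conf"
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8"#' "$conf"
cat "$conf"
```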

posted @ 2022-11-28 17:04 西瓜君~