Background:

This error came up while installing Kubernetes on Ubuntu; the installation got as far as CoreDNS and then failed with the following error:

(screenshot omitted: the coredns Pods were stuck in CrashLoopBackOff)

Solution:

After some digging, I concluded that this is a known compatibility issue between Ubuntu (systemd-resolved) and CoreDNS in Kubernetes. Fortunately it is easy to fix; just follow the official CoreDNS documentation.

First, the official documentation: https://coredns.io/plugins/loop/#troubleshooting

The last section on that page describes exactly this problem:

Troubleshooting Loops In Kubernetes Clusters

When a CoreDNS Pod deployed in Kubernetes detects a loop, the CoreDNS Pod will start to “CrashLoopBackOff”. This is because Kubernetes will try to restart the Pod every time CoreDNS detects the loop and exits.

A common cause of forwarding loops in Kubernetes clusters is an interaction with a local DNS cache on the host node (e.g. systemd-resolved). For example, in certain configurations systemd-resolved will put the loopback address 127.0.0.53 as a nameserver into /etc/resolv.conf. Kubernetes (via kubelet) by default will pass this /etc/resolv.conf file to all Pods using the default dnsPolicy rendering them unable to make DNS lookups (this includes CoreDNS Pods). CoreDNS uses this /etc/resolv.conf as a list of upstreams to forward requests to. Since it contains a loopback address, CoreDNS ends up forwarding requests to itself.

There are many ways to work around this issue, some are listed here:

  • Add the following to your kubelet config yaml: resolvConf: <path-to-your-real-resolv-conf-file> (or via command line flag --resolv-conf deprecated in 1.10). Your “real” resolv.conf is the one that contains the actual IPs of your upstream servers, and no local/loopback address. This flag tells kubelet to pass an alternate resolv.conf to Pods. For systems using systemd-resolved, /run/systemd/resolve/resolv.conf is typically the location of the “real” resolv.conf, although this can be different depending on your distribution.
  • Disable the local DNS cache on host nodes, and restore /etc/resolv.conf to the original.
  • A quick and dirty fix is to edit your Corefile, replacing forward . /etc/resolv.conf with the IP address of your upstream DNS, for example forward . 8.8.8.8. But this only fixes the issue for CoreDNS, kubelet will continue to forward the invalid resolv.conf to all default dnsPolicy Pods, leaving them unable to resolve DNS.

In short: on Ubuntu, the /etc/resolv.conf that CoreDNS inherits is the systemd-resolved stub containing the loopback address 127.0.0.53, which causes the forwarding loop. The fix is to make CoreDNS (via kubelet) read the system's real configuration file, /run/systemd/resolve/resolv.conf, instead.
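The loop condition is easy to spot by grepping a resolv.conf for a loopback nameserver. The sketch below runs against a simulated file so it is self-contained; on a real node you would point the grep at /etc/resolv.conf and /run/systemd/resolve/resolv.conf instead:

```shell
# Self-contained sketch: the sample file below simulates the stub that
# systemd-resolved writes; on a real node, grep the actual files instead.
tmp=$(mktemp)
printf 'nameserver 127.0.0.53\noptions edns0\n' > "$tmp"

# Any 127.x nameserver means CoreDNS would forward queries back to itself.
if grep -Eq '^nameserver[[:space:]]+127\.' "$tmp"; then
    echo "loopback nameserver found: CoreDNS would forward to itself"
else
    echo "no loopback nameserver found"
fi
rm -f "$tmp"
# prints: loopback nameserver found: CoreDNS would forward to itself
```

The real /run/systemd/resolve/resolv.conf should show only actual upstream IPs and therefore not match the pattern.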

The steps are as follows. There are two main methods.

Method 1: edit kubelet's YAML configuration file

1. Set the resolvConf parameter to /run/systemd/resolve/resolv.conf

vi /etc/kubernetes/kubelet-conf.yml
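For reference, the relevant part of the kubelet config would look roughly like this (only the resolvConf line matters here; the surrounding keys are the standard KubeletConfiguration header):

```yaml
# /etc/kubernetes/kubelet-conf.yml (excerpt)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Point kubelet at the real resolv.conf maintained by systemd-resolved,
# instead of the stub /etc/resolv.conf that contains 127.0.0.53:
resolvConf: /run/systemd/resolve/resolv.conf
```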

2. Restart kubelet

systemctl daemon-reload
systemctl restart kubelet

3. Make CoreDNS reload its configuration

kubectl edit deployment coredns -n kube-system

Change replicas to 0 to stop the CoreDNS Pods that are already running.

kubectl edit deployment coredns -n kube-system

Then change replicas back to 2; the new Pods will pick up the corrected resolv.conf from kubelet.
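As an alternative to editing the Deployment twice, the same restart can be done in one-liners with kubectl scale, or with kubectl rollout restart on newer kubectl versions (these commands assume a working cluster):

```shell
# Scale CoreDNS down and back up so the Pods re-read the node's resolv.conf
kubectl scale deployment coredns -n kube-system --replicas=0
kubectl scale deployment coredns -n kube-system --replicas=2

# Or, with kubectl >= 1.15:
kubectl rollout restart deployment coredns -n kube-system
```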

4. Check that the Pods reach the Running state

kubectl get po -n kube-system
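To confirm the fix actually reached the Pods, you can also spot-check the resolv.conf that a default-dnsPolicy Pod receives from kubelet; it should no longer contain a loopback nameserver (these commands assume a working cluster, and the test Pod name is illustrative):

```shell
# All coredns Pods should be Running with no further restarts
kubectl get po -n kube-system -l k8s-app=kube-dns

# Launch a throwaway Pod and print the resolv.conf kubelet handed it
kubectl run resolv-test --rm -it --image=busybox --restart=Never -- cat /etc/resolv.conf
```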

 

Method 2: via the command-line flag --resolv-conf

1. Edit kubelet's systemd drop-in configuration (I installed Kubernetes from binaries)

vi /etc/systemd/system/kubelet.service.d/10-kubelet.conf

Append --resolv-conf=/run/systemd/resolve/resolv.conf to the KUBELET_KUBECONFIG_ARGS line. (Note that the quoted CoreDNS docs mark this flag as deprecated since Kubernetes 1.10 in favor of the resolvConf config field.)
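The drop-in might then look roughly like this; the other flags shown are illustrative (they depend on how your binary install laid out the unit), and only the --resolv-conf flag is the addition:

```ini
# /etc/systemd/system/kubelet.service.d/10-kubelet.conf (excerpt)
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --resolv-conf=/run/systemd/resolve/resolv.conf"
```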

The remaining steps are the same as in Method 1, so I won't repeat the details:

2. Restart kubelet

3. Make CoreDNS reload its configuration

4. Check that the Pods reach the Running state

 

References:

https://coredns.io/plugins/loop/#troubleshooting

https://blog.csdn.net/carry1beyond/article/details/88817462

https://blog.csdn.net/evanxuhe/article/details/90210764

posted on 2024-03-21 11:01