k8s Pitfall Collection [Network Issues]: Service Unreachable
I. Background:
The cluster has 3 nodes (1 master, 2 workers). The business service is exposed as a NodePort; when accessing it via nodeIP:port, it turned out that one node IP was unreachable while the other two worked fine.
HOST_1=192.168.86.188 --master
HOST_2=192.168.86.189
HOST_3=192.168.86.190
# curl http://192.168.86.188:30038/targets    ---no response (the problem node)
The same check on 189 and 190 succeeds: ---OK
# curl http://192.168.86.189:30038/targets    ---ok
# kubectl get svc -nprometheus
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
prometheus            ClusterIP   10.96.167.153   <none>        9090/TCP         138m
prometheus-nodeport   NodePort    10.109.216.20   <none>        9090:30038/TCP   138m    ---the service IP addresses

# kubectl get pods -nprometheus -owide
NAME           READY   STATUS    RESTARTS   AGE    IP           NODE      NOMINATED NODE
prometheus-0   2/2     Running   0          143m   10.244.2.6   host189   <none>
II. Troubleshooting

1. On host188, the problem node, check the iptables rules:

[root@host188 ~]# iptables -t nat -L KUBE-NODEPORTS
Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination
KUBE-MARK-MASQ             tcp  --  anywhere  anywhere  /* prometheus/prometheus-nodeport:http */ tcp dpt:30038
KUBE-SVC-SO4PW3GA7SPS7IHM  tcp  --  anywhere  anywhere  /* prometheus/prometheus-nodeport:http */ tcp dpt:30038

Chain KUBE-SVC-SO4PW3GA7SPS7IHM (2 references)
target     prot opt source               destination
KUBE-SEP-5GEH5OENN4FETTOQ  all  --  anywhere  anywhere

Chain KUBE-SEP-5GEH5OENN4FETTOQ (1 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  --  10.244.2.6  anywhere
DNAT            tcp  --  anywhere    anywhere   tcp to:10.244.2.6:9090
【Analysis】: A packet whose destination port is 30038 is first marked for masquerading (KUBE-MARK-MASQ, so the reply path can be SNATed later), then handed to the SVC chain, which represents the cluster-level service.
The SVC chain load-balances across SEP chains, each representing one endpoint; here there is only one pod, so the packet is DNATed to that pod's address, 10.244.2.6:9090.
Finally, the packet goes to the routing table for route lookup.
Note on "iptables -t nat -S" vs. "iptables -t nat -L":
the two differ; the former prints the complete rule specifications, so it is best to use both together.
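For reference, the whole NodePort path above can be traced by hand with -S; a small sketch (the chain names are the ones from this cluster and will differ elsewhere):

# find which SVC chain the port maps to
iptables -t nat -S KUBE-NODEPORTS | grep 30038
# list the endpoint (SEP) chains the service load-balances to
iptables -t nat -S KUBE-SVC-SO4PW3GA7SPS7IHM
# see the final DNAT target
iptables -t nat -S KUBE-SEP-5GEH5OENN4FETTOQ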
2. Tracing the routes
[root@host188 ~]# route -n    ----routing table on the problem node
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.86.1    0.0.0.0         UG    0      0        0 ens160
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1   ###route to the pods on host190, OK
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens160
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.86.0    0.0.0.0         255.255.255.0   U     0      0        0 ens160
[root@host190 ~]# route -n    ----routing table on a healthy node
Kernel IP routing table
Destination     Gateway         Genmask           Flags Metric Ref    Use Iface
0.0.0.0         192.168.86.1    0.0.0.0           UG    0      0        0 ens160
10.244.0.0      10.244.0.0      255.255.255.0     UG    0      0        0 flannel.1   ###route to the pod subnet on host188
10.244.1.0      0.0.0.0         255.255.255.0     U     0      0        0 cni0
10.244.2.0      10.244.2.0      255.255.255.0     UG    0      0        0 flannel.1   ###route to the pod subnet on host189
10.244.57.0     0.0.0.0         255.255.255.192   U     0      0        0 *
169.254.0.0     0.0.0.0         255.255.0.0       U     1002   0        0 ens160
172.17.0.0      0.0.0.0         255.255.0.0       U     0      0        0 docker0
192.168.86.0    0.0.0.0         255.255.255.0     U     0      0        0 ens160
【Analysis】: The routing table on 188 has no route for the 2.0 subnet, i.e. no route to the pods on node 189, and the target pod happens to be deployed on node 189 (see the pod listing above: prometheus-0 is at 10.244.2.6 on host189). So who is supposed to add that route?
From how k8s networking works with flannel: each node in the cluster is allocated a subnet, which is recorded in etcd; flannel learns the subnets allocated to the other nodes (not necessarily by reading etcd directly, possibly via the API server) and installs routes into the routing table so that pods on other nodes are reachable. So the problem can come from two directions: 1) the etcd data is corrupted; 2) flannel failed to add the route. (Both can be checked directly; see the sketch below.)
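Since the flannel logs in step 4 show it using the "Kubernetes Subnet Manager", the per-node leases here live on the Node objects themselves, so they can be dumped without touching etcd. A quick sketch (assuming a kubectl version that accepts backslash-escaped dots in jsonpath keys):

# print node name, pod subnet, and flannel public IP for every node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\t"}{.metadata.annotations.flannel\.alpha\.coreos\.com/public-ip}{"\n"}{end}'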
wxy's side notes:
Actually, when first inspecting the routing tables, ip route on node 188 showed no entry for that subnet at all, while route -n did show a 2.0 entry, but its Iface was *, not flannel.1 as it should be.
Deleting that route then failed with "No such process..."; it later turned out the netmask was wrong, as we will see below.
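Two commands that would have exposed the odd entry earlier; a re-check sketch (the /26 mask is what was actually found later):

# list anything in the 10.244.2.x range regardless of prefix length
ip route show | grep 10.244.2
# ask the kernel which entry it would actually use for the pod IP
ip route get 10.244.2.6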
3. Checking the data in etcd
HOST_1=192.168.86.188
HOST_2=192.168.86.189
HOST_3=192.168.86.190
ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
TLS_CERT=" --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/client.pem --key=/etc/etcd/pki/client-key.pem"
# docker exec etcd /bin/sh -c "export ETCDCTL_API=3 && etcdctl --endpoints=$ENDPOINTS $TLS_CERT get /registry/minions/host189"
/registry/minions/host189
Result (raw protobuf output; the flannel subnet lease and its corresponding public-ip, i.e. the host address, are readable inside):
"flannel.alpha.coreos.com/public-ip 192.168.86.189bB 10.244.2.0/24 ¿!
....
【Analysis】: Checked all nodes this way: the subnets and their corresponding public IPs all exist and are correct, so the problem is not with etcd. (wxy: since etcd had been restarted at some point, I had been worried all along that it was the culprit; a small relief.)
4. Problems on the flannel side
1) First, check the flannel interface's neighbor information. Note that these PERMANENT entries are programmed statically by flannel for each remote subnet, rather than learned dynamically via ARP:
[root@host188 ~]# ip neigh show dev flannel.1    ###problem node: only one neighbor found, for 190
10.244.1.0 lladdr 76:50:75:23:dd:f8 PERMANENT

[root@host189 ~]# ip neigh show dev flannel.1    ###ok node 1: two neighbors
10.244.1.0 lladdr 76:50:75:23:dd:f8 PERMANENT
10.244.0.0 lladdr 5a:d6:46:ab:ea:17 PERMANENT

[root@host188 net.d]# bridge fdb show flannel.1    ###forwarding table on the problem node
01:00:5e:00:00:01 dev ens160 self permanent
01:00:5e:00:00:01 dev docker0 self permanent
b6:88:ae:4a:ac:42 dev docker0 vlan 1 master docker0 permanent
76:50:75:23:dd:f8 dev flannel.1 dst 192.168.86.190 self permanent
------------
[root@host190 ~]# ip neigh show dev flannel.1    ###ok node 2: also two neighbors
10.244.2.0 lladdr 92:e7:b6:4e:e5:73 PERMANENT
10.244.0.0 lladdr 5a:d6:46:ab:ea:17 PERMANENT
【Analysis】: This confirms the missing-route picture once more: on 188, the neighbor entry (and the FDB entry) for the 10.244.2.0 subnet is absent for the same reason the route is absent, since flannel installs all of them together. So flannel is the thing to look at.
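For reference, the entries flannel should have installed on 188 for host189's lease are roughly equivalent to the following; a manual-repair sketch, not the fix actually used here (the MAC is host189's flannel.1 address as seen from host190 above):

ip neigh replace 10.244.2.0 lladdr 92:e7:b6:4e:e5:73 dev flannel.1 nud permanent
bridge fdb append 92:e7:b6:4e:e5:73 dev flannel.1 dst 192.168.86.189
ip route replace 10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink

Note that the route command would have failed here too, for the same "invalid argument" reason uncovered below, until the conflicting address was removed.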
2) Next, check the flannel plugin itself. Its logs report a route-add failure; the direct cause is found:
# kubectl logs -f -nkube-system kube-flannel-ds-fvj6h
I1028 01:59:01.891322       1 main.go:475] Determining IP address of default interface    ----(a question to hold onto: does it also delete routes? see step 3)
I1028 01:59:01.891599       1 main.go:488] Using interface with name ens160 and address 192.168.86.188
I1028 01:59:01.891622       1 main.go:505] Defaulting external address to interface address (192.168.86.188)
I1028 01:59:01.996901       1 kube.go:131] Waiting 10m0s for node controller to sync
I1028 01:59:01.996901       1 kube.go:294] Starting kube subnet manager
I1028 01:59:02.997083       1 kube.go:138] Node controller sync successful
I1028 01:59:02.997115       1 main.go:235] Created subnet manager: Kubernetes Subnet Manager - host188
I1028 01:59:02.997131       1 main.go:238] Installing signal handlers
I1028 01:59:02.997229       1 main.go:353] Found network config - Backend type: vxlan
I1028 01:59:02.997297       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I1028 01:59:02.997619       1 main.go:300] Wrote subnet file to /run/flannel/subnet.env
I1028 01:59:02.997630       1 main.go:304] Running backend.
I1028 01:59:02.997636       1 main.go:322] Waiting for all goroutines to exit
I1028 01:59:02.997650       1 vxlan_network.go:60] watching for new subnet leases
E1028 01:59:02.997781       1 vxlan_network.go:158] failed to add vxlanRoute (10.244.2.0/24 -> 10.244.2.0): invalid argument    ###the key line: flannel fails to add the route
3) Further, to verify flannel's effect on the routes, I modified the DaemonSet to add node affinity so that flannel can no longer be scheduled onto the problem node 188, as follows.
Change the DaemonSet's affinity so that this node no longer runs the flannel process, then watch whether the route entries get deleted:

spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: flannel
      tier: node
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: NotIn
                values:
                - host188

# kubectl logs -f -nkube-system kube-flannel-ds-fvj6h
...
I1028 01:59:02.997650       1 vxlan_network.go:60] watching for new subnet leases
E1028 01:59:02.997781       1 vxlan_network.go:158] failed to add vxlanRoute (10.244.2.0/24 -> 10.244.2.0): invalid argument
I1028 05:31:48.554667       1 main.go:337] shutdownHandler sent cancel signal...
E1028 05:31:48.554782       1 vxlan_network.go:183] DelARP failed: no such file or directory
E1028 05:31:48.554812       1 vxlan_network.go:187] DelFDB failed: no such file or directory
E1028 05:31:48.554835       1 vxlan_network.go:191] failed to delete vxlanRoute (10.244.2.0/24 -> 10.244.2.0): no such process
【Analysis】: So on shutdown flannel does try to remove the related route entries, but why does the delete fail, and with "no such process" of all things? That error (ESRCH) clearly comes from the kernel: it is what a route delete returns when no route exactly matches the request. The flannel binary itself can hardly be at fault; the kernel's current state simply refused flannel's delete. Why? Looking at the routing table again more carefully, the netmask on that entry was not the expected /24.
So which process left this route entry behind? At this point there was already a trail of evidence: some residue existed...
Combine that with blog posts mentioning that flannel's "invalid argument" error can mean a conflict with an existing interface address, add this leftover entry, and suddenly it felt like something had been overlooked.
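Before the full interface sweep in step 4), two quick probes would already point at the conflict; a sketch, run on the problem node:

# any local address squatting in the cluster pod range?
ip -4 addr show | grep 10.244.
# if 10.244.2.0 is claimed locally, this prints a "local" result instead of a flannel.1 route
ip route get 10.244.2.0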
Appendix: deleting the route
# route -n
Kernel IP routing table
Destination     Gateway         Genmask           Flags Metric Ref    Use Iface
...
10.244.2.0      0.0.0.0         255.255.255.192   U     0      0        0 *      ----here it is

Manual deletion (255.255.255.192 is /26, so a delete specifying /24 matches nothing):
# ip route del 10.244.2.0/24                 --not ok, "No such process"
# ip route del 10.244.2.0/255.255.255.192    --ok, deleted successfully
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.86.1    0.0.0.0         UG    0      0        0 ens160
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens160
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.86.0    0.0.0.0         255.255.255.0   U     0      0        0 ens160
4) A full check of the interfaces
# ifconfig -a    ###note the -a flag; it is needed to reveal this interface
[root@host188 etc]# ifconfig -a
....
tunl0: flags=128<NOARP>  mtu 1440
        inet 10.244.2.0  netmask 255.255.255.255    ###this address sits exactly in a subnet the current cluster network uses, hence the conflict
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 9552860  bytes 9661438904 (8.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10309373  bytes 2978060504 (2.7 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The other nodes also have this interface, but their addresses do not conflict with anything in use:

[root@host189 ~]# ifconfig -a
...
tunl0: flags=128<NOARP>  mtu 1440
        inet 10.244.188.0  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 11031772  bytes 8680651091 (8.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10557451  bytes 9723264443 (9.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@host190 ~]# ifconfig -a
...
tunl0: flags=128<NOARP>  mtu 1440
        inet 10.244.57.0  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 720412  bytes 48975635 (46.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 701770  bytes 5701518725 (5.3 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
【Analysis】: After confirming with ll, the IPIP Tunnel is what the calico network type uses in k8s 1.18; it does not belong to the current setup. This test environment is frequently switched back and forth with k8s 1.18 installs, which is how the residue was left behind. (Note how host190's tunl0 address 10.244.57.0 also explains the orphan 10.244.57.0/26 route with Iface * seen in its routing table earlier.)
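After switching CNIs, other calico leftovers are worth sweeping for as well; a hypothetical checklist:

ip -d link show tunl0       # -d reveals that the link is an ipip tunnel
ls /etc/cni/net.d/          # stale CNI config files from the previous install
lsmod | grep ipip           # the kernel module that keeps tunl0 alive (see part III)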
III. Fixing the Problem
[root@host188 etc]# ifconfig tunl0 down
[root@host188 etc]# ip tunnel del tunl0
delete tunnel "tunl0" failed: Operation not permitted
[root@host188 etc]# ip tunnel show
tunl0: ip/ip remote any local any ttl inherit nopmtudisc
[root@host188 etc]# lsmod
Module                  Size  Used by
...
ipip                   13465  0        ---the loaded ipip module is what keeps tunl0 from being deleted (it is the module's built-in fallback device)
[root@host188 etc]# rmmod ipip         ----unload the kernel module
[root@host188 etc]# ip tunnel show     ----now the tunnel is gone
[root@host188 etc]# route -n    ---at this point the route gets added in correctly
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.86.1    0.0.0.0         UG    0      0        0 ens160
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
10.244.1.0      10.244.1.0      255.255.255.0   UG    0      0        0 flannel.1
10.244.2.0      10.244.2.0      255.255.255.0   UG    0      0        0 flannel.1   ----there it is
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens160
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.86.0    0.0.0.0         255.255.255.0   U     0      0        0 ens160
Last of all, reload the ipip module:
# modprobe ipip
# lsmod
Module                  Size  Used by
ipip                   13465  0
# tunl0 is automatically recreated at this point, but without claiming an IP:
tunl0: flags=128<NOARP>  mtu 1480
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
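With the residue cleaned up, an end-to-end re-check (repeating the probes from part I) confirms the fix; a sketch:

curl http://192.168.86.188:30038/targets    # should now answer through the restored route
ip neigh show dev flannel.1                 # the 10.244.2.0 PERMANENT neighbor should be back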
============================================END=========================================================================