K8s Service Networking
1. Service Networking Overview
Why: pod IPs change whenever a pod restarts or is recreated, which breaks pod-to-pod access.
What: a Service decouples consumers from the application (callers inside the cluster simply use the Service domain name/IP).
How: declare a Service object.
Two variants are common:
Service for in-cluster workloads: the selector matches pods and the Endpoints object is created automatically.
Service for out-of-cluster workloads: create the Endpoints object manually, specifying the external service's IP, port, and protocol.
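For the second case, here is a sketch of a selector-less Service paired with a hand-written Endpoints object. The name `external-mysql` and the IP/port are illustrative placeholders, not values from this cluster; the Endpoints object must have the same name as the Service for them to be linked.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-mysql      # hypothetical name
  namespace: default
spec:
  ports:                    # no selector, so no Endpoints are auto-created
  - port: 3306
    targetPort: 3306
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql      # must match the Service name exactly
  namespace: default
subsets:
- addresses:
  - ip: 192.168.64.200      # placeholder IP of the external server
  ports:
  - port: 3306
    protocol: TCP
```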
1.1 The three k8s networks
node network
pod network
cluster network, also called the virtual IP network; this is where Services live
kube-proxy watches the k8s apiserver; whenever a Service resource changes, kube-proxy regenerates the corresponding load-balancing rules, so the Service always reflects the latest state.
1.2 Service types
ExternalName
ClusterIP
NodePort
LoadBalancer
1.3 Resource records
SVC_NAME.NS_NAME.DOMAIN.LTD.
DOMAIN.LTD is the cluster domain suffix, svc.cluster.local. by default.
1.4 The three Service proxy modes
userspace
iptables
ipvs (the newest)
1.4.1 The iptables model
1.4.2 The ipvs model
2. Practice
2.1 Create a ClusterIP Service from a manifest
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx
    role: logstor
  clusterIP: 10.97.97.97   # type defaults to ClusterIP; omit clusterIP to have one auto-assigned
  type: ClusterIP
  ports:
  - port: 80         # Service port
    targetPort: 80   # container port
```

```shell
master yaml]# kubectl get svc -o wide    # inspect the new Service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   28d   <none>
nginx        ClusterIP   10.97.97.97   <none>        80/TCP    13m   app=nginx,role=logstor
```
2.1.1 Resource records
SVC_NAME.NS_NAME.DOMAIN.LTD.
Default cluster suffix DOMAIN.LTD.: svc.cluster.local.
So the record for the Service created above is: nginx.default.svc.cluster.local
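The naming pattern can be checked mechanically; a minimal sketch that just assembles the FQDN from its parts (values taken from the example above):

```shell
# Assemble SVC_NAME.NS_NAME.DOMAIN.LTD for the nginx Service above
SVC_NAME=nginx
NS_NAME=default
DOMAIN=svc.cluster.local
fqdn="${SVC_NAME}.${NS_NAME}.${DOMAIN}"
echo "$fqdn"   # → nginx.default.svc.cluster.local
```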
2.2 Create a NodePort Service from a manifest
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: 10.99.99.99
  type: NodePort
  ports:
  - port: 80          # Service port
    targetPort: 80    # pod port
    nodePort: 30080   # port opened on every node; traffic is DNAT'ed to the Service port, then to the pod port
```

```shell
master yaml]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        28d
myapp        NodePort    10.99.99.99   <none>        80:30080/TCP   38m   # exposed on the node network
nginx        ClusterIP   10.97.97.97   <none>        80/TCP         76m
```
2.2.1 Accessing the Service
```shell
]# curl http://172.18.0.68:30080   # from a machine outside the cluster: node IP plus NodePort; the Service needs backing pods to answer
```
Note: a proxy server can now be placed in front of the nodes to reach the Service.
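One way to front the nodes is an nginx reverse proxy. This is a sketch only; it reuses the node IPs (192.168.64.113/114) and the NodePort 30080 that appear elsewhere in these notes, and the upstream name `myapp_nodeport` is made up:

```nginx
# nginx proxying to the NodePort on each cluster node
upstream myapp_nodeport {
    server 192.168.64.113:30080;   # node1
    server 192.168.64.114:30080;   # node2
}
server {
    listen 80;
    location / {
        proxy_pass http://myapp_nodeport;
    }
}
```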
2.3 LoadBalancer: a load balancer sits in front of the NodePorts (access diagram)
See the Huawei Cloud CCE docs: https://support.huaweicloud.com/usermanual-cce/cce_01_0014.html
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.io/elb.id: 3c7caa5a-a641-4bff-801a-feace27424b6   # ELB instance ID; replace with the real value
  name: nginx
spec:
  ports:
  - name: service0
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```
3. Label Management
3.1 Demonstrate matching pods as a backend
```shell
master yaml]# kubectl apply -f test.yaml
master yaml]# kubectl label pods nginx app=nginx
master yaml]# kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"clusterIP":"10.97.97.97","...
Selector:          app=nginx,role=logstor
Type:              ClusterIP
IP:                10.97.97.97
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>   # the pod carries only one of the two selector labels, so the Service found no backends
Session Affinity:  None
Events:            <none>
master yaml]# kubectl label pods nginx role=logstor   # add the remaining label by hand
master yaml]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   28d   <none>
nginx        ClusterIP   10.97.97.97   <none>        80/TCP    13m   app=nginx,role=logstor
master yaml]# kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          10m   app=nginx,role=logstor   # every selector label is now present
master yaml]# kubectl describe svc nginx   # the Service now has a backend
Name:              nginx
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"clusterIP":"10.97.97.97","...
Selector:          app=nginx,role=logstor   # the Service's selector labels
Type:              ClusterIP
IP:                10.97.97.97
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.84:80   # backend pod matched by the selector
Session Affinity:  None
Events:            <none>
```
Conclusion: a Service associates pods through its label selector. The selector does not need to cover every label on a pod, but a pod must carry every label in the Service's selector before it is picked up as a backend.
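The selector rule is a subset match. A small standalone sketch, using hypothetical label strings rather than a real kubectl call: a pod matches when every key=value pair in the selector also appears in the pod's labels, extra pod labels notwithstanding.

```shell
# Hypothetical selector and pod labels (comma-separated key=value pairs)
selector="app=nginx,role=logstor"
pod_labels="app=nginx,role=logstor,pod-template-hash=f476f4fcf"
match=yes
for kv in $(echo "$selector" | tr ',' ' '); do
  case ",$pod_labels," in
    *",$kv,"*) ;;        # this selector pair is present on the pod
    *) match=no ;;       # a missing pair means the pod is not selected
  esac
done
echo "$match"   # → yes
```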
3.2 Inspect the pods a Service is associated with
```shell
]# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1       <none>        443/TCP                  236d
kube-system            kube-dns                    ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   236d
kube-system            metrics-server              ClusterIP   10.105.89.199   <none>        443/TCP                  15d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.109.95.129   <none>        8000/TCP                 15d
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.98.147.122   <none>        443:30002/TCP            15d
linux40                ng-deploy-80                NodePort    10.97.239.161   <none>        81:30019/TCP             20d
]# kubectl get pods -n linux40 -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-55fb8c9d77-9qlkp   1/1     Running   9          20d   10.244.1.98   node1   <none>           <none>
nginx-deployment-55fb8c9d77-sj8bs   1/1     Running   3          15d   10.244.2.29   node2   <none>           <none>
# inspect the Endpoints, i.e. the pods behind the Service
]# kubectl get ep -n linux40
NAME           ENDPOINTS                       AGE
ng-deploy-80   10.244.1.98:80,10.244.2.29:80   20d
```
4. Pin requests from the same client to the same pod
```shell
master yaml]# kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"ClientIP"}}'
```
```shell
master yaml]# kubectl describe svc myapp   # inspect the Service details
Name:                     myapp
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"clusterIP":"10.99.99.99","...
Selector:                 app=myapp,release=canary
Type:                     NodePort
IP:                       10.99.99.99
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.244.2.84:80   # backend pod
Session Affinity:         ClientIP   # ClientIP pins a client to one pod; None means ordinary load-balanced scheduling
External Traffic Policy:  Cluster
Events:                   <none>
```
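The same affinity can be declared in the manifest instead of patching. A sketch of the relevant `spec` fragment; 10800 seconds is the Kubernetes default affinity timeout, shown here explicitly for illustration:

```yaml
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # affinity expires after 3 idle hours (the default)
```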
Headless Service: the Service name resolves directly to the backend pod IPs.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: "None"   # headless: no cluster IP is allocated
  ports:
  - port: 80
    targetPort: 80
```
```shell
master yaml]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   29d
master yaml]# dig -t A myapp-svc.default.svc.cluster.local. @10.96.0.10
......
;; ANSWER SECTION:
myapp-svc.default.svc.cluster.local. 30 IN A 10.244.1.97    # once pods are matched, the Service record resolves to the pod IPs
myapp-svc.default.svc.cluster.local. 30 IN A 10.244.2.101
myapp-svc.default.svc.cluster.local. 30 IN A 10.244.2.100
myapp-svc.default.svc.cluster.local. 30 IN A 10.244.1.96
myapp-svc.default.svc.cluster.local. 30 IN A 10.244.2.104
......
```
```shell
master yaml]# kubectl get pods -o wide --show-labels   # check the pod labels against the headless Service selector
NAME                           READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES   LABELS
myapp-deploy-f476f4fcf-jltjs   1/1     Running   3          6d1h    10.244.1.85   k8s-node1   <none>           <none>            app=myapp,pod-template-hash=f476f4fcf,releas=canary
myapp-deploy-f476f4fcf-nbqwv   1/1     Running   3          6d1h    10.244.2.86   k8s-node2   <none>           <none>            app=myapp,pod-template-hash=f476f4fcf,releas=canary
nginx                          1/1     Running   1          7h26m   10.244.2.85   k8s-node2   <none>           <none>            app=myapp,release=canary
master yaml]# kubectl get svc -o wide   # when the pod labels cover the Service selector, the Service name resolves to those pod IPs
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes  ClusterIP   10.96.0.1     <none>        443/TCP        29d     <none>
myapp       NodePort    10.99.99.99   <none>        80:30080/TCP   7h49m   app=myapp,release=canary
myapp-svc   ClusterIP   None          <none>        80/TCP         3m23s   app=myapp,release=canary   # headless: no cluster IP
nginx       ClusterIP   10.97.97.97   <none>        80/TCP         8h      app=nginx,role=logstor
```
5. Create a hostNetwork pod
5.1 Without a DNS policy
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnet
spec:
  containers:
  - name: nginx
    image: tomcat-app1:v1
  hostNetwork: true   # the pod shares the host's network namespace
```
Inspect the network:
```shell
]# ifconfig
cali0c474fa1137: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 1110071  bytes 566024261 (539.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1110071  bytes 566024261 (539.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
cali4478953ae54: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:5f:9c:3d:62  txqueuelen 0  (Ethernet)
        RX packets 1915824  bytes 33663989873 (31.3 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11088604  bytes 626077036 (597.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.64.113  netmask 255.255.255.0  broadcast 192.168.64.255
        inet6 fe80::4b4:9202:aaca:81ec  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:59:02:06  txqueuelen 1000  (Ethernet)
        RX packets 26655124  bytes 18020033636 (16.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 29230552  bytes 58639418242 (54.6 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
]# cat /etc/hosts
# Kubernetes-managed hosts file (host network).
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.64.110 master-1
192.168.64.111 master-2
192.168.64.112 master-3
192.168.64.113 node1
192.168.64.114 node2
10.103.97.2 apiserver.cluster.local
[root@node1 /]# cat /etc/resolv.conf
nameserver 223.5.5.5
nameserver 223.6.6.6
/]# ping www.baidu.com
PING www.a.shifen.com (182.61.200.6) 56(84) bytes of data.
64 bytes from 182.61.200.6 (182.61.200.6): icmp_seq=1 ttl=128 time=47.8 ms
64 bytes from 182.61.200.6 (182.61.200.6): icmp_seq=2 ttl=128 time=45.7 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 45.786/46.821/47.857/1.057 ms
```
5.2 With a DNS policy
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnet
spec:
  containers:
  - name: nginx
    image: tomcat-app1:v1
  hostNetwork: true                    # the pod shares the host's network namespace
  dnsPolicy: ClusterFirstWithHostNet   # use the in-cluster DNS even with hostNetwork
```
Check the DNS:
```shell
]# kubectl get svc -A
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  3d6h
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   3d6h   # the cluster's default DNS
```
Inspect the network:
```shell
]# cat /etc/hosts
# Kubernetes-managed hosts file (host network).
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.64.110 master-1
192.168.64.111 master-2
192.168.64.112 master-3
192.168.64.113 node1
192.168.64.114 node2
10.103.97.2 apiserver.cluster.local
[root@node2 /]# cat /etc/resolv.conf
nameserver 10.96.0.10   # the cluster's default DNS is now in use
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
[root@node2 /]# ifconfig
cali56c64d85ccf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
calidb7d83c1f13: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:28:df:af:fc  txqueuelen 0  (Ethernet)
        RX packets 645516  bytes 2516757196 (2.3 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 621504  bytes 45130104 (43.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```