A Bit of Basic K8S Every Day -- Services in K8S

 
The layer-4 proxy: service
 
# Purpose: an abstract way to expose an application running on a set of pods as a network service, giving the group a single DNS name and load-balancing traffic across the pods.
 
# Background
     In a normal K8S cluster deployment, a service running in a pod can be reached inside the cluster via the pod IP:
[root@master-worker-node-1 service]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
test   1/1     Running   0          20s   10.244.31.42   only-worker-node-3   <none>           <none>
[root@master-worker-node-1 service]# curl 10.244.31.42
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.31.42/32
 
    This works because the cluster was deployed with calico networking in IPIP mode: etcd records the pod subnet of every node, and the host's tunl0 interface tunnels the traffic straight to the pod.
[root@master-worker-node-1 service]# ip add show tunl0
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.123.128/32 scope global tunl0
       valid_lft forever preferred_lft forever
[root@master-worker-node-1 service]# tcpdump -i tunl0 -p icmp -nnvvv
dropped privs to tcpdump
tcpdump: listening on tunl0, link-type RAW (Raw IP), capture size 262144 bytes
21:30:57.712908 IP (tos 0x0, ttl 64, id 8740, offset 0, flags [DF], proto ICMP (1), length 84)
    10.244.123.128 > 10.244.31.42: ICMP echo request, id 5, seq 117, length 64
21:30:57.714115 IP (tos 0x0, ttl 63, id 19327, offset 0, flags [none], proto ICMP (1), length 84)
    10.244.31.42 > 10.244.123.128: ICMP echo reply, id 5, seq 117, length 64
 
     However, we normally do not run business workloads in standalone pods like this: if the host shuts down or the pod is deleted by mistake, the application becomes unreachable. Even with a controller maintaining the pod count, deleting and recreating a pod changes its IP address, which again breaks access.
     So exposing an application on pods as a network service has to solve two problems: 1. the application must stay reachable even when pod addresses change; 2. when multiple pods serve the same application, traffic must be load-balanced at layer 4.
 
# The three kinds of IP in a K8S cluster
# 1. Node IP: the address on the cluster node's physical network
[root@master-worker-node-1 service]# ip add show ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:53:50:5f brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.89/24 brd 192.168.122.255 scope global noprefixroute ens3
       valid_lft forever preferred_lft forever
    inet 192.168.122.253/24 scope global secondary ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::df8:5c2e:6181:c92b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
 
# 2. Pod IP: the address of a pod
[root@master-worker-node-1 service]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
test   1/1     Running   0          17m   10.244.31.42   only-worker-node-3   <none>           <none>
 
# 3. Cluster IP (also called the service IP): exists only in iptables or ipvs rules
[root@master-worker-node-1 service]# kubectl get service -o wide
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE   SELECTOR
gray-release-service   ClusterIP   10.105.60.46   <none>        30080/TCP   30h   version=gray-release
kubernetes             ClusterIP   10.96.0.1      <none>        443/TCP     16d   <none>
 
Service type: NodePort and ClusterIP (the difference is out-of-cluster vs. in-cluster access; the YAML syntax barely differs)
 
# Basic service syntax
[root@master-worker-node-1 service]# cat first-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    func: test-service
  type: NodePort     # ClusterIP、ExternalName、NodePort、LoadBalancer
  ports:
  - name: test
    port: 31111      # port that will be exposed by this service.
    protocol: TCP    # TCP UDP SCTP
    nodePort: 32333  # The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system.
    targetPort: 80   # Number or name of the port to access on the pods targeted by the service.
 
# Create pods via a deployment
[root@master-worker-node-1 service]# kubectl get deployment -o wide
NAME           READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                     SELECTOR
nginx-deploy   3/3     3            3           5m24s   nginx-test   nginx:stable-alpine-perl   func=test-service
 
[root@master-worker-node-1 service]# kubectl get pods --show-labels -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP             NODE                 NOMINATED NODE   READINESS GATES   LABELS
nginx-deploy-76cc5c5b-kjwxn   1/1     Running   0          3m51s   10.244.31.44   only-worker-node-3   <none>           <none>            func=test-service,pod-template-hash=76cc5c5b
nginx-deploy-76cc5c5b-q6dt7   1/1     Running   0          3m51s   10.244.31.46   only-worker-node-3   <none>           <none>            func=test-service,pod-template-hash=76cc5c5b
nginx-deploy-76cc5c5b-rzj96   1/1     Running   0          3m51s   10.244.54.50   only-worker-node-4   <none>           <none>            func=test-service,pod-template-hash=76cc5c5b
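The deployment manifest itself is not shown above; a sketch that would reproduce it, with the name, image, replica count, and the func=test-service label taken from the kubectl output, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      func: test-service          # must match the service's selector
  template:
    metadata:
      labels:
        func: test-service
    spec:
      containers:
      - name: nginx-test
        image: nginx:stable-alpine-perl
        ports:
        - containerPort: 80       # assumed: nginx listens on 80, matching targetPort
```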
 
 
# Create the service
[root@master-worker-node-1 service]# kubectl get service test-service -o wide
NAME           TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE     SELECTOR
test-service   NodePort   10.105.161.8   <none>        31111:31123/TCP   2m54s   func=test-service
 
# Creating a service also creates an endpoints resource with the same name
[root@master-worker-node-1 service]# kubectl get endpoints  test-service
NAME           ENDPOINTS                                         AGE
test-service   10.244.31.44:80,10.244.31.46:80,10.244.54.50:80   3h26m
 
# The service and the pods are associated automatically
[root@master-worker-node-1 service]# kubectl describe service test-service
Name:                     test-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 func=test-service
Type:                     NodePort                # the app in the pods is reachable via the physical network of every node
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.161.8            # the service's address
IPs:                      10.105.161.8
Port:                     test  31111/TCP        # the service's own port, mapped to port 80 inside the pods
TargetPort:               80/TCP                # the port exposed inside the pods
NodePort:                 test  31123/TCP        # binds the service to port 31123 on every physical node in the cluster
Endpoints:                10.244.31.44:80,10.244.31.46:80,10.244.54.50:80        # the pod IPs matched by the label selector
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
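Session Affinity is None above, so successive requests round-robin across the pods (as the curl loop below shows). If each client should instead stick to one pod, a hedged sketch of the relevant spec fields (the timeout value here is the Kubernetes default) would be:

```yaml
spec:
  sessionAffinity: ClientIP      # default is None (no stickiness)
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # affinity expires after 3 hours of inactivity
```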
 
 
# Accessing the service IP plus port reaches the resources inside the pods
[root@master-worker-node-1 service]# curl  10.105.161.8:31111
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.54.50/32
[root@master-worker-node-1 service]# curl  10.105.161.8:31111
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.31.46/32
[root@master-worker-node-1 service]# curl  10.105.161.8:31111
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.31.44/32
 
# On any node in the cluster, port 31123 on the node's physical network also reaches the pods
[root@master-worker-node-2 ~]# ip add show ens3 |  grep 192
    inet 192.168.122.106/24 brd 192.168.122.255 scope global noprefixroute ens3
[root@master-worker-node-2 ~]# curl 192.168.122.106:31123
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.54.50/32
 
 
[root@master-worker-node-1 service]# ip add show ens3 |  grep 192
    inet 192.168.122.89/24 brd 192.168.122.255 scope global noprefixroute ens3
    inet 192.168.122.253/24 scope global secondary ens3
[root@master-worker-node-1 service]# curl 192.168.122.89:31123
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.31.46/32
 
# In the setup above, this access works because of the ipvs rules
 
[root@master-worker-node-1 manifests]# kubectl get configmaps -n kube-system -o yaml |  grep -E "ipvs|Proxy"
      ipvs:
      kind: KubeProxyConfiguration
      mode: ipvs
 
# Every node carries the ipvs rules
[root@master-worker-node-1 manifests]# ipvsadm -Ln |  grep 31123 -A 3
TCP  172.17.0.1:31123 rr
  -> 10.244.31.44:80              Masq    1      0          0         # rr means round-robin
  -> 10.244.31.46:80              Masq    1      0          0         
  -> 10.244.54.50:80              Masq    1      0          0         
TCP  192.168.122.89:31123 rr
  -> 10.244.31.44:80              Masq    1      0          0         
  -> 10.244.31.46:80              Masq    1      0          0         
  -> 10.244.54.50:80              Masq    1      0          0         
TCP  192.168.122.253:31123 rr
  -> 10.244.31.44:80              Masq    1      0          0         
  -> 10.244.31.46:80              Masq    1      0          0         
  -> 10.244.54.50:80              Masq    1      0          0         
--
TCP  10.244.123.128:31123 rr
  -> 10.244.31.44:80              Masq    1      0          0         
  -> 10.244.31.46:80              Masq    1      0          0         
  -> 10.244.54.50:80              Masq    1      0          0
 
Service type: ExternalName
 
# Background: by default, a service in the same namespace can be reached by its service-name alone, while a pod that wants to reach a service in another namespace must use the full domain name.
 
# Full domain name: service-name.namespace-name.svc.cluster.local
 
[root@master-worker-node-1 ~]# kubectl get svc -o wide -A |  grep -E "NAME|service"
NAMESPACE     NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       service-default   ClusterIP   10.109.121.131   <none>        30000/TCP                7h36m   func=test-service
ns-1          service-ns-1      ClusterIP   10.103.47.187    <none>        30000/TCP                7h25m   func=test-service
ns-2          service-ns-2      ClusterIP   10.103.149.111   <none>        30000/TCP                7h24m   func=test-service
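The manifests for these ClusterIP services are not shown; a sketch of one of them, reconstructed from the listing above (the targetPort of 80 is an assumption based on the nginx pods used earlier), could be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-ns-1
  namespace: ns-1
spec:
  type: ClusterIP          # the default; omitting `type` gives the same result
  selector:
    func: test-service
  ports:
  - port: 30000            # the service port shown in the listing
    protocol: TCP
    targetPort: 80         # assumed: the nginx pods listen on 80
```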
 
[root@master-worker-node-1 ~]# kubectl exec -it -n ns-1 centos -- bash
[root@centos /]# cat /etc/resolv.conf
search ns-1.svc.cluster.local svc.cluster.local cluster.local    # search domains
nameserver 10.96.0.10
options ndots:5
 
[root@centos /]# curl service-ns-1:30000
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.31.55/32
[root@centos /]# curl service-ns-2:30000
curl: (6) Could not resolve host: service-ns-2
 
[root@centos /]# curl service-ns-2.ns-2.svc.cluster.local:30000
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.54.57/32
 
[root@centos /]# host service-ns-1
service-ns-1.ns-1.svc.cluster.local has address 10.103.47.187
[root@centos /]# host service-ns-2
Host service-ns-2 not found: 3(NXDOMAIN)
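The naming rule demonstrated above can be sketched as a tiny shell helper (the function name is hypothetical, just to make the rule concrete):

```shell
#!/bin/sh
# svc_fqdn: build the in-cluster DNS name for a service:
#   <service-name>.<namespace-name>.svc.cluster.local
svc_fqdn() {
  svc="$1"
  ns="$2"
  echo "${svc}.${ns}.svc.cluster.local"
}

svc_fqdn service-ns-2 ns-2   # -> service-ns-2.ns-2.svc.cluster.local
```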
 
 
[root@centos /]# curl 10.109.121.131:30000
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.31.52/32
 
# The examples above show that although the short name does not resolve, the service is reachable via the full domain name or the cluster IP. So why do we still need ExternalName?
# As I understand it: 1. the service IP can change, for example when the cluster is rebuilt, and an IP is harder to remember than a domain name; 2. the full service-name plus namespace-name domain is clumsy to type, and an ExternalName acts like an alias, which makes maintenance easier.
 
# In the example above, for a pod in ns-1 to reach the service in ns-2, we can configure the full domain name of the ns-2 service as a kind of symlink: an ExternalName service.
 
# The externalName manifest
[root@master-worker-node-1 service]# cat external-name.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-external-name
  namespace: ns-1
spec:
  type: ExternalName
  externalName: service-ns-2.ns-2.svc.cluster.local
 
 
[root@master-worker-node-1 service]# kubectl apply -f external-name.yaml
service/test-external-name created
 
 
[root@master-worker-node-1 service]# kubectl get service -o wide -A  |  grep service
default       service-default      ClusterIP      10.109.121.131   <none>                                30000/TCP                8h      func=test-service
ns-1          service-ns-1         ClusterIP      10.103.47.187    <none>                                30000/TCP                7h49m   func=test-service
ns-1          test-external-name   ExternalName   <none>           service-ns-2.ns-2.svc.cluster.local   <none>                   6m10s   <none>
ns-2          service-ns-2         ClusterIP      10.103.149.111   <none>                                30000/TCP                7h49m   func=test-service
 
 
[root@master-worker-node-1 service]# kubectl exec -it centos -n ns-1 -- bash
[root@centos /]# host test-external-name
test-external-name.ns-1.svc.cluster.local is an alias for service-ns-2.ns-2.svc.cluster.local.
service-ns-2.ns-2.svc.cluster.local has address 10.103.149.111 
 
[root@centos /]# ping test-external-name -c 2
PING service-ns-2.ns-2.svc.cluster.local (10.103.149.111) 56(84) bytes of data.
64 bytes from service-ns-2.ns-2.svc.cluster.local (10.103.149.111): icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from service-ns-2.ns-2.svc.cluster.local (10.103.149.111): icmp_seq=2 ttl=64 time=0.201 ms
 
 
--- service-ns-2.ns-2.svc.cluster.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.129/0.165/0.201/0.036 ms
 
[root@centos /]# curl test-external-name:30000
the version is 1.1.0, this is a nginx server in container, and the ip is 10.244.31.56/32
 
# It feels just like accessing a resource in your own namespace.
 
 
Summary
 
 
1. There are three kinds of IP in k8s: node IP, pod IP, and cluster IP;
 
2. There are four service types:
    ClusterIP: reachable only inside the cluster; means a service will only be accessible inside the cluster, via the cluster IP.
    NodePort: reachable from outside the cluster as well as inside; means a service will be exposed on one port of every node, in addition to 'ClusterIP' type.
    ExternalName: means a service consists of only a reference to an external name that kubedns or equivalent will return as a CNAME record, with no exposing or proxying of any pods involved.
    LoadBalancer: means a service will be exposed via an external load balancer (if the cloud provider supports it), in addition to 'NodePort' type.
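LoadBalancer is listed here but not demonstrated above, since it needs cloud-provider support that this bare-metal cluster lacks; a minimal sketch (the service name is hypothetical) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-loadbalancer   # hypothetical name
spec:
  type: LoadBalancer        # the provider provisions an external LB and fills in EXTERNAL-IP
  selector:
    func: test-service
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
```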
 
3. Namespaces in k8s provide no network isolation; different namespaces are merely different projects, not kernel-level namespaces. Pods in different namespaces can therefore reach each other by default, and a pod can access resources in other namespaces, but only via an IP address or a full domain name. For convenience, an ExternalName service can be created as an alias to enable cross-namespace access.
posted @ woshinidaye