Route Sharding in OpenShift 4.3
If we need a dedicated group of routers to serve routes that are exposed only to internal applications, we can use route sharding. Route sharding was enhanced in OpenShift 4.3 and now supports sharding by namespace as well as by Route labels.
Let's walk through this in practice.
1. Create an Internal Router Group
First, label the nodes into groups, for example infra and infra1:
[root@clientvm 0 ~]# oc get nodes
NAME                                         STATUS   ROLES           AGE   VERSION
ip-10-0-138-140.us-east-2.compute.internal   Ready    master          14d   v1.16.2
ip-10-0-141-38.us-east-2.compute.internal    Ready    infra,worker    14d   v1.16.2
ip-10-0-144-175.us-east-2.compute.internal   Ready    master          14d   v1.16.2
ip-10-0-152-254.us-east-2.compute.internal   Ready    infra1,worker   14d   v1.16.2
ip-10-0-165-83.us-east-2.compute.internal    Ready    infra,worker    14d   v1.16.2
ip-10-0-172-187.us-east-2.compute.internal   Ready    master          14d   v1.16.2
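If a node does not yet carry the infra1 role, it can be labeled with oc label (a minimal sketch; the node name comes from the output above, and the empty node-role.kubernetes.io/infra1 label is what the nodeSelector used later matches on):

oc label node ip-10-0-152-254.us-east-2.compute.internal node-role.kubernetes.io/infra1=""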
After an OpenShift 4 installation there is a default Ingress Controller; this default router can be inspected with the following command.
[root@clientvm 0 ~]# oc get ingresscontroller -n openshift-ingress-operator default -o yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: "2020-02-17T14:05:36Z"
  finalizers:
  - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
  generation: 2
  name: default
  namespace: openshift-ingress-operator
  resourceVersion: "286852"
  selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
  uid: 91cb30a9-518e-11ea-9402-02390bbc2fc6
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
  replicas: 2
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2020-02-17T14:05:36Z"
    reason: Valid
    status: "True"
    type: Admitted
  - lastTransitionTime: "2020-02-18T03:20:38Z"
    status: "True"
    type: Available
  - lastTransitionTime: "2020-02-17T14:05:40Z"
    message: The endpoint publishing strategy supports a managed load balancer
    reason: WantedByEndpointPublishingStrategy
    status: "True"
    type: LoadBalancerManaged
  - lastTransitionTime: "2020-02-17T14:05:43Z"
    message: The LoadBalancer service is provisioned
    reason: LoadBalancerProvisioned
    status: "True"
    type: LoadBalancerReady
  - lastTransitionTime: "2020-02-17T14:05:40Z"
    message: DNS management is supported and zones are specified in the cluster DNS config.
    reason: Normal
    status: "True"
    type: DNSManaged
  - lastTransitionTime: "2020-02-17T14:05:47Z"
    message: The record is provisioned in all reported zones.
    reason: NoFailedZones
    status: "True"
    type: DNSReady
  - lastTransitionTime: "2020-02-18T03:20:38Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2020-02-18T03:20:38Z"
    message: The deployment has Available status condition set to True
    reason: DeploymentAvailable
    status: "False"
    type: DeploymentDegraded
  domain: apps.cluster-6277.sandbox140.opentlc.com
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
    type: LoadBalancerService
  observedGeneration: 2
  selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
  tlsProfile:
    ciphers:
    - TLS_AES_128_GCM_SHA256
    - TLS_AES_256_GCM_SHA384
    - TLS_CHACHA20_POLY1305_SHA256
    - ECDHE-ECDSA-AES128-GCM-SHA256
    - ECDHE-RSA-AES128-GCM-SHA256
    - ECDHE-ECDSA-AES256-GCM-SHA384
    - ECDHE-RSA-AES256-GCM-SHA384
    - ECDHE-ECDSA-CHACHA20-POLY1305
    - ECDHE-RSA-CHACHA20-POLY1305
    - DHE-RSA-AES128-GCM-SHA256
    - DHE-RSA-AES256-GCM-SHA384
    minTLSVersion: VersionTLS12
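To quickly compare the default router's domain against the shard created next, the status.domain field can be read directly (a small convenience sketch using jsonpath):

oc get ingresscontroller default -n openshift-ingress-operator \
  -o jsonpath='{.status.domain}{"\n"}'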
Let's first create a group of internal routers:
[root@clientvm 0 ~]# cat router-internal.yaml
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: internal
    namespace: openshift-ingress-operator
  spec:
    replicas: 1
    domain: internalapps.cluster-6277.sandbox140.opentlc.com
    endpointPublishingStrategy:
      type: LoadBalancerService
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra1: ""
    routeSelector:
      matchLabels:
        type: internal
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
oc create -f router-internal.yaml
After creation, verify:
[root@clientvm 0 ~]# oc get ingresscontroller -n openshift-ingress-operator
NAME       AGE
default    14d
internal   23m
[root@clientvm 0 ~]# oc get svc -n openshift-ingress
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
router-default             LoadBalancer   172.30.147.15    a92dd1252518e11ea940202390bbc2fc-1093196650.us-east-2.elb.amazonaws.com   80:31681/TCP,443:31998/TCP   14d
router-internal            LoadBalancer   172.30.234.210   af3b1fc6df9f44e69b656426ba1497dc-1902918297.us-east-2.elb.amazonaws.com   80:31499/TCP,443:32125/TCP   23m
router-internal-default    ClusterIP      172.30.205.36    <none>                                                                     80/TCP,443/TCP,1936/TCP      14d
router-internal-internal   ClusterIP      172.30.187.205   <none>                                                                     80/TCP,443/TCP,1936/TCP      23m
Note that we are building this in an AWS public cloud environment, which is why the router is exposed through a LoadBalancerService. If we were building this in our own on-premises cloud environment, the endpointPublishingStrategy section shown above should not be needed.
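For reference, a minimal sketch of what the spec could look like on premises, assuming the routers publish directly on the shard nodes via the HostNetwork strategy (one of the endpointPublishingStrategy types the IngressController API supports):

spec:
  domain: internalapps.cluster-6277.sandbox140.opentlc.com
  endpointPublishingStrategy:
    type: HostNetwork
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra1: ""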
Check the router pods; note that the internal router pod has landed on the infra1 node:
[root@clientvm 0 ~]# oc get pod -n openshift-ingress -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
router-default-6784d69459-db5rt    1/1     Running   0          14d   10.129.2.15   ip-10-0-141-38.us-east-2.compute.internal    <none>           <none>
router-default-6784d69459-xrtgc    1/1     Running   0          14d   10.131.0.4    ip-10-0-165-83.us-east-2.compute.internal    <none>           <none>
router-internal-6c896bb666-mckr4   1/1     Running   0          26m   10.128.2.82   ip-10-0-152-254.us-east-2.compute.internal   <none>           <none>
2. Modify the Application Route
Note the host URL and the type: internal label in the route below:
[root@clientvm 0 ~]# oc get route tomcat -oyaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: "2020-03-03T08:08:18Z"
  labels:
    app: tomcat
    app.kubernetes.io/component: tomcat
    app.kubernetes.io/instance: tomcat
    app.kubernetes.io/name: ""
    app.kubernetes.io/part-of: tomcat-app
    app.openshift.io/runtime: ""
    type: internal
  name: tomcat
  namespace: myproject
  resourceVersion: "5811320"
  selfLink: /apis/route.openshift.io/v1/namespaces/myproject/routes/tomcat
  uid: a94f136b-3292-4d8d-981c-923bf5d8a3a0
spec:
  host: tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com
  port:
    targetPort: 8080-tcp
  to:
    kind: Service
    name: tomcat
    weight: 100
  wildcardPolicy: None
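The type: internal label seen above can be put on an existing route with a single command (a sketch of that step; the route in this output already carries the label):

oc label route tomcat -n myproject type=internal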
The describe output shows that the route is exposed on both the default and the internal router.
[root@clientvm 0 ~]# oc describe route tomcat
Name:                   tomcat
Namespace:              myproject
Created:                22 minutes ago
Labels:                 app=tomcat
                        app.kubernetes.io/component=tomcat
                        app.kubernetes.io/instance=tomcat
                        app.kubernetes.io/name=
                        app.kubernetes.io/part-of=tomcat-app
                        app.openshift.io/runtime=
                        type=internal
Annotations:            openshift.io/host.generated=true
Requested Host:         tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com
                          exposed on router default (host apps.cluster-6277.sandbox140.opentlc.com) 22 minutes ago
                          exposed on router internal (host internalapps.cluster-6277.sandbox140.opentlc.com) 20 minutes ago
Path:                   <none>
TLS Termination:        <none>
Insecure Policy:        <none>
Endpoint Port:          8080-tcp

Service:        tomcat
Weight:         100 (100%)
Endpoints:      10.128.2.76:8080
The route is also exposed on the default router because the default IngressController does not set a routeSelector. To expose the route only on the internal router, the default IngressController must be modified to carry a routeSelector label of its own. The side effect is that every route created from then on needs an explicit label in order to choose which router group it is admitted by.
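As a sketch of that change (the type: external label value here is only an illustration and is not configured anywhere else in this walkthrough), the default IngressController could be patched like this:

oc patch ingresscontroller default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"routeSelector":{"matchLabels":{"type":"external"}}}}'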
In the public cloud environment, we can access http://tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com/ directly.
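A quick way to confirm the route answers through the internal shard (a minimal check; any HTTP client would do):

curl -s -o /dev/null -w '%{http_code}\n' \
  http://tomcat-myproject.internalapps.cluster-6277.sandbox140.opentlc.com/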
3. Route Sharding by Namespace
The above shards routes by Route label. To shard by namespace instead, we modify the router-internal.yaml file again:
[root@clientvm 0 ~]# cat router-internal.yaml
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: internal
    namespace: openshift-ingress-operator
  spec:
    replicas: 1
    domain: internalapps.cluster-6277.sandbox140.opentlc.com
    endpointPublishingStrategy:
      type: LoadBalancerService
    namespaceSelector:
      matchLabels:
        environment: app
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra1: ""
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Then label the project:
oc label ns myproject environment=app
Modify the tomcat route to remove the type: internal label, then test again.
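A sketch of that step, using the trailing-dash form that oc label uses to remove a label; the curl check above can then be repeated:

oc label route tomcat -n myproject type-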
Note that if a router shard defines both a namespaceSelector and a routeSelector, both conditions must be met: a route is admitted only when its namespace matches the namespaceSelector and the route itself matches the routeSelector.
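For example, a sketch combining both selectors with the label values from this walkthrough; such a shard would only admit routes labeled type: internal that live in namespaces labeled environment: app:

spec:
  namespaceSelector:
    matchLabels:
      environment: app
  routeSelector:
    matchLabels:
      type: internal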