Kubernetes Components You Have to Understand (Part 5)
1. Controllers
Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
We have already covered Pods. But once a single Pod needs to become several, how do you make sure that number of Pods stays running? Keeping the Pods alive becomes critical, and as the first article in this series explained, this is exactly what Controllers are for. The official page is linked above and is worth reading. We already saw how to create a single Pod with the yaml below. If I want to run several copies of it, I obviously should not duplicate the file under different names, so the first thing to look for is a way to create and maintain all of them in one go.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
(1) The optimized manifest looks like this; save it as nginx_replication.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
- kind: the type of object to create
- spec.selector: the label of the Pods to manage; here, every Pod carrying the label app: nginx will be managed by this RC
- spec.replicas: the number of Pod replicas this RC must keep running
- spec.template: the template used to create the Pods, i.e. the Pod name, its labels and the containers it runs
- Changing the image version in the RC's Pod template is how you upgrade the Pods (see the sketch after this list)
- kubectl apply -f nginx_replication.yaml: Kubernetes creates 3 Pods across the available Nodes; each Pod carries the label app: nginx and runs one nginx container
- If a Pod fails, the Controller Manager notices and creates a new Pod according to the RC definition
- Scaling out/in: kubectl scale rc nginx --replicas=5
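As a rough sketch of that upgrade path on older clusters that still manage Pods with an RC (the rolling-update subcommand has since been removed from kubectl, so treat this as illustrative only; the Deployment object described later is the recommended way to do rolling upgrades):

kubectl rolling-update nginx --image=nginx:1.9.1   # old kubectl only; replaces the RC's Pods one by one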
(2) Create the Pods from nginx_replication.yaml
kubectl apply -f nginx_replication.yaml
(3) Check the Pods
kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP              NODE
nginx-hksg8   1/1     Running   0          44s   192.168.0.107   w2
nginx-q7bw5   1/1     Running   0          44s   192.168.0.106   w1
nginx-zzwzl   1/1     Running   0          44s   192.168.0.108   w1
(4) Check the RC and its replica count
kubectl get rc
(5) Try deleting one Pod (you will see that as soon as one is deleted, the system creates a replacement)
kubectl delete pods nginx-zzwzl
kubectl get pods
(6) Scale the Pods out and in
kubectl scale rc nginx --replicas=5
kubectl get pods
NAME          READY   STATUS              RESTARTS   AGE
nginx-8fctt   0/1     ContainerCreating   0          2s
nginx-9pgwk   0/1     ContainerCreating   0          2s
nginx-hksg8   1/1     Running             0          6m50s
nginx-q7bw5   1/1     Running             0          6m50s
nginx-wzqkf   1/1     Running             0          99s
(7) Delete the Pods (deleting via the yaml removes everything it created)
kubectl delete -f nginx_replication.yaml
2. ReplicaSet (RS)
Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
The previous section showed one way to manage Pods; here is the second. If you followed the first article in this series, this one is easy. In practice we rarely use a ReplicaSet on its own: it is mainly used by the higher-level Deployment object, and together they form a complete orchestration mechanism for creating, deleting and updating Pods. When you use a Deployment you do not need to care how it creates and maintains its ReplicaSet; it all happens automatically, and you do not have to worry about incompatibilities (for example, ReplicaSet does not support rolling-update while Deployment does). In Kubernetes v1.2 the RC evolved into a new concept, the ReplicaSet, officially described as the "next-generation RC". There is no fundamental difference between the two, and almost every kubectl command that works on an RC also works on an RS. The only real difference is that an RS supports set-based label selectors while an RC supports only equality-based label selectors, which makes the ReplicaSet more powerful.
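To make the selector difference concrete, here is a minimal ReplicaSet sketch (the name and the second label value are made up for illustration) that uses a set-based matchExpressions selector, something an RC cannot express:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs            # illustrative name
spec:
  replicas: 3
  selector:
    matchExpressions:       # set-based selector: matches app in (nginx, nginx-web)
    - key: app
      operator: In
      values:
      - nginx
      - nginx-web
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80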
3. Deployment
Official docs: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
The biggest upgrade a Deployment brings over an RC is that we can check the progress of a Pod "deployment" at any time. Next we create a Deployment object, let it generate the corresponding ReplicaSet and Pod replicas, and then check the Deployment's status to see whether the rollout has finished (i.e. whether the number of Pod replicas has reached the desired value).
(1) Create the nginx_deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
(2) Create the Pods from nginx_deployment.yaml
kubectl apply -f nginx_deployment.yaml
(3) Check the Pods
kubectl get pods -o wide
kubectl get deployment
kubectl get rs
kubectl get deployment -o wide
(4) Check the current nginx version
kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES        SELECTOR
nginx-deployment   3/3     3            3           3m27s   nginx        nginx:1.7.9   app=nginx
(5) Update the nginx image version
kubectl set image deployment nginx-deployment nginx=nginx:1.9.1
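Since knowing the rollout progress is the whole point of a Deployment, these standard kubectl commands (a quick sketch; the deployment name matches the manifest above) let you watch, review and undo the update:

kubectl rollout status deployment nginx-deployment    # watch the rollout until it completes
kubectl rollout history deployment nginx-deployment   # list previous revisions
kubectl rollout undo deployment nginx-deployment      # roll back to the previous revision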
4. Labels and Selectors
Official docs: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
The yaml files above contain a lot of labels. As the name suggests, a label is simply a tag attached to a resource. An example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
This says the Pod named nginx-pod has one label whose key is app and whose value is nginx. Pods that share the same label can be handed to a selector to manage, as in the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:             # matches the Pods that carry the same label
    matchLabels:
      app: nginx
  template:             # the Pod template
    metadata:
      labels:
        app: nginx      # the label of the Pods created from this template: key app, value nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
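Labels are also what you filter on from the command line. Two illustrative queries against the objects above, the first equality-based and the second set-based (the tomcat value is made up just to show the syntax):

kubectl get pods -l app=nginx                  # equality-based selector
kubectl get pods -l 'app in (nginx,tomcat)'    # set-based selector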
5. Namespace
Look at the current namespaces:
kubectl get namespaces    # or the short form: kubectl get ns
NAME              STATUS   AGE
default           Active   27m   # the default namespace
kube-node-lease   Active   27m
kube-public       Active   27m
kube-system       Active   27m
Look at the Pods in the system namespace:
kubectl get pods -n kube-system
Put simply, namespaces exist to isolate resources from each other: Pods, Services, Deployments and so on. You can target a namespace on the command line with the `-n` flag; if you don't, the default namespace, default, is used.
(1) Create the namespace manifest myns-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: myns
(2) Create it:
kubectl apply -f myns-namespace.yaml
(3) Check:
kubectl get namespaces    # or the short form: kubectl get ns
NAME              STATUS   AGE
default           Active   38m
kube-node-lease   Active   38m
kube-public       Active   38m
kube-system       Active   38m
myns              Active   6s
(4) Put a resource in a specific namespace
For example, create a Pod that belongs to the myns namespace:
vi nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: myns
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
(5) Create it:
kubectl apply -f nginx-pod.yaml
(6) Look at the Pods and resources in the myns namespace
kubectl get pods
kubectl get pods -n myns
kubectl get all -n myns
kubectl get pods --all-namespaces    # list the Pods in every namespace
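If you get tired of typing -n myns, you can make it the default namespace of the current kubectl context (a standard convenience, shown here as an optional sketch):

kubectl config set-context --current --namespace=myns
kubectl config view --minify | grep namespace:    # verify the change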
6. Network
6.1 Communication between containers in the same Pod
Now for Kubernetes networking. We know the smallest unit Kubernetes operates on is the Pod, so first think about how multiple containers inside one Pod talk to each other. The passage below from the official docs makes it clear: containers in the same Pod share the IP address and port space, so they can communicate without any extra work.
Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports.
What if they want to reach each other by container name? All the containers of a Pod are joined into the network of one special container, the so-called pause container (the Pod's infrastructure container), which holds the shared network namespace.
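A minimal sketch of this (the Pod and container names are made up for illustration): an nginx container and a busybox sidecar in the same Pod, where the sidecar reaches nginx simply via localhost because they share one network namespace:

apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo          # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: curl-sidecar
    image: busybox
    # talks to the nginx container over the shared network namespace
    command: ['sh', '-c', 'while true; do wget -qO- http://localhost:80 > /dev/null && echo ok; sleep 10; done']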
6.2 Pod-to-Pod communication inside the cluster
Next, communication between Pods. Every Pod has its own IP address, shared by all the containers inside it. Can Pods talk to each other through these IPs? The question has two dimensions: Pods on the same node, and Pods on different nodes. To test both, prepare two Pods, one running nginx and one running busybox.
nginx_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
busybox_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
(1) Start both Pods and check where they run
kubectl apply -f nginx_pod.yaml
kubectl apply -f busybox_pod.yaml
kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE
busybox     1/1     Running   0          49s     192.168.221.70   worker02-kubeadm-k8s
nginx-pod   1/1     Running   0          7m46s   192.168.14.1     worker01-kubeadm-k8s
Observation: nginx-pod has the IP 192.168.14.1 and the busybox Pod has the IP 192.168.221.70.
(2) Same cluster, same node
On worker01: ping 192.168.14.1
PING 192.168.14.1 (192.168.14.1) 56(84) bytes of data.
64 bytes from 192.168.14.1: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 192.168.14.1: icmp_seq=2 ttl=64 time=0.048 ms
On worker01: curl 192.168.14.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
(3) Same cluster, different nodes
On worker02: ping 192.168.14.1
[root@worker02-kubeadm-k8s ~]# ping 192.168.14.1
PING 192.168.14.1 (192.168.14.1) 56(84) bytes of data.
64 bytes from 192.168.14.1: icmp_seq=1 ttl=63 time=0.680 ms
64 bytes from 192.168.14.1: icmp_seq=2 ttl=63 time=0.306 ms
64 bytes from 192.168.14.1: icmp_seq=3 ttl=63 time=0.688 ms
On worker02: curl 192.168.14.1 also reaches nginx.
On the master:
ping/curl 192.168.14.1    # reaches the nginx-pod on worker01
ping 192.168.221.70       # reaches the busybox Pod on worker02
On worker01: ping 192.168.221.70 reaches the busybox Pod on worker02.
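You can run the same check from inside a Pod instead of from a node; for example, from the busybox Pod created above (kubectl exec is standard, and the IP is the one observed earlier):

kubectl exec -it busybox -- ping -c 2 192.168.14.1
kubectl exec -it busybox -- wget -qO- http://192.168.14.1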
6.3 Service inside the cluster: ClusterIP
Official docs: https://kubernetes.io/docs/concepts/services-networking/service/
The Pods above can already reach each other inside the cluster, but Pods are not stable: when they are managed by a Deployment they may be scaled out or in at any time, and their IP addresses change. What we want is a fixed IP that the rest of the cluster can always use. As described in the architecture overview earlier, Pods that are identical or related can be grouped by label into a Service, and the Service has a fixed IP: no matter how the Pods behind it are created and destroyed, they can always be reached through the Service's IP.
(1) Create the whoami-deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: jwilder/whoami
        ports:
        - containerPort: 8000
Apply it:
kubectl apply -f whoami-deployment.yaml
(2) Look at the Pods and the Service
kubectl get pods -o wide
The result looks like this:
whoami-deployment-5dd9ff5fd8-22k9n   192.168.221.80   worker02-kubeadm-k8s
whoami-deployment-5dd9ff5fd8-vbwzp   192.168.14.6     worker01-kubeadm-k8s
whoami-deployment-5dd9ff5fd8-zzf4d   192.168.14.7     worker01-kubeadm-k8s
(3) Access them normally from inside the cluster:
curl 192.168.221.80:8000
curl 192.168.14.6:8000
curl 192.168.14.7:8000
(4) Check the Services: there is no Service for whoami yet
kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19h
(5) Create a Service for whoami (this address is only reachable from inside the cluster)
kubectl expose deployment whoami-deployment
Then check again: there is now a Service of type ClusterIP named whoami-deployment with the IP 10.105.147.59.
kubectl get svc
[root@master-kubeadm-k8s ~]# kubectl get svc
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP    19h
whoami-deployment   ClusterIP   10.105.147.59   <none>        8000/TCP   23s
To delete the Service:
kubectl delete service whoami-deployment
(6) Access it through the Service's ClusterIP
[root@master-kubeadm-k8s ~]# curl 10.105.147.59:8000
I'm whoami-deployment-678b64444d-b2695
[root@master-kubeadm-k8s ~]# curl 10.105.147.59:8000
I'm whoami-deployment-678b64444d-hgdrk
[root@master-kubeadm-k8s ~]# curl 10.105.147.59:8000
I'm whoami-deployment-678b64444d-65t88
(7) Look at the details of whoami-deployment: an Endpoints entry connects the three concrete Pods
[root@master-kubeadm-k8s ~]# kubectl describe svc whoami-deployment
Name:              whoami-deployment
Namespace:         default
Labels:            app=whoami
Annotations:       <none>
Selector:          app=whoami
Type:              ClusterIP
IP:                10.105.147.59
Port:              <unset>  8000/TCP
TargetPort:        8000/TCP
Endpoints:         192.168.14.8:8000,192.168.221.81:8000,192.168.221.82:8000
Session Affinity:  None
Events:            <none>
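The same endpoint list can also be read directly from the Endpoints object that the Service maintains (a standard command; the exact output will differ on your cluster):

kubectl get endpoints whoami-deployment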
(8) Scale whoami up to 5 replicas
kubectl scale deployment whoami-deployment --replicas=5
(9) Access it again: curl 10.105.147.59:8000
(10) Look at the Service details again: kubectl describe svc whoami-deployment
(11) A Service does not have to be created with kubectl expose; you can also define it in a yaml file:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: ClusterIP
Conclusion: Services exist precisely because Pods are unstable. What we explored above is one Service type, ClusterIP, which is only reachable from inside the cluster.
6.4 Pods accessing external services
This is straightforward and there isn't much to say: the Pod simply makes the request.
6.5 External access to Pods in the cluster
Service-NodePort
NodePort is another Service type. Put simply, since the outside world can reach the IPs of the cluster's physical machines, the Service opens the same port, for example 32008, on every machine in the cluster and forwards it to the Pods.
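Instead of letting Kubernetes pick a random port, as kubectl expose does below, you could pin the port in a Service manifest. A sketch for the whoami Deployment used in this section (the Service name and the fixed nodePort 32008 are illustrative choices; nodePort must fall inside the cluster's NodePort range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: whoami-nodeport        # illustrative name
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 32008            # fixed node port instead of a random one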
(1) Create the Pods from whoami-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deployment
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: jwilder/whoami
        ports:
        - containerPort: 8000
Create it:
kubectl apply -f whoami-deployment.yaml
(2) Create a NodePort-type Service named whoami-deployment; before that, delete the Service created earlier:
kubectl delete svc whoami-deployment
kubectl expose deployment whoami-deployment --type=NodePort
[root@master-kubeadm-k8s ~]# kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP          21h
whoami-deployment   NodePort    10.99.108.82   <none>        8000:32041/TCP   7s
(3) Note the port 32041 above: it is actually opened on each physical machine in the cluster
lsof -i tcp:32041
netstat -ntlp|grep 32041
(4) Access it in a browser through a physical machine's IP
http://192.168.0.51:32041
curl 192.168.0.61:32041
NodePort does give external access to the Pods, but is it really a good idea? Not really: it occupies a port on every physical host.
Service-LoadBalancer
This usually requires support from a third-party cloud provider, so it comes with constraints.
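For completeness, a sketch of what such a Service would look like if the cluster ran on a cloud provider that provisions load balancers (the name is illustrative; on a bare-metal cluster like the one in this series the EXTERNAL-IP would simply stay pending):

apiVersion: v1
kind: Service
metadata:
  name: whoami-lb              # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: whoami
  ports:
  - port: 80
    targetPort: 8000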
Ingress
Official docs: https://kubernetes.io/docs/concepts/services-networking/ingress/
The official docs show that Ingress is what lets us reach Services inside the cluster. Before digging into Ingress itself, start from a concrete case: deploy tomcat in the K8S cluster and access it from a browser, i.e. from outside the cluster. The Service-NodePort approach from before would work, say exposing port 32008 and browsing to 192.168.0.61:32008, but NodePort is clearly not recommended for production. So let's satisfy the same requirement, external access to tomcat, with Ingress instead.
What exactly is Ingress? The official page explains it in detail, so I won't copy it here; open the document and read that page carefully from start to finish.
To get a feel for it, start from the Ingress controllers listed in the official docs and pick the NGINX Ingress Controller. You can also read about this controller on its GitHub project, which is sometimes easier to navigate than the official docs: https://github.com/kubernetes/ingress-nginx. It contains a README and links to the online documentation.
The deployment guide is here: https://kubernetes.github.io/ingress-nginx/deploy/
It explains how to deploy the controller from a yaml manifest, which you download with the command given on that page.
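The docs give the exact command for the release you pick; for the 0.26.1 release used below it looked roughly like the following (the URL is an assumption on my part; copy the exact command from the deploy page instead):

# path is an assumption; take the exact command from the ingress-nginx deploy docs
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/mandatory.yaml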
I downloaded this deploy.yaml ahead of time. Its content is shown below; note that the listing already includes the two adjustments explained later in this section (hostNetwork: true and a nodeSelector of name: ingress).
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        name: ingress
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
With the steps above you get an Ingress Controller. Next you define a rule for it through an Ingress resource; once the rule exists the Ingress Controller reads it and forwards traffic according to it.
The diagram below illustrates the idea. With that picture in mind, the whole path a web request takes to reach a Pod is clear, so all that is left is to turn the theory above into practice step by step and use Ingress to reach tomcat.
Completing the chain, with tomcat as the example
1) Define the my-tomcat.yaml file for the Pod, fronted by a Service:
Create the file:
vi my-tomcat.yaml
and paste in the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat
2) Create it:
kubectl apply -f my-tomcat.yaml
3) Check the resources
kubectl get pods
4) Once everything is created, view the details with:
kubectl get pods -o wide
5) The following shows that a tomcat-service was created:
kubectl get svc
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
tomcat-service   NodePort   10.105.51.97   <none>        80:31032/TCP   37s
With that, the last part of my sequence diagram is done, the part shown below.
In principle, to expose this to the outside world I would only need to switch this Service to NodePort, but as said before that is not how you do it in production. Doing it the production way means completing the second-to-last step of the sequence diagram: deploying the nginx Ingress Controller. The official command mentioned earlier downloads a very long mandatory.yaml, but as downloaded that file does not carry the nodeSelector label shown below.
If we deployed it as-is, the nginx-ingress-controller could be scheduled onto any machine in the cluster. To avoid that, use a label selector to pin it to one specific node; the next steps do exactly that.
6) Make sure the nginx-controller runs on node w1 (label the node; run this on the master):
kubectl label node w1 name=ingress
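You can confirm the label landed on w1 with a standard query (the output will vary per cluster):

kubectl get nodes --show-labels | grep name=ingress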
With the label in place, the next step is making ingress-nginx run on that node: the nodeSelector in the controller Deployment selects name: ingress, which pins it to the fixed node w1.
7) To run it in host-port fashion so that its ports are reachable from outside, add the following to the downloaded manifest (inside the controller Deployment's pod spec):
hostNetwork: true
Apart from those two additions, the improved mandatory.yaml is identical to the listing shown earlier in this section. The part that changes is the pod spec (spec.template.spec) of the nginx-ingress-controller Deployment:

spec:
  # wait up to five minutes for the drain of connections
  terminationGracePeriodSeconds: 300
  serviceAccountName: nginx-ingress-serviceaccount
  hostNetwork: true              # added: bind ports 80/443 directly on the node
  nodeSelector:
    name: ingress                # added: pin the controller to the node labeled name=ingress
    kubernetes.io/os: linux
Upload the modified mandatory.yaml to the master and create everything from it (make sure ports 80 and 443 on node w1 are not in use):
kubectl apply -f mandatory.yaml
Check how the resources in the ingress-nginx namespace are coming along; pulling the image can take a while:
kubectl get pods -n ingress-nginx
Once the image is pulled, the following shows that the controller is indeed running on w1:
kubectl get pods -n ingress-nginx -o wide
After the pull, it is worth checking that nothing went wrong with the created resources (if you are confident, this check can be skipped). First see which images the manifest needs:
cat mandatory.yaml |grep image
It prints the images to be pulled; try a manual docker pull on them, and if that succeeds the Pods were created fine.
8) List all the resources in the ingress-nginx namespace and make sure everything has been created (wait until it finishes):
kubectl get all -n ingress-nginx
Once these steps are done, the part shown in the diagram below is complete.
With the Ingress Controller deployed, check w1 and you will find ports 80 and 443 open. (I did not expose the controller with the official NodePort Service; with NodePort every node would get such a port, which is more ports than I want, so I used the host network/host port approach instead.)
lsof -i tcp:80
lsof -i tcp:443
9) The next step is writing the Ingress rule. Create a file called nginx-ingress.yaml:
#ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: tomcat.ghy.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-service
          servicePort: 80
Create it:
kubectl apply -f nginx-ingress.yaml
Check that the Ingress was created:
kubectl get ingress
The host in the rule above is a domain I don't actually own, so I add a DNS (or local hosts file) entry pointing it at the ingress node:
192.168.8.61 tomcat.ghy.com
10) Open a browser and visit tomcat.ghy.com: the tomcat page comes up.
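If you don't want to touch DNS or hosts files just to test, curl can supply the Host header directly against the ingress node's IP (the IP below is the one from the hosts entry above):

curl -H 'Host: tomcat.ghy.com' http://192.168.8.61/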
Summary:
If you want to use Ingress networking later on, all you have to define is the Ingress, the Service and the Pods, provided the nginx ingress controller is already set up.