Kubernetes Learning Path (6): Creating a K8s Application
-
I. The Concept of a Deployment
K8s itself does not provide networking; a third-party network plugin has to be deployed to build the cluster network so that containers on different nodes can reach one another.
A Pod is a logical concept in K8s and the unit that K8s manages. A Pod contains one or more containers, which communicate with each other over localhost, and each Pod needs its own IP address. Every Pod also carries labels.
POD–>RC–>RS–>Deployment (发展历程)
A Deployment represents one update operation that a user applies to the K8s cluster. It is an API object with a broader scope than a ReplicaSet and is also used to keep the desired number of Pod replicas running.
The operation can be creating a new service, updating an existing one, or rolling upgrade of a service. A rolling upgrade is in fact a composite operation: a new ReplicaSet is created, its replica count is raised to the desired number, and the replica count of the old ReplicaSet is reduced to 0. Such a composite operation is hard to describe with a single ReplicaSet, which is why the more general Deployment is used to describe it.
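To see this two-ReplicaSet mechanism in action, you can watch the ReplicaSets while an update runs. A minimal sketch, assuming a Deployment whose Pods carry the label app=nginx (as in the nginx-deployment example later in this post):

# In one terminal, watch the Deployment's ReplicaSets
kubectl get replicaset -l app=nginx -w
# In another terminal, trigger a rolling update
kubectl set image deployment/nginx-deployment nginx=nginx:1.15.2
# The old ReplicaSet is scaled down toward 0 while the new one is scaled up to the desired replica count.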
RC, RS and Deployment only guarantee the number of Pods that back a service; they do not solve the problem of how to access those Pods. A Pod is just one running instance of the service: it may be stopped on one node at any time and replaced by a new Pod with a new IP on another node, so it cannot offer the service at a fixed IP and port.
Providing a stable service requires service discovery and load balancing. The job of service discovery is to find the backend instances for the service a client wants to access.
In a K8s cluster, the object that clients access is the Service. Each Service is assigned a virtual IP that is valid inside the cluster, and the service is reached through this virtual IP from within the cluster.
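As a quick illustration, a sketch that assumes cluster DNS (kube-dns/CoreDNS) is running and that the nginx-service created later in this post already exists; a Pod inside the cluster can then reach the Service by name as well as by its virtual IP:

# Access the Service by its DNS name from a temporary Pod inside the cluster
kubectl run -it --rm dns-test --image=busybox --restart=Never -- wget -qO- http://nginx-service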
-
II. Creating Your First K8s Application
[root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 36000    # create an application named net-test, using the alpine image with 2 replicas
deployment.apps "net-test" created
[root@linux-node1 ~]# kubectl get pod -o wide    # check Pod status; the API Server reads this data from etcd
NAME                        READY     STATUS              RESTARTS   AGE       IP          NODE
net-test-7b949fc785-2v2qz   1/1       Running             0          56s       10.2.87.2   192.168.56.120
net-test-7b949fc785-6nrhm   0/1       ContainerCreating   0          56s       <none>      192.168.56.130
[root@linux-node1 ~]# kubectl get deployment net-test
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
net-test   2         2         2            2           22h
The kubectl get deployment command shows the status of net-test; the output above indicates that both replicas are running normally. While the Deployment is being created you can also run kubectl describe deployment net-test for more detail.
[root@linux-node1 ~]# kubectl describe deployment net-test
Name:                   net-test
Namespace:              default
CreationTimestamp:      Thu, 16 Aug 2018 15:41:29 +0800
Labels:                 run=net-test
Annotations:            deployment.kubernetes.io/revision=1
Selector:               run=net-test
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  run=net-test
  Containers:
   net-test:
    Image:        alpine
    Port:         <none>
    Host Port:    <none>
    Args:
      sleep
      360000
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   net-test-5767cb94df (2/2 replicas created)
Events:          <none>
Events is the Deployment's log and records how the whole ReplicaSet was started. From the creation process above you can see that a Deployment manages its Pods through a ReplicaSet.
[root@linux-node1 ~]# kubectl get replicaset    # list the ReplicaSets
NAME                  DESIRED   CURRENT   READY   AGE
net-test-5767cb94df   2         2         2       23h
[root@linux-node1 ~]# kubectl describe replicaset net-test-5767cb94df    # view the ReplicaSet in detail
Name:           net-test-5767cb94df
Namespace:      default
Selector:       pod-template-hash=1323765089,run=net-test
Labels:         pod-template-hash=1323765089
                run=net-test
Annotations:    deployment.kubernetes.io/desired-replicas=2
                deployment.kubernetes.io/max-replicas=3
                deployment.kubernetes.io/revision=1
Controlled By:  Deployment/net-test    # shows this ReplicaSet was created by the Deployment net-test
Replicas:       2 current / 2 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  pod-template-hash=1323765089
           run=net-test
  Containers:
   net-test:
    Image:        alpine
    Port:         <none>
    Host Port:    <none>
    Args:
      sleep
      360000
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:  <none>    # Events would show how the two replica Pods were created
[root@linux-node1 ~]# kubectl get pod    # list the Pods; both replicas are Running
NAME                        READY     STATUS    RESTARTS   AGE
net-test-5767cb94df-djt98   1/1       Running   0          22h
net-test-5767cb94df-zb8m4   1/1       Running   0          23h
[root@linux-node1 ~]# kubectl describe pod net-test-5767cb94df-djt98    # view the Pod in detail
Name:           net-test-5767cb94df-djt98
Namespace:      default
Node:           192.168.56.13/192.168.56.13
Start Time:     Thu, 16 Aug 2018 15:53:00 +0800
Labels:         pod-template-hash=1323765089
                run=net-test
Annotations:    <none>
Status:         Running
IP:             10.2.73.3
Controlled By:  ReplicaSet/net-test-5767cb94df
Containers:
  net-test:
    Container ID:  docker://c8e267326ed80f3cbe8111377c74dd1f016beaef513196b941165e180a5d5733
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
    Port:          <none>
    Host Port:     <none>
    Args:
      sleep
      360000
    State:          Running
      Started:      Thu, 16 Aug 2018 15:53:06 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mnqx5 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-mnqx5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mnqx5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:          <none>
Controlled By indicates that this Pod was created by ReplicaSet/net-test-5767cb94df. Events records the Pod's startup process; if an operation fails (for example, the image does not exist), the reason also shows up there.
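If a Pod gets stuck, the events are usually the fastest way to find out why. A small sketch, reusing the Pod name from the output above (yours will differ):

# Show the Events section of one Pod
kubectl describe pod net-test-5767cb94df-djt98
# Or list recent events for the whole namespace, ordered by time
kubectl get events --sort-by=.metadata.creationTimestamp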
To summarize the creation flow:
(1) The user creates a Deployment with kubectl.
(2) The Deployment creates a ReplicaSet.
(3) The ReplicaSet creates the Pods.
As shown in the figure:
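The same chain can be verified on the command line. A quick sketch using the net-test objects from above (label and names may differ in your cluster):

# The Deployment owns a ReplicaSet ...
kubectl get replicaset -l run=net-test
# ... and the ReplicaSet owns the Pods; ownerReferences names the parent object
kubectl get pod -l run=net-test -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}{"/"}{.items[0].metadata.ownerReferences[0].name}{"\n"}'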
-
III. Two Ways to Create Resources in K8s
Kubernetes supports two ways of creating resources:
(1) Create them directly with a kubectl command, specifying the resource's properties as command-line arguments. This is simple and intuitive and well suited to quick tests and experiments.
kubectl run net-test --image=alpine --replicas=2 sleep 36000
(2) Create them from a configuration file with kubectl create. The file describes the application and the desired state it should reach.
kubectl create -f nginx-deployment.yaml
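A closely related declarative command is kubectl apply, which creates the resource if it does not exist and otherwise patches it to match the file, so the same command can be re-run after editing the YAML. A usage sketch (not part of the original walkthrough):

# Create the resource, or update an existing one to match the file
kubectl apply -f nginx-deployment.yaml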
-
IV. Creating an Nginx Service from a Deployment YAML
- 1. Create the Deployment
[root@linux-node1 ~]# vim nginx-deployment.yaml    # create the application declaratively from a YAML file
apiVersion: apps/v1          # apiVersion is the version of this configuration format
kind: Deployment             # kind is the type of resource to create, here a Deployment
metadata:                    # metadata holds the resource's metadata; name is required
  name: nginx-deployment
  labels:
    app: nginx
spec:                        # the spec section describes this Deployment
  replicas: 3                # replicas sets the number of replicas; the default is 1
  selector:
    matchLabels:
      app: nginx
  template:                  # template defines the Pod template, the key part of the configuration
    metadata:                # metadata of the Pod; at least one label must be defined, and key and value can be chosen freely
      labels:
        app: nginx
    spec:                    # spec describes the Pod itself: the attributes of each container; name and image are required
      containers:
      - name: nginx
        image: nginx:1.13.12
        ports:
        - containerPort: 80
[root@linux-node1 ~]# kubectl create -f nginx-deployment.yaml    # create the nginx-deployment application
deployment.apps "nginx-deployment" created
- 2. Check the Deployment
[root@linux-node1 ~]# kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
net-test           2         2         2            2           32m
nginx-deployment   3         3         3            0           10s
[root@linux-node1 ~]# kubectl describe deployment nginx-deployment    # view the Deployment in detail
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Thu, 16 Aug 2018 16:13:37 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 0 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.13.12
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-6c45fc49cb (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set nginx-deployment-6c45fc49cb to 3
- 3. Check the Pods
[root@linux-node1 ~]# kubectl get pod    # the Pods are still being created, most likely pulling the image at this point
NAME                                READY     STATUS              RESTARTS   AGE
net-test-5767cb94df-djt98           1/1       Running             0          22m
net-test-5767cb94df-hcwv7           1/1       Unknown             0          34m
net-test-5767cb94df-zb8m4           1/1       Running             0          34m
nginx-deployment-6c45fc49cb-dmc22   0/1       ContainerCreating   0          2m
nginx-deployment-6c45fc49cb-fd8xm   0/1       ContainerCreating   0          2m
nginx-deployment-6c45fc49cb-sc8sh   0/1       ContainerCreating   0          2m
[root@linux-node1 ~]# kubectl describe pod nginx-deployment-6c45fc49cb-dmc22    # view the status of one specific Pod
[root@linux-node1 ~]# kubectl get pod -o wide    # creation succeeded, status is Running
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
net-test-5767cb94df-djt98           1/1       Running   0          24m       10.2.73.3   192.168.56.13
net-test-5767cb94df-hcwv7           1/1       Unknown   0          36m       10.2.10.2   192.168.56.12
net-test-5767cb94df-zb8m4           1/1       Running   0          36m       10.2.73.2   192.168.56.13
nginx-deployment-6c45fc49cb-dmc22   1/1       Running   0          4m        10.2.73.6   192.168.56.13
nginx-deployment-6c45fc49cb-fd8xm   1/1       Running   0          4m        10.2.73.4   192.168.56.13
nginx-deployment-6c45fc49cb-sc8sh   1/1       Running   0          4m        10.2.73.5   192.168.56.13
The Deployment, ReplicaSet, and Pods are all ready. To delete these resources, run kubectl delete deployment nginx-deployment or kubectl delete -f nginx-deployment.yaml.
- 4. Test Access to a Pod
[root@linux-node1 ~]# curl --head http://10.2.73.6
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 16 Aug 2018 08:18:14 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
- 5. Update the Deployment
[root@linux-node1 ~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.15.2 --record    # upgrade nginx from 1.13.12 to 1.15.2; --record saves the command in the rollout history
deployment.apps "nginx-deployment" image updated
[root@linux-node1 ~]# kubectl get deployment -o wide    # check the Deployment after the update; 4 current replicas means the rolling update is still in progress
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
net-test           2         2         2            2           39m   net-test     alpine         run=net-test
nginx-deployment   3         4         1            3           6m    nginx        nginx:1.15.2   app=nginx
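Instead of repeatedly polling kubectl get deployment, you can also wait for the update to finish. A small sketch:

# Block until the rollout completes (or report if it gets stuck)
kubectl rollout status deployment/nginx-deployment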
- 6. Check the Rollout History
[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment    # view the rollout history
deployments "nginx-deployment"
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment/nginx-deployment nginx=nginx:1.15.2 --record=true
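Revision 1 shows <none> under CHANGE-CAUSE because the Deployment was originally created without --record. The change cause can also be set explicitly through the kubernetes.io/change-cause annotation; a sketch of that alternative:

# Record a human-readable change cause for the current revision
kubectl annotate deployment/nginx-deployment kubernetes.io/change-cause="upgrade nginx to 1.15.2"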
- 7. Inspect a Specific Revision
[root@linux-node1 ~]# kubectl rollout history deployment/nginx-deployment --revision=1
deployments "nginx-deployment" with revision #1
Pod Template:
  Labels:       app=nginx
                pod-template-hash=2701970576
  Containers:
   nginx:
    Image:        nginx:1.13.12
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
- 8. Check the Updated Deployment and Access It
[root@linux-node1 ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
net-test-5767cb94df-djt98           1/1       Running   0          30m       10.2.73.3   192.168.56.13
net-test-5767cb94df-hcwv7           1/1       Unknown   0          42m       10.2.10.2   192.168.56.12
net-test-5767cb94df-zb8m4           1/1       Running   0          42m       10.2.73.2   192.168.56.13
nginx-deployment-64749d4b59-djttr   1/1       Running   0          37s       10.2.73.8   192.168.56.13
nginx-deployment-64749d4b59-jp7fw   1/1       Running   0          3m        10.2.73.7   192.168.56.13
nginx-deployment-64749d4b59-q4fsn   1/1       Running   0          33s       10.2.73.9   192.168.56.13
[root@linux-node1 ~]# curl --head http://10.2.73.7
HTTP/1.1 200 OK
Server: nginx/1.15.2    # the version has been upgraded to 1.15.2
Date: Thu, 16 Aug 2018 08:24:09 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 24 Jul 2018 13:02:29 GMT
Connection: keep-alive
ETag: "5b572365-264"
Accept-Ranges: bytes
- 9. Roll Back to the Previous Revision
[root@linux-node1 ~]# kubectl rollout undo deployment/nginx-deployment    # roll back to the previous revision
deployment.apps "nginx-deployment"
[root@linux-node1 ~]# kubectl get pod -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP           NODE
net-test-5767cb94df-djt98           1/1       Running   0          32m       10.2.73.3    192.168.56.13
net-test-5767cb94df-hcwv7           1/1       Unknown   0          43m       10.2.10.2    192.168.56.12
net-test-5767cb94df-zb8m4           1/1       Running   0          43m       10.2.73.2    192.168.56.13
nginx-deployment-6c45fc49cb-b9h84   1/1       Running   0          24s       10.2.73.11   192.168.56.13
nginx-deployment-6c45fc49cb-g4mrg   1/1       Running   0          26s       10.2.73.10   192.168.56.13
nginx-deployment-6c45fc49cb-k29kq   1/1       Running   0          21s       10.2.73.12   192.168.56.13
[root@linux-node1 ~]# curl --head http://10.2.73.10
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 16 Aug 2018 08:25:35 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

The rollback is complete. Note that the Pod IPs change on every update or rollback, so a stable virtual IP is needed to access the application; this is where the Service comes in.
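By default kubectl rollout undo goes back one revision; a specific revision from the history can also be targeted with --to-revision. A sketch, using the revision number shown in the history above:

# Roll back explicitly to revision 1 (the nginx:1.13.12 template)
kubectl rollout undo deployment/nginx-deployment --to-revision=1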
- 10. Access the Application Through the Service VIP
[root@linux-node1 ~]# vim nginx-service.yaml    # create the Service from a YAML file
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
[root@linux-node1 ~]# kubectl create -f nginx-service.yaml    # create the Service
service "nginx-service" created
[root@linux-node1 ~]# kubectl get service
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.1.0.1       <none>        443/TCP   4h
nginx-service   ClusterIP   10.1.213.126   <none>        80/TCP    15s    # this is the VIP
[root@linux-node2 ~]# curl --head http://10.1.213.126    # test the VIP from node2; it cannot be reached from node1 because kube-proxy is not installed there
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 16 Aug 2018 08:30:08 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
[root@linux-node2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 192.168.56.11:6443           Masq    1      0          0
TCP  10.1.213.126:80 rr
  -> 10.2.73.10:80                Masq    1      0          1
  -> 10.2.73.11:80                Masq    1      0          1
  -> 10.2.73.12:80                Masq    1      0          0
The IPVS (LVS) table shows that requests to the VIP 10.1.213.126 are load-balanced across the backend Pods.
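The Pod IPs behind the VIP can also be checked from the Kubernetes side; the Endpoints object of the Service should list the same addresses that appear as IPVS real servers. A quick sketch:

# The Endpoints object lists the Pod IP:port pairs currently backing the Service
kubectl get endpoints nginx-service
# Confirm the label selector the Service uses to pick its backends
kubectl describe service nginx-service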
- 11. Scale Out to 5 Replicas
[root@linux-node1 ~]# kubectl scale deployment nginx-deployment --replicas 5    # scale the application by setting the replica count directly to 5
deployment.extensions "nginx-deployment" scaled
[root@linux-node1 ~]# kubectl get pod    # the Deployment has grown to 5 replicas
NAME                                READY     STATUS    RESTARTS   AGE
net-test-5767cb94df-djt98           1/1       Running   0          38m
net-test-5767cb94df-hcwv7           1/1       Unknown   0          50m
net-test-5767cb94df-zb8m4           1/1       Running   0          50m
nginx-deployment-6c45fc49cb-b9h84   1/1       Running   0          6m
nginx-deployment-6c45fc49cb-g4mrg   1/1       Running   0          7m
nginx-deployment-6c45fc49cb-k29kq   1/1       Running   0          6m
nginx-deployment-6c45fc49cb-n9qkx   1/1       Running   0          24s
nginx-deployment-6c45fc49cb-xpx9s   1/1       Running   0          24s
[root@linux-node2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 192.168.56.11:6443           Masq    1      0          0
TCP  10.1.213.126:80 rr
  -> 10.2.73.10:80                Masq    1      0          0
  -> 10.2.73.11:80                Masq    1      0          0
  -> 10.2.73.12:80                Masq    1      0          1
  -> 10.2.73.13:80                Masq    1      0          0
  -> 10.2.73.14:80                Masq    1      0          0
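Instead of fixing the replica count by hand, the count can also be adjusted automatically based on CPU usage with a HorizontalPodAutoscaler. A minimal sketch, assuming a metrics source such as heapster or metrics-server is installed in the cluster (it is not part of this walkthrough):

# Keep between 2 and 10 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=80
# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa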