K8S Basic Setup and Usage

1. K8S Architecture

Besides the core components, there are some recommended add-ons:

Component               Description
kube-dns                Provides DNS for the entire cluster
Ingress Controller      Provides external access to services
Heapster                Provides resource monitoring
Dashboard               Provides a GUI
Federation              Provides clusters across availability zones
Fluentd-elasticsearch   Provides cluster log collection, storage, and querying

Core K8S features:

  • Self-healing: restarts failed containers, replaces and reschedules containers when a node becomes unavailable, kills containers that fail user-defined health checks, and does not advertise a container to clients until it is ready to serve.
  • Auto-scaling: monitors container CPU load; if the average rises above 80% the number of containers is increased, and if it falls below 10% the number is decreased.
  • Service discovery and load balancing: there is no need to modify your application to use an unfamiliar service discovery mechanism; Kubernetes gives containers their own IP addresses and a single DNS name for a group of containers, and load-balances across them.
  • Rolling upgrades and one-click rollback: Kubernetes rolls out changes to an application or its configuration progressively while monitoring application health, ensuring it never terminates all instances at once. If something goes wrong, Kubernetes rolls the change back for you, leveraging the growing ecosystem of deployment solutions.
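
The self-healing behavior above is driven by user-defined health checks. As a minimal sketch of such a check for an nginx-style container — none of this appears in the original, and the /healthz path, port, and timings are illustrative assumptions:

```yaml
# Hypothetical liveness probe: the kubelet GETs /healthz every 10s and
# restarts the container after repeated failures.
livenessProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```

This fragment would sit under a container entry in a pod spec like the ones shown later in this document.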

1.1 Environment Preparation

cat >> /etc/hosts <<EOF
10.0.0.202 master
10.0.0.215 etcd
10.0.0.197 node1
10.0.0.163 node2
EOF

1.2 Installing the etcd Node

[root@etcd ~]# yum install etcd -y
vim /etc/etcd/etcd.conf
Line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
Line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.215:2379"
systemctl start etcd.service
systemctl enable etcd.service
[root@etcd ~]# etcdctl -C http://10.0.0.215:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.215:2379
cluster is healthy

1.3 Installing the master Node

yum install kubernetes-master.x86_64 -y
vim /etc/kubernetes/apiserver 
Line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
Line 11: KUBE_API_PORT="--port=8080"
Line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.215:2379"
Line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

vim /etc/kubernetes/config
Line 22: KUBE_MASTER="--master=http://10.0.0.202:8080"

systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

Check that the services are healthy:

[root@master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

1.4 Installing the node Nodes

yum install kubernetes-node.x86_64 -y

vim /etc/kubernetes/config 
Line 22: KUBE_MASTER="--master=http://10.0.0.202:8080"

vim /etc/kubernetes/kubelet
Line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
Line 8:  KUBELET_PORT="--port=10250"
Line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.163" # this node's IP
Line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.202:8080"
Line 17: KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.202:5000/rhel7/pod-infrastructure:latest"   # the pod-infrastructure image must first be pushed to the local registry

systemctl enable kubelet.service
systemctl restart kubelet.service
systemctl enable kube-proxy.service
systemctl restart kube-proxy.service

Check from the master node:

[root@master ~]# kubectl get node
NAME         STATUS    AGE
10.0.0.163   Ready     1d
10.0.0.197   Ready     1d

1.5 Configuring the flannel Network on All Nodes

yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.215:2379#g' /etc/sysconfig/flanneld
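
The sed substitution above can be rehearsed on a scratch copy before touching the real /etc/sysconfig/flanneld; the file contents below are an assumed minimal version of that config, used only to demonstrate the edit:

```shell
# Rehearse the etcd-endpoint substitution on a scratch copy of the
# flanneld config instead of the real /etc/sysconfig/flanneld.
cat > /tmp/flanneld.demo <<'EOF'
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
EOF
sed -i 's#http://127.0.0.1:2379#http://10.0.0.215:2379#g' /tmp/flanneld.demo
grep FLANNEL_ETCD_ENDPOINTS /tmp/flanneld.demo
```

Once the output shows the 10.0.0.215 endpoint, the same command is safe to run against the real file.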

## On the etcd node:
etcdctl mk /atomic.io/network/config   '{ "Network": "172.18.0.0/16" }'

## On the master node:
yum install docker -y
systemctl enable flanneld.service 
systemctl restart flanneld.service 
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

## On the node nodes:
systemctl enable flanneld.service 
systemctl restart flanneld.service 
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service

vim /usr/lib/systemd/system/docker.service
# add this line under the [Service] section
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
systemctl daemon-reload 
systemctl restart docker

1.6 Configuring the master as an Image Registry

# all nodes
vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.0.202:5000'

systemctl restart docker

# master node (retry a few times if there are network issues)
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry

2. Common K8S Resources

2.1 The pod Resource

A pod is the smallest resource unit in k8s.

The main components of a k8s YAML file:

apiVersion: v1  # API version
kind: Pod       # resource type
metadata:       # attributes
spec:           # detailed specification
  1. Create a YAML file:
[root@master pod]# cat k8s_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.202:5000/nginx:latest
      ports:
        - containerPort: 80

Create the pod from the YAML file:

[root@master pod]# kubectl create -f k8s_pod.yaml 
pod "nginx" created
[root@master pod]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          12s

The pod fails to start; view the details with kubectl describe pod nginx:

Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  2m		2m		1	{default-scheduler }			Normal		Scheduled	Successfully assigned nginx to 10.0.0.197
  2m		43s		4	{kubelet 10.0.0.197}			Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

  1m	6s	6	{kubelet 10.0.0.197}		Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

As shown above, the local registry needs the pod-infrastructure:latest base image. Pull it with
docker pull tianyebj/pod-infrastructure and push it to the local registry:

docker pull tianyebj/pod-infrastructure
docker tag docker.io/tianyebj/pod-infrastructure 10.0.0.202:5000/rhel7/pod-infrastructure:latest
docker push 10.0.0.202:5000/rhel7/pod-infrastructure:latest
# also update the node nodes: /etc/kubernetes/kubelet
# change the setting: KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.202:5000/rhel7/pod-infrastructure:latest"

systemctl restart kubelet.service

Check the pod creation events again:

Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  14m		14m		1	{default-scheduler }			Normal		Scheduled	Successfully assigned nginx to 10.0.0.197
  13m		3m		7	{kubelet 10.0.0.197}			Warning		FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

  13m	58s	54	{kubelet 10.0.0.197}		Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

  45s	45s	1	{kubelet 10.0.0.197}	spec.containers{nginx}	Normal	Pulling			pulling image "10.0.0.202:5000/nginx:1.13"
  46s	41s	2	{kubelet 10.0.0.197}				Warning	MissingClusterDNS	kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  41s	41s	1	{kubelet 10.0.0.197}	spec.containers{nginx}	Normal	Pulled			Successfully pulled image "10.0.0.202:5000/nginx:1.13"
  41s	41s	1	{kubelet 10.0.0.197}	spec.containers{nginx}	Normal	Created			Created container with docker id 9254a49fc62c; Security:[seccomp=unconfined]
  41s	41s	1	{kubelet 10.0.0.197}	spec.containers{nginx}	Normal	Started			Started container with docker id 9254a49fc62c

The pod is now created successfully.


Here the pod infra image is tagged to the name that will actually be used:

[root@master pod]# docker tag docker.io/tianyebj/pod-infrastructure 10.0.0.202:5000/rhel7/pod-infrastructure:latest
[root@master pod]# docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
docker.io/registry                         latest              f32a97de94e1        9 months ago        25.8 MB
docker.io/tianyebj/pod-infrastructure      latest              34d3450d733b        2 years ago         205 MB
10.0.0.202:5000/rhel7/pod-infrastructure   latest              34d3450d733b        2 years ago         205 MB

Force-delete a pod resource:

[root@master pod]# kubectl delete pod nginx --force  --grace-period=0 -n  default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nginx" deleted

Prepare the docker images and push them to the local registry:

[root@master pod]# docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
docker.io/b3log/solo                       latest              4e49320bec76        2 days ago          152 MB
10.0.0.202:5000/b3log/solo                 latest              4e49320bec76        2 days ago          152 MB
docker.io/mysql                            5.7                 1e4405fe1ea9        2 weeks ago         437 MB
10.0.0.202:5000/nginx                      latest              231d40e811cd        2 weeks ago         126 MB
docker.io/nginx                            latest              231d40e811cd        2 weeks ago         126 MB
docker.io/registry                         latest              f32a97de94e1        9 months ago        25.8 MB
10.0.0.202:5000/rhel7/pod-infrastructure   latest              34d3450d733b        2 years ago         205 MB
docker.io/tianyebj/pod-infrastructure      latest              34d3450d733b        2 years ago         205 MB
[root@master pod]# docker tag docker.io/nginx 10.0.0.202:5000/nginx:latest
[root@master pod]# docker push 10.0.0.202:5000/nginx
[root@master pod]# docker tag docker.io/mysql:5.7  10.0.0.202:5000/mysql:5.7
[root@master pod]# docker push 10.0.0.202:5000/mysql

Create the pod:

[root@master pod]# kubectl create -f k8s_pod.yaml 
pod "nginx" created
[root@master pod]# kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          7s

Running multiple containers in one pod:

[root@master pod]# cat k8s_pod2.yaml   
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.202:5000/nginx:latest
      ports:
        - containerPort: 80
    - name: solo
      image: 10.0.0.202:5000/b3log/solo:latest
      command: ["sleep","10000"]
[root@master pod]# kubectl create -f k8s_pod2.yaml   
pod "test" created  
[root@master pod]# kubectl get pod -o wide   
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE  
nginx     1/1       Running   0          15h       172.18.57.2   10.0.0.197  
test      2/2       Running   0          15s       172.18.39.2   10.0.0.163

View detailed container info for a k8s pod:

docker inspect 3f6cdafa32f5

Creating a pod resource is what enables k8s's advanced features; each pod actually runs two kinds of containers:
pod container: the infrastructure container
nginx container: the business container

[root@node1 ~]# docker ps   
CONTAINER ID        IMAGE                                             COMMAND                  CREATED             STATUS              PORTS               NAMES  
3f6cdafa32f5        10.0.0.202:5000/nginx:latest                      "nginx -g 'daemon ..."   17 hours ago        Up 17 hours                             k8s_nginx.c74a033c_nginx_default_e5a7a5ea-1bf8-11ea-950d-fa163ef9cf10_5e72b386  
d0198f64f479        10.0.0.202:5000/rhel7/pod-infrastructure:latest   "/pod"                   17 hours ago        Up 17 hours                             k8s_POD.7b6a03f3_nginx_default_e5a7a5ea-1bf8-11ea-950d-fa163ef9cf10_7ee05d13

2.2 The ReplicationController Resource

Common operations on k8s resources:
kubectl create -f xxx.yaml
kubectl get pod|rc
kubectl describe pod nginx
kubectl delete pod nginx, or kubectl delete -f xxx.yaml
kubectl edit pod nginx
rc: keeps the specified number of pods alive at all times; the rc associates pods via a label selector

Even when pod containers are shut down, they are promptly restarted (self-healing).

The rc resource

High availability of pod resources:

[root@node2 ~]# docker ps -a -q
f14a9650d408
5a4a70671124
[root@node2 ~]# docker rm -f `docker ps -a -q`
f14a9650d408
5a4a70671124
[root@node2 ~]# docker ps -a -q
7f4142ea7eb9
2ec639b32132

After the pod's containers are force-removed, they are immediately recreated and started.

Creating an rc

rc: guarantees that the specified number of pods stays alive; the rc associates pods via a label selector.

  • Create the YAML file:
[root@master rc]# cat  k8s_rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.202:5000/nginx:1.13
        ports:
        - containerPort: 80
  • Create the rc:
[root@master rc]# pwd
/root/k8s/rc
[root@master rc]# kubectl create -f k8s_rc.yaml 
replicationcontroller "nginx" created

Check the status:

[root@master rc]# kubectl  get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     5         5         5         9m
  • Multiple pods have been started.

What the rc does:
When one of the nodes hosting pods goes down, the rc reschedules the pods from the dead node onto the surviving nodes.

Example 1:
After a pod is deleted, a new pod is created immediately.

[root@master rc]# kubectl get pod -o wide 
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
nginx         1/1       Running   0          3d        172.18.42.2   10.0.0.197
nginx-0lrwh   1/1       Running   0          37m       172.18.96.4   10.0.0.163
nginx-7wss6   1/1       Running   0          37m       172.18.42.4   10.0.0.197
nginx-g3pnd   1/1       Running   0          37m       172.18.96.3   10.0.0.163
nginx-j173g   1/1       Running   0          37m       172.18.96.5   10.0.0.163
nginx-pxlbx   1/1       Running   0          37m       172.18.42.5   10.0.0.197
nginxmysql    2/2       Running   0          1h        172.18.42.3   10.0.0.197
test          1/1       Running   0          1h        172.18.96.2   10.0.0.163
[root@master rc]# kubectl delete pod nginx-7wss6 
pod "nginx-7wss6" deleted
[root@master rc]# kubectl get pod -o wide 
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
nginx         1/1       Running   0          3d        172.18.42.2   10.0.0.197
nginx-0lrwh   1/1       Running   0          38m       172.18.96.4   10.0.0.163
nginx-g3pnd   1/1       Running   0          38m       172.18.96.3   10.0.0.163
nginx-j173g   1/1       Running   0          38m       172.18.96.5   10.0.0.163
nginx-j7jrw   1/1       Running   0          3s        172.18.42.6   10.0.0.197
nginx-pxlbx   1/1       Running   0          38m       172.18.42.5   10.0.0.197
nginxmysql    2/2       Running   0          1h        172.18.42.3   10.0.0.197
test          1/1       Running   0          1h        172.18.96.2   10.0.0.163
  • Example 2: deleting node2 produces the following:
[root@master rc]# kubectl get pod -o wide 
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
nginx         1/1       Running   0          3d        172.18.42.2   10.0.0.197
nginx-0lrwh   1/1       Running   0          3h        172.18.96.4   10.0.0.163
nginx-g3pnd   1/1       Running   0          3h        172.18.96.3   10.0.0.163
nginx-j173g   1/1       Running   0          3h        172.18.96.5   10.0.0.163
nginx-j7jrw   1/1       Running   0          2h        172.18.42.6   10.0.0.197
nginx-pxlbx   1/1       Running   0          3h        172.18.42.5   10.0.0.197
nginxmysql    2/2       Running   1          3h        172.18.42.3   10.0.0.197
test          1/1       Running   0          3h        172.18.96.2   10.0.0.163
[root@master rc]# kubectl delete node 10.0.0.163
node "10.0.0.163" deleted
[root@master rc]# kubectl get pod -o wide 
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
nginx         1/1       Running   0          3d        172.18.42.2   10.0.0.197
nginx-fl07j   1/1       Running   0          2s        172.18.42.7   10.0.0.197
nginx-j7jrw   1/1       Running   0          2h        172.18.42.6   10.0.0.197
nginx-jp6jg   1/1       Running   0          2s        172.18.42.4   10.0.0.197
nginx-pxlbx   1/1       Running   0          3h        172.18.42.5   10.0.0.197
nginx-zr9mw   1/1       Running   0          2s        172.18.42.8   10.0.0.197
nginxmysql    2/2       Running   1          3h        172.18.42.3   10.0.0.197

After node2's kubelet restarts, it automatically re-registers with the cluster.

How does the rc associate with its pods?

After using kubectl edit pod nginx to give another pod the same label myweb, the rc by default evicts the pod with the shortest lifetime to keep the replica count.
So the rc's pods are indeed maintained by the label selector.

Rolling upgrades with rc

  • Version upgrade:
[root@master rc]# kubectl rolling-update nginx -f k8s_rc1.yaml --update-period=10s

--update-period=10s: pause 10s between replacing pods
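
The k8s_rc1.yaml used by the rolling-update command is not shown in the original. Based on how kubectl rolling-update works (it requires a different rc name, and typically a different selector and image), it would look roughly like this — the myweb2 selector and :latest tag are assumptions, while the nginx2 name matches the rollback command below:

```yaml
# Assumed contents of k8s_rc1.yaml: a new rc name, selector, and image
# so rolling-update can replace the old rc's pods one by one.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2              # must differ from the old rc's name
spec:
  replicas: 5
  selector:
    app: myweb2             # assumed: rolling-update needs a distinct selector
  template:
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
      - name: myweb
        image: 10.0.0.202:5000/nginx:latest   # upgraded image
        ports:
        - containerPort: 80
```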

Pick a pod container and test the nginx version: the upgrade succeeded.

  • Version rollback:
[root@master rc]# kubectl rolling-update nginx2 -f k8s_rc.yaml --update-period=1s

Test the version again to confirm the rollback.

  • The upgrade and rollback configuration files.

To conclude, a summary of the features and roles of the RC (Replica Set):

◎ In most cases, we define an RC to create pods and automatically control the replica count.
◎ An RC includes a complete pod definition template.
◎ An RC controls pod replicas automatically through the Label Selector mechanism.
◎ Changing the replica count in an RC scales pods up or down.
◎ Changing the image version in the RC's pod template performs a rolling upgrade.

2.3 The service Resource

A service exposes a pod's ports to the outside; as the architecture diagram shows, traffic to any service goes through the kube-proxy component.

Create a service

[root@master svc]# cat k8s_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort  #ClusterIP
  ports:
    - port: 80          #clusterIP
      nodePort: 30000   #node port
      targetPort: 80    #pod port
  selector:
    app: myweb2
[root@master svc]# kubectl create -f k8s_svc.yaml


Note:
The svc's label selector must match the rc's labels, otherwise the backend services cannot be reached.

Use kubectl edit svc myweb to change the selector so it matches the rc's label.
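
After that edit, the relevant part of the service spec would look like this (a sketch of the assumed end state; only the selector changes):

```yaml
# Service selector after kubectl edit: it now matches the rc's pod label.
spec:
  selector:
    app: myweb    # was: myweb2, which matched no pods
```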

  • Checking again, the service endpoints now correspond to the pod IPs.
  • Check the listening port on the node.
  • Test the service.

Testing load balancing

  • Scale down the pods for the test:
[root@master svc]# kubectl scale rc nginx --replicas=2

This is the command for adjusting the replica count.

  • Enter a container and modify the index page:
[root@master svc]# kubectl exec -it nginx-175x8 /bin/bash
root@nginx-175x8:/# cd /usr/share/nginx/html/
root@nginx-175x8:/usr/share/nginx/html# ls                  
50x.html  index.html
root@nginx-175x8:/usr/share/nginx/html# echo 'cuijianzhe' > index.html

Test: repeated requests are now spread across the remaining pods.

  • The default NodePort range is 30000-32767.
    To change the port range:
[root@master svc]# vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"

[root@master svc]# systemctl restart kube-apiserver
  • Add an svc on port 3000:
[root@master svc]# cat k8s_svc2.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myweb2
spec:
  type: NodePort  #ClusterIP
  ports:
    - port: 80          #clusterIP
      nodePort: 3000   #node port
      targetPort: 80    #pod port
  selector:
    app: myweb2

Change the svc to the common label myweb:

kubectl edit svc myweb
[root@master svc]# kubectl  get svc -o wide
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE       SELECTOR
kubernetes   10.254.0.1       <none>        443/TCP        3d        <none>
myweb        10.254.134.169   <nodes>       80:30000/TCP   55m       app=myweb
myweb2       10.254.55.214    <nodes>       80:3000/TCP    2m        app=myweb


Creating a service resource with a single command

kubectl scale rc nginx --replicas=2
kubectl exec -it pod_name /bin/bash
kubectl expose rc nginx --type=NodePort --port=80


2.4 The deployment Resource

Because a rolling upgrade with rc interrupts access to the service (the new pods carry different labels, so the svc's selector no longer matches them), k8s introduced the deployment resource.

  • Create a deployment:
[root@master deploy]# cat k8s_deploy.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.202:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
[root@master deploy]# kubectl create -f k8s_deploy.yaml 
deployment "nginx-deployment" created
[root@master deploy]# kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s

Expose a port for the deployment; a new svc appears:

[root@master deploy]# kubectl expose deployment nginx-deployment --type=NodePort --port=80
service "nginx-deployment" exposed
[root@master deploy]# kubectl get svc 
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         10.254.0.1       <none>        443/TCP        3d
myweb              10.254.134.169   <nodes>       80:30000/TCP   17h
myweb2             10.254.55.214    <nodes>       80:3000/TCP    16h
nginx-deployment   10.254.2.186     <nodes>       80:33604/TCP   13s

Test with curl to confirm the service responds.

Rolling upgrade:

Edit the configuration with kubectl edit deployment nginx-deployment:

[root@master deploy]# kubectl edit deployment nginx-deployment 
# change - image: 10.0.0.202:5000/nginx:1.13 to
- image: 10.0.0.202:5000/nginx:latest

A curl test confirms the upgrade completed.

Upgrade strategy

[root@master deploy]# kubectl edit deployment nginx-deployment
...
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
...

maxSurge is the number of extra pods allowed above the desired count during an update, and maxUnavailable is the number of pods allowed to be unavailable; with replicas: 3 and both set to 1, at most 4 pods exist and at least 2 are serving at any moment during the rollout.

Upgrade rollback

  • Method 1:
[root@master deploy]# kubectl rollout history deployment nginx-deployment
deployments "nginx-deployment"
REVISION	CHANGE-CAUSE
2		<none>
3		<none>
[root@master deploy]# kubectl rollout undo deployment nginx-deployment
deployment "nginx-deployment" rolled back
[root@master deploy]# kubectl rollout history deployment nginx-deployment
deployments "nginx-deployment"
REVISION	CHANGE-CAUSE
3		<none>
4		<none>

After the rollback, the previous version is served again.

Roll back to a specified revision:

[root@master deploy]# kubectl rollout undo deployment nginx-deployment --to-revision=4
deployment "nginx-deployment" rolled back


  • Method 2: with method 1 the revision history has no detailed image versions, so delete the deployment and create it from the command line with --record; the history then shows the exact image version for each revision.
[root@master deploy]# kubectl delete deployment nginx-deployment
deployment "nginx-deployment" deleted
[root@master deploy]# kubectl rollout history deployment nginx 
deployments "nginx"
REVISION	CHANGE-CAUSE
1		kubectl run nginx --image=10.0.0.202:5000/nginx:1.13 --replicas=3 --record

Upgrade the version:

[root@master deploy]# kubectl set image deployment nginx nginx=10.0.0.202:5000/nginx:1.16
deployment "nginx" image updated
[root@master deploy]# kubectl rollout history deployment nginx 
deployments "nginx"
REVISION	CHANGE-CAUSE
1		kubectl run nginx --image=10.0.0.202:5000/nginx:1.13 --replicas=3 --record
2		kubectl edit deployment nginx
3		kubectl set image deployment nginx nginx=10.0.0.202:5000/nginx:1.16

[root@master deploy]# kubectl set image deployment nginx nginx=10.0.0.202:5000/nginx:latest
deployment "nginx" image updated
[root@master deploy]# kubectl rollout history deployment nginx 
deployments "nginx"
REVISION	CHANGE-CAUSE
1		kubectl run nginx --image=10.0.0.202:5000/nginx:1.13 --replicas=3 --record
3		kubectl set image deployment nginx nginx=10.0.0.202:5000/nginx:1.16
4		kubectl set image deployment nginx nginx=10.0.0.202:5000/nginx:latest

Deployment upgrade and rollback:
Create a deployment from the command line:
kubectl run nginx --image=10.0.0.202:5000/nginx:1.13 --replicas=3 --record
Upgrade the version from the command line:
kubectl set image deploy nginx nginx=10.0.0.202:5000/nginx:1.15
View all historical revisions of the deployment:
kubectl rollout history deployment nginx
Roll the deployment back to the previous revision:
kubectl rollout undo deployment nginx
Roll the deployment back to a specified revision:
kubectl rollout undo deployment nginx --to-revision=2

3. Tomcat + MySQL Exercise

K8S containers reach one another through the service VIP (ClusterIP) address.
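
Concretely, the tomcat pod receives the MySQL endpoint through the MYSQL_SERVICE_HOST/MYSQL_SERVICE_PORT environment variables defined below and can assemble its database address from them. This sketch only demonstrates the substitution; the demo database name is made up:

```shell
# How an app in the tomcat pod could assemble its MySQL address from the
# injected environment variables (values mirror the svc created below).
MYSQL_SERVICE_HOST=10.254.76.200
MYSQL_SERVICE_PORT=3306
echo "jdbc:mysql://${MYSQL_SERVICE_HOST}:${MYSQL_SERVICE_PORT}/demo"
```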

  1. Write the MySQL YAML config file and create the RC resource:
[root@master tomcat_demo]# cat mysql-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 10.0.0.202:5000/mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: '598941324'
[root@master tomcat_demo]# kubectl create -f mysql-rc.yaml 
replicationcontroller "mysql" created

2. Write the mysql-svc YAML config file and create the Service resource:

[root@master tomcat_demo]# cat mysql-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
 # type: NodePort  #ClusterIP
  ports:
    - port: 3306          
      targetPort: 3306    #pod port
  selector:
    app: mysql
[root@master tomcat_demo]# kubectl create -f mysql-svc.yaml 
service "mysql" created
[root@master tomcat_demo]# kubectl get svc
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         10.254.0.1       <none>        443/TCP        4d
mysql              10.254.76.200    <none>        3306/TCP       1m
myweb              10.254.134.169   <nodes>       80:30000/TCP   1d
myweb2             10.254.55.214    <nodes>       80:3000/TCP    1d
nginx-deployment   10.254.2.186     <nodes>       80:33604/TCP   1d
  3. Write the Tomcat YAML config file and create the Tomcat RC resource:
[root@master tomcat_demo]# cat tomcat-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: tomcat
spec:
  replicas: 1
  selector:
    app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: 10.0.0.202:5000/tomcat:latest
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST
          value: '10.254.76.200'     # the MySQL CLUSTER-IP from kubectl get svc above
        - name: MYSQL_SERVICE_PORT
          value: '3306'
  4. Write the tomcat-svc YAML config file and create the tomcat-svc resource:
[root@master tomcat_demo]# cat tomcat-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort  #ClusterIP
  ports:
    - port: 8080          #clusterIP
      nodePort: 30008   #node port
      targetPort: 8080    #pod port
  selector:
    app: tomcat
[root@master tomcat_demo]# kubectl create -f tomcat-svc.yaml 
service "tomcat" created

Test the tomcat service by browsing to any node's IP on port 30008.

posted @ 2022-04-25 15:20  夏尔_717  阅读(122)  评论(0编辑  收藏  举报