Commonly used Kubernetes resources
I. Pod: the smallest resource unit in k8s
The Pod is the smallest resource unit in Kubernetes. It consists of one or more containers running on the same node that share a network namespace and storage volumes. Besides the application containers, each Pod normally includes an initial pause container that sets up the shared network and storage. Every Pod has this special "root container", the pause container, whose image is part of the Kubernetes platform; in addition to it, a Pod contains one or more closely related user business containers.
1. Why Pods are designed this way:
1) The pause "root container" is independent of the business logic and unlikely to die, so its state can represent the state of the whole container group; this makes it simple to judge and act on the group as a single unit.
2) The business containers in a Pod share the pause container's IP and the volumes mounted on it, which simplifies communication between closely related business containers and neatly solves file sharing between them.
Kubernetes assigns every Pod a unique IP address, the Pod IP, which all containers in the Pod share. Kubernetes requires the underlying network to support direct TCP/IP communication between any two Pods in the cluster, usually implemented with virtual layer-2 networking. As a result, a container in one Pod can communicate directly with the containers of a Pod on another host.
2. Pod types
There are two kinds of Pods: regular Pods and static Pods.
A static Pod is not stored in Kubernetes' etcd store; instead it is kept as a file on a specific node and runs only on that node.
A regular Pod, once created, is stored in etcd, then scheduled by the Kubernetes master to a specific node and bound to it; the kubelet on that node instantiates the Pod as a set of related Docker containers and starts them. By default, when a container in a Pod stops, Kubernetes detects this and restarts the Pod; if the node the Pod runs on goes down, all Pods on that node are rescheduled to other nodes.
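As a minimal sketch of a static Pod (an illustration, not part of the original walkthrough; it assumes the kubelet supports the --pod-manifest-path flag, which older kubelets spell --config), point the kubelet at a manifest directory and drop a Pod definition into it; the kubelet runs it without going through the API server or the scheduler:
# In /etc/kubernetes/kubelet on the node (assumed flag and path):
KUBELET_ARGS="--pod-manifest-path=/etc/kubernetes/manifests"
# /etc/kubernetes/manifests/static-web.yaml (hypothetical file):
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
After the kubelet is restarted, the static Pod should appear in kubectl get pods as a mirror Pod named static-web-<node name>.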
3. Creating a Pod
[root@kub_master ~]# mkdir k8s
[root@kub_master ~]# cd k8s/
[root@kub_master k8s]# mkdir pod
[root@kub_master k8s]# cd pod/
[root@kub_master pod]# vim nginx_pod.yaml
[root@kub_master pod]# cat nginx_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
[root@kub_master pod]# kubectl create -f nginx_pod.yaml
pod "nginx" created
[root@kub_master pod]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 18s
# Inspect the pod with the following command
[root@kub_master pod]# kubectl describe pod nginx
# The events show an error: the default pause (pod-infrastructure) image cannot be pulled
# Find the cause:
[root@kub_node1 ~]# ll /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
lrwxrwxrwx 1 root root 27 Sep 20 23:29 /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt -> /etc/rhsm/ca/redhat-uep.pem
# Fix: the default pause image comes from registry.access.redhat.com, and the certificate symlink above typically points at a file that is missing unless the RHSM certificate package is installed, so the pull fails. Switch the kubelet to a pod-infrastructure image from Docker Hub instead:
[root@kub_node1 ~]# docker search pod-infrastructure
INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
docker.io docker.io/tianyebj/pod-infrastructure registry.access.redhat.com/rhel7/pod-infra... 2
docker.io docker.io/w564791/pod-infrastructure latest 1
docker.io docker.io/xiaotech/pod-infrastructure registry.access.redhat.com/rhel7/pod-infra... 1 [OK]
docker.io docker.io/092800/pod-infrastructure 0
docker.io docker.io/1127566696/pod-infrastructure K8S pod-infrastructure 0
docker.io docker.io/812557942/pod-infrastructure 0
[root@kub_node1 ~]# vim /etc/kubernetes/kubelet
[root@kub_node1 ~]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.184"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.212:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/tianyebj/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
[root@kub_node1 ~]# systemctl restart kubelet
# To save time, manually copy the pod-infrastructure image archive to all nodes and load it
[root@kub_master ~]# docker load -i pod-infrastructure-latest.tar.gz
df9d2808b9a9: Loading layer [==================================================>] 202.3 MB/202.3 MB
0a081b45cb84: Loading layer [==================================================>] 10.24 kB/10.24 kB
ba3d4cbbb261: Loading layer [==================================================>] 12.51 MB/12.51 MB
Loaded image: docker.io/tianyebj/pod-infrastructure:latest
[root@kub_master ~]# docker load -i docker_nginx1.13.tar.gz
d626a8ad97a1: Loading layer [==================================================>] 58.46 MB/58.46 MB
82b81d779f83: Loading layer [==================================================>] 54.21 MB/54.21 MB
7ab428981537: Loading layer [==================================================>] 3.584 kB/3.584 kB
Loaded image: docker.io/nginx:1.13
# Delete the old pod and recreate it
[root@kub_master pod]# kubectl delete pod nginx
[root@kub_master pod]# kubectl create -f nginx_pod.yaml
pod "nginx" created
[root@kub_master pod]# kubectl get pod nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 0/1 ContainerCreating 0 4m <none> 192.168.0.212
# After a while
[root@kub_master pod]# kubectl describe pod nginx
Name: nginx
Namespace: default
Node: 192.168.0.212/192.168.0.212
Start Time: Mon, 21 Sep 2020 23:33:55 +0800
Labels: app=web
Status: Running
IP: 172.16.81.3
Controllers: <none>
Containers:
nginx:
Container ID: docker://9b7ce2eb3a825e1d0ac275048f82190ee8c3aef6bf44e01b3413c522ef7a023b
Image: nginx:1.13
Image ID: docker://sha256:ae513a47849c895a155ddfb868d6ba247f60240ec8495482eca74c4a2c13a881
Port: 80/TCP
State: Running
Started: Mon, 21 Sep 2020 23:38:49 +0800
Ready: True
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 5m 1 {default-scheduler } Normal Scheduled Successfully assigned nginx to 192.168.0.212
21s 21s 1 {kubelet 192.168.0.212} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for docker.io/tianyebj/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (net/http: request canceled)"
7s 7s 2 {kubelet 192.168.0.212} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
7s 7s 1 {kubelet 192.168.0.212} spec.containers{nginx} Normal Pulled Container image "nginx:1.13" already present on machine
7s 7s 1 {kubelet 192.168.0.212} spec.containers{nginx} Normal Created Created container with docker id 9b7ce2eb3a82; Security:[seccomp=unconfined]
7s 7s 1 {kubelet 192.168.0.212} spec.containers{nginx} Normal Started Started container with docker id 9b7ce2eb3a82
[root@kub_master pod]# kubectl get pod nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx 1/1 Running 0 4m 172.16.81.3 192.168.0.212
The container was eventually created, but it took a long time. Using a private registry solves the slow image downloads and therefore the slow container startup.
4. Use the private registry configured earlier on the master node and create a Pod resource
# Push the pod-infrastructure:latest and nginx images to the private registry
[root@kub_node1 ~]# docker tag docker.io/tianyebj/pod-infrastructure:latest 192.168.0.212:5000/pod-infrastructure:latest
[root@kub_node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.0.212:5000/busybox latest 6858809bf669 12 days ago 1.23 MB
docker.io/busybox latest 6858809bf669 12 days ago 1.23 MB
docker.io/nginx 1.13 ae513a47849c 2 years ago 109 MB
192.168.0.212:5000/pod-infrastructure latest 34d3450d733b 3 years ago 205 MB
docker.io/tianyebj/pod-infrastructure latest 34d3450d733b 3 years ago 205 MB
[root@kub_node1 ~]# docker push 192.168.0.212:5000/pod-infrastructure:latest
The push refers to a repository [192.168.0.212:5000/pod-infrastructure]
ba3d4cbbb261: Pushed
0a081b45cb84: Pushed
df9d2808b9a9: Pushed
latest: digest: sha256:a378b2d7a92231ffb07fdd9dbd2a52c3c439f19c8d675a0d8d9ab74950b15a1b size: 948
[root@kub_node1 ~]# docker tag docker.io/nginx:1.13 192.168.0.212:5000/nginx:1.13
[root@kub_node1 ~]# docker push 192.168.0.212:5000/nginx:1.13
The push refers to a repository [192.168.0.212:5000/nginx]
7ab428981537: Pushed
82b81d779f83: Pushed
d626a8ad97a1: Pushed
1.13: digest: sha256:e4f0474a75c510f40b37b6b7dc2516241ffa8bde5a442bde3d372c9519c84d90 size: 948
# Check on the registry host
[root@kub_master pod]# ll /opt/myregistry/docker/registry/v2/repositories/
total 12
drwxr-xr-x 5 root root 4096 Sep 21 22:19 busybox
drwxr-xr-x 5 root root 4096 Sep 22 00:08 nginx
drwxr-xr-x 5 root root 4096 Sep 21 23:56 pod-infrastructure
# Modify the configuration files (on all node hosts)
[root@kub_node1 ~]# vim /etc/sysconfig/docker
[root@kub_node1 ~]# grep -v "^#" /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=192.168.0.212:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
DOCKER_CERT_PATH=/etc/docker
fi
[root@kub_node1 ~]# vim /etc/kubernetes/kubelet
[root@kub_node1 ~]# tail /etc/kubernetes/kubelet
KUBELET_HOSTNAME="--hostname-override=192.168.0.184"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.212:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.212:5000/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
# Restart the docker and kubelet services
[root@kub_node1 ~]# systemctl restart docker
[root@kub_node1 ~]# systemctl restart kubelet
# Create the pod
[root@kub_master pod]# vim test_pod.yaml
[root@kub_master pod]# cat test_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
  - name: test
    image: 192.168.0.212:5000/nginx:1.13
    ports:
    - containerPort: 80
[root@kub_master pod]# kubectl create -f test_pod.yaml
pod "test" created
[root@kub_master pod]# kubectl get pod test -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test 1/1 Running 0 31s 172.16.46.2 192.168.0.184
[root@kub_master pod]# curl -I 172.16.46.2
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 21 Sep 2020 16:23:28 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
# Check the containers on node 192.168.0.184
[root@kub_node1 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
95da50c411ed 192.168.0.212:5000/nginx:1.13 "nginx -g 'daemon ..." 21 hours ago Up 21 hours k8s_test.7460409_test_default_74d1075b-fc26-11ea-bef8-fa163e38ad0d_3d0a3cc5
499dbe7b861f 192.168.0.212:5000/pod-infrastructure:latest "/pod" 21 hours ago Up 21 hours k8s_POD.a152028d_test_default_74d1075b-fc26-11ea-bef8-fa163e38ad0d_09ac03b8
e72d8438c415 docker.io/busybox:latest "sh" 25 hours ago Exited (0) 24 hours ago reverent_snyder
[root@kub_node1 ~]# docker inspect 499dbe7b861f |grep -i ipaddress
"SecondaryIPAddresses": null,
"IPAddress": "172.16.46.2",
"IPAddress": "172.16.46.2",
[root@kub_node1 ~]# docker inspect 95da50c411ed |grep -i ipaddress
"SecondaryIPAddresses": null,
"IPAddress": "",
This shows that creating one Pod resource in k8s makes Docker start at least two containers: the business container (nginx) and the infrastructure (pause) container, and these containers share the same Pod IP address.
5. Example: one Pod with multiple containers
# Write the YAML file for the pod
[root@kub_master pod]# vim nginx-busybox.yaml
[root@kub_master pod]# cat nginx-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test2
  labels:
    app: web
    name: nginx
spec:
  containers:
  - name: nginx
    image: 192.168.0.212:5000/nginx:1.13
    ports:
    - containerPort: 80
  - name: busybox
    image: docker.io/busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sleep","3600"]
    ports:
    - containerPort: 81
[root@kub_master pod]# kubectl create -f nginx-busybox.yaml
pod "test2" created
[root@kub_master pod]# kubectl get pods
NAME READY STATUS RESTARTS AGE
test 1/1 Running 0 22h
test2 2/2 Running 0 8s
[root@kub_master pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test 1/1 Running 0 22h 172.16.46.2 192.168.0.184
test2 2/2 Running 0 2m 172.16.46.3 192.168.0.184
[root@kub_master pod]# curl -I 172.16.46.3
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Tue, 22 Sep 2020 14:27:59 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
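Because the nginx and busybox containers share the pause container's network namespace, nginx should also be reachable from inside the busybox container on localhost. A quick hedged check (not part of the original walkthrough; wget is built into busybox):
kubectl exec test2 -c busybox -- wget -qO- http://127.0.0.1
This should print the same nginx welcome page that curl returns for the Pod IP.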
# Check the Docker containers
[root@kub_node1 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cb346021bbd1 docker.io/busybox:latest "sleep 3600" 3 minutes ago Up 3 minutes k8s_busybox.288309fa_test2_default_43ff2144-fcdf-11ea-8a8e-fa163e38ad0d_dbd6e67b
344ed99373d9 192.168.0.212:5000/nginx:1.13 "nginx -g 'daemon ..." 3 minutes ago Up 3 minutes k8s_nginx.2470046d_test2_default_43ff2144-fcdf-11ea-8a8e-fa163e38ad0d_9060c480
54cb803f005b 192.168.0.212:5000/pod-infrastructure:latest "/pod" 3 minutes ago Up 3 minutes k8s_POD.f8802539_test2_default_43ff2144-fcdf-11ea-8a8e-fa163e38ad0d_b1a0c303
95da50c411ed 192.168.0.212:5000/nginx:1.13 "nginx -g 'daemon ..." 22 hours ago Up 22 hours k8s_test.7460409_test_default_74d1075b-fc26-11ea-bef8-fa163e38ad0d_3d0a3cc5
499dbe7b861f 192.168.0.212:5000/pod-infrastructure:latest "/pod" 22 hours ago Up 22 hours k8s_POD.a152028d_test_default_74d1075b-fc26-11ea-bef8-fa163e38ad0d_09ac03b8
e72d8438c415 docker.io/busybox:latest "sh" 25 hours ago Exited (0) 25 hours ago reverent_snyder
[root@kub_node1 ~]# docker inspect 54cb803f005b |grep -i ipaddress
"SecondaryIPAddresses": null,
"IPAddress": "172.16.46.3",
"IPAddress": "172.16.46.3",
6. Other Pod operations
1) Delete a Pod
[root@kub_master pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test 1/1 Running 0 22h 172.16.46.2 192.168.0.184
test2 2/2 Running 0 6m 172.16.46.3 192.168.0.184
[root@kub_master pod]# kubectl delete pod test --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test" deleted
[root@kub_master pod]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test2 2/2 Running 0 8m 172.16.46.3 192.168.0.184
2) View detailed information
[root@kub_master pod]# kubectl describe pod test2
3) Update a Pod
[root@kub_master pod]# kubectl apply -f nginx-busybox.yaml
pod "test2" configured
II. Replication Controller
The Replication Controller (RC) is one of the core concepts in Kubernetes. It defines a desired state, declaring that the number of replicas of a given Pod must match an expected value at all times. An RC definition contains the following parts:
1) The expected number of Pod replicas (replicas)
2) A Label Selector used to select the target Pods
3) A Pod template used to create new Pods when the replica count falls below the expected value
1. What an RC does
Once an application is hosted on Kubernetes, Kubernetes must keep it running; that is the RC's job: it ensures that the specified number of Pods is running at all times. On top of this, the RC offers higher-level features such as rolling upgrades and rollbacks.
2. Creating an RC
[root@kub_master k8s]# mkdir rc
[root@kub_master k8s]# cd rc/
[root@kub_master rc]# vim nginx-rc.yaml
[root@kub_master rc]# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.0.212:5000/nginx:1.13
        ports:
        - containerPort: 80
[root@kub_master rc]# kubectl create -f nginx-rc.yaml
replicationcontroller "myweb" created
[root@kub_master rc]# kubectl get rc -o wide
NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
myweb 2 2 2 5s myweb 192.168.0.212:5000/nginx:1.13 app=myweb
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-4l99t 1/1 Running 0 10s 172.16.66.2 192.168.0.208
myweb-4lmnh 1/1 Running 0 10s 172.16.81.3 192.168.0.212
test2 2/2 Running 0 43m 172.16.46.3 192.168.0.184
# This example sets the replica count to 2, so two pods are created. If one pod is deleted, a new one is created automatically to keep the count at exactly 2.
[root@kub_master rc]# kubectl delete pod myweb-4l99t
pod "myweb-4l99t" deleted
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-3kb26 1/1 Running 0 3s 172.16.46.2 192.168.0.184
myweb-4lmnh 1/1 Running 0 6m 172.16.81.3 192.168.0.212
test2 2/2 Running 0 50m 172.16.46.3 192.168.0.184
# The replica count of an RC can also be changed to scale the pods dynamically.
[root@kub_master rc]# kubectl scale rc myweb --replicas=3
replicationcontroller "myweb" scaled
[root@kub_master rc]# kubectl get rc -o wide
NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
myweb 3 3 3 9m myweb 192.168.0.212:5000/nginx:1.13 app=myweb
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-3kb26 1/1 Running 0 2m 172.16.46.2 192.168.0.184
myweb-4lmnh 1/1 Running 0 9m 172.16.81.3 192.168.0.212
myweb-s6x9n 1/1 Running 0 11s 172.16.66.2 192.168.0.208
test2 2/2 Running 0 52m 172.16.46.3 192.168.0.184
Once an RC has been defined and submitted to the k8s cluster, the controller manager component on the master node is notified and periodically inspects the Pods currently alive in the system, ensuring that the number of Pod instances exactly matches the RC's desired value. If too many replicas are running, some Pods are stopped; if too few, new Pods are created automatically. Through the RC, k8s provides high availability for user application clusters and greatly reduces the operations workload of system administrators.
3. RCs are associated with Pods through labels
[root@kub_master rc]# kubectl get rc -o wide
NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
myweb 3 3 3 9m myweb 192.168.0.212:5000/nginx:1.13 app=myweb
[root@kub_master rc]# kubectl describe pod myweb-3kb26
Name: myweb-3kb26
Namespace: default
Node: 192.168.0.184/192.168.0.184
Start Time: Tue, 22 Sep 2020 23:14:17 +0800
Labels: app=myweb
Status: Running
IP: 172.16.46.2
Controllers: ReplicationController/myweb
# Change the app label of pod test2 to myweb
[root@kub_master rc]# kubectl edit pod/test2
pod "test2" edited
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-3kb26 1/1 Running 0 8m 172.16.46.2 192.168.0.184
myweb-4lmnh 1/1 Running 0 15m 172.16.81.3 192.168.0.212
test2 2/2 Running 0 59m 172.16.46.3 192.168.0.184
Pod test2 has been taken over by the RC, and pod myweb-s6x9n was deleted automatically, so the RC keeps exactly 3 Pod replicas.
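The same label change can be made without opening an editor (a hedged alternative; kubectl label with --overwrite should have the same effect as editing the field by hand):
kubectl label pod test2 app=myweb --overwrite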
4. Deleting an RC
kubectl provides the stop and delete commands to remove an RC together with all the Pods it controls in one step.
[root@kub_master pod]# kubectl delete rc myweb
replicationcontroller "myweb" deleted
[root@kub_master pod]# kubectl get rc -o wide
No resources found.
5. Rolling upgrade with an RC
A rolling upgrade is a smooth, gradual way of upgrading: replicas are replaced step by step, keeping the overall system stable, so problems can be spotted and corrected early in the upgrade before their impact spreads. When the upgrade starts, an RC for version V2 is created from the supplied definition file; then, every 10 s (--update-period=10s), the number of V2 Pods is increased and the number of V1 Pods is decreased. When the upgrade completes, the V1 RC is deleted and the V2 RC is kept; that is the rolling upgrade.
[root@kub_master rc]# docker load -i docker_nginx1.15.tar.gz
8b15606a9e3e: Loading layer [==================================================>] 58.44 MB/58.44 MB
94ad191a291b: Loading layer [==================================================>] 54.35 MB/54.35 MB
92b86b4e7957: Loading layer [==================================================>] 3.584 kB/3.584 kB
Loaded image: docker.io/nginx:latest
[root@kub_master rc]# docker images |grep nginx
docker.io/nginx latest be1f31be9a87 24 months ago 109 MB
192.168.0.212:5000/nginx 1.13 ae513a47849c 2 years ago 109 MB
docker.io/nginx 1.13 ae513a47849c 2 years ago 109 MB
[root@kub_master rc]# docker tag docker.io/nginx:latest 192.168.0.212:5000/nginx:1.15
[root@kub_master rc]# docker push 192.168.0.212:5000/nginx:1.15
The push refers to a repository [192.168.0.212:5000/nginx]
92b86b4e7957: Pushed
94ad191a291b: Pushed
8b15606a9e3e: Pushed
1.15: digest: sha256:204a9a8e65061b10b92ad361dd6f406248404fe60efd5d6a8f2595f18bb37aad size: 948
[root@kub_master rc]# docker images |grep nginx
192.168.0.212:5000/nginx 1.15 be1f31be9a87 24 months ago 109 MB
docker.io/nginx latest be1f31be9a87 24 months ago 109 MB
192.168.0.212:5000/nginx 1.13 ae513a47849c 2 years ago 109 MB
docker.io/nginx 1.13 ae513a47849c 2 years ago 109 MB
# Rolling upgrade: create a new nginx-rc2.yaml
[root@kub_master rc]# cp nginx-rc.yaml nginx-rc2.yaml
[root@kub_master rc]# vim nginx-rc2.yaml
[root@kub_master rc]# cat nginx-rc2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb2
spec:
  replicas: 3
  selector:
    app: myweb2
  template:
    metadata:
      labels:
        app: myweb2
    spec:
      containers:
      - name: myweb2
        image: 192.168.0.212:5000/nginx:1.15
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@kub_master rc]# kubectl rolling-update myweb -f nginx-rc2.yaml --update-period=10s
Created myweb2
Scaling up myweb2 from 0 to 3, scaling down myweb from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling myweb2 up to 1
Scaling myweb down to 2
Scaling myweb2 up to 2
Scaling myweb down to 1
Scaling myweb2 up to 3
Scaling myweb down to 0
Update succeeded. Deleting myweb
replicationcontroller "myweb" rolling updated to "myweb2"
# Watch the pods change
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-3kb26 1/1 Running 0 23h 172.16.46.2 192.168.0.184
myweb-4lmnh 1/1 Running 0 23h 172.16.81.3 192.168.0.212
myweb-h4hr6 1/1 Running 0 15m 172.16.66.3 192.168.0.208
myweb2-pvc4w 1/1 Running 0 9s 172.16.46.3 192.168.0.184
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-3kb26 1/1 Running 0 23h 172.16.46.2 192.168.0.184
myweb-4lmnh 1/1 Running 0 23h 172.16.81.3 192.168.0.212
myweb2-k5njf 0/1 ContainerCreating 0 2s <none> 192.168.0.208
myweb2-pvc4w 1/1 Running 0 12s 172.16.46.3 192.168.0.184
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb-4lmnh 1/1 Running 0 23h 172.16.81.3 192.168.0.212
myweb2-ckbm2 1/1 Running 0 6s 172.16.81.4 192.168.0.212
myweb2-k5njf 1/1 Running 0 16s 172.16.66.2 192.168.0.208
myweb2-pvc4w 1/1 Running 0 26s 172.16.46.3 192.168.0.184
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb2-ckbm2 1/1 Running 0 15s 172.16.81.4 192.168.0.212
myweb2-k5njf 1/1 Running 0 25s 172.16.66.2 192.168.0.208
myweb2-pvc4w 1/1 Running 0 35s 172.16.46.3 192.168.0.184
# Roll back
[root@kub_master rc]# kubectl rolling-update myweb myweb2 --update-period=10s --rollback
Setting "myweb" replicas to 3
Continuing update with existing controller myweb.
Scaling up myweb from 2 to 3, scaling down myweb2 from 1 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling myweb up to 3
Scaling myweb2 down to 0
Update succeeded. Deleting myweb2
replicationcontroller "myweb" rolling updated to "myweb2"
# Check the pods
[root@kub_master rc]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myweb-3kb26 1/1 Running 0 22h
myweb-4lmnh 1/1 Running 0 22h
myweb-h4hr6 1/1 Running 0 2m
This demonstrates that: 1) an RC defines the Pod creation process and automatically controls the replica count; 2) an RC contains a complete Pod definition template; 3) an RC controls its Pod replicas through the Label Selector mechanism; 4) changing the replica count in an RC scales the Pods out or in; 5) changing the image version in the RC's Pod template performs a rolling upgrade of the Pods.
III. Service
The Service is also one of the core resource objects in k8s. A Service defines an access entry point for a service: frontend application Pods reach the cluster of Pod replicas behind it through this entry address, and the Service is wired "seamlessly" to its backend Pod replicas through a Label Selector. The RC, in effect, guarantees that the Service's serving capacity and quality stay at the expected level.
1. The three kinds of IP addresses in a Kubernetes cluster
1) Node IP: the IP address of a node, i.e. the address of its physical network interface. A Service of type NodePort opens a port on every node, so the Pods behind the Service can be reached from outside the cluster via NodeIP:NodePort.
2) Pod IP: the IP address of a Pod, i.e. the address of its Docker container; this is a virtual IP, allocated by the Docker engine from the docker bridge's address range, usually forming a virtual layer-2 network.
Pods behind the same Service can talk to each other directly via their Pod IPs;
Pods behind different Services communicate across the cluster via the Cluster IP;
Pods communicate with the world outside the cluster via a Node IP.
3) Cluster IP: the IP address of a Service; this is also a virtual IP. It cannot be pinged from outside and is only reachable from inside the Kubernetes cluster.
The Cluster IP applies only to the Kubernetes Service object and is managed and allocated by Kubernetes. A Cluster IP only becomes a usable endpoint when combined with the Service port; on its own it provides no connectivity, and these addresses belong to the closed space of the Kubernetes cluster.
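A quick way to see all three address types on this cluster (standard kubectl queries; output omitted):
kubectl get nodes          # Node IPs (in this setup the node names are the host IPs)
kubectl get pods -o wide   # Pod IPs
kubectl get svc            # Cluster IPs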
2. Creating a Service
[root@kub_master k8s]# mkdir svc
[root@kub_master k8s]# cd svc/
[root@kub_master svc]# vim nginx-svc.yaml
[root@kub_master svc]# cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
    targetPort: 80
  selector:
    app: myweb2
[root@kub_master svc]# kubectl create -f nginx-svc.yaml
service "myweb2" created
[root@kub_master svc]# kubectl get svc myweb2
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myweb2 192.168.207.224 <nodes> 80:30000/TCP 17s
[root@kub_master svc]# kubectl describe svc myweb2
Name: myweb2
Namespace: default
Labels: <none>
Selector: app=myweb2
Type: NodePort
IP: 192.168.207.224
Port: <unset> 80/TCP
NodePort: <unset> 30000/TCP
Endpoints: 172.16.46.3:80,172.16.66.2:80,172.16.81.4:80
Session Affinity: None
No events.
[root@kub_master svc]# kubectl get endpoints |grep myweb2
myweb2 172.16.46.3:80,172.16.66.2:80,172.16.81.4:80 4m
# Access the service
[root@kub_master svc]# curl 192.168.0.212:30000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
3. Automatic service discovery
[root@kub_master svc]# kubectl scale rc myweb2 --replicas=4
replicationcontroller "myweb2" scaled
[root@kub_master svc]# kubectl get endpoints |grep myweb2
myweb2 172.16.46.2:80,172.16.46.3:80,172.16.66.2:80 + 1 more... 7m
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb2-41n2v 1/1 Running 0 1m 172.16.46.2 192.168.0.184
myweb2-ckbm2 1/1 Running 0 53m 172.16.81.4 192.168.0.212
myweb2-k5njf 1/1 Running 0 53m 172.16.66.2 192.168.0.208
myweb2-pvc4w 1/1 Running 0 53m 172.16.46.3 192.168.0.184
4. Service load balancing
[root@kub_master rc]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myweb2-41n2v 1/1 Running 0 16m 172.16.46.2 192.168.0.184
myweb2-ckbm2 1/1 Running 0 1h 172.16.81.4 192.168.0.212
myweb2-k5njf 1/1 Running 0 1h 172.16.66.2 192.168.0.208
myweb2-pvc4w 1/1 Running 0 1h 172.16.46.3 192.168.0.184
[root@kub_master rc]# echo "node1" >index.html
[root@kub_master rc]# kubectl cp index.html myweb2-41n2v:/usr/share/nginx/html/index.html
[root@kub_master rc]# echo "node2" >index.html
[root@kub_master rc]# kubectl cp index.html myweb2-ckbm2:/usr/share/nginx/html/index.html
[root@kub_master rc]# echo "node3" >index.html
[root@kub_master rc]# kubectl cp index.html myweb2-k5njf:/usr/share/nginx/html/index.html
[root@kub_master rc]# echo "node4" >index.html
[root@kub_master rc]# kubectl cp index.html myweb2-pvc4w:/usr/share/nginx/html/index.html
# Access
[root@kub_master rc]# curl 192.168.0.212:30000
node4
[root@kub_master rc]# curl 192.168.0.212:30000
node1
[root@kub_master rc]# curl 192.168.0.212:30000
node2
[root@kub_master rc]# curl 192.168.0.212:30000
node1
[root@kub_master rc]# curl 192.168.0.212:30000
node3
[root@kub_master rc]# curl 192.168.0.184:30000
node3
[root@kub_master rc]# curl 192.168.0.184:30000
node2
[root@kub_master rc]# curl 192.168.0.184:30000
node3
[root@kub_master rc]# curl 192.168.0.184:30000
node4
[root@kub_master rc]# curl 192.168.0.184:30000
node1
[root@kub_master rc]# curl 192.168.0.208:30000
node4
[root@kub_master rc]# curl 192.168.0.208:30000
node2
[root@kub_master rc]# curl 192.168.0.208:30000
node3
[root@kub_master rc]# curl 192.168.0.208:30000
node4
[root@kub_master rc]# curl 192.168.0.208:30000
node4
[root@kub_master rc]# curl 192.168.0.208:30000
node1
5. Changing the nodePort range
# Edit the kube-apiserver configuration on the master and adjust the range:
vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
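For the new range to take effect, the API server has to be restarted (assuming the same systemd-managed packaging used for the other services in this setup; the unit name may differ):
systemctl restart kube-apiserver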
IV. The Deployment resource
A rolling upgrade through an RC can interrupt access to the service, which is why k8s introduced the Deployment resource. Compared with the RC, one of the Deployment's biggest improvements is that you can always query the current progress of a Pod "deployment".
1. Creating a Deployment
[root@kub_master k8s]# mkdir deployment
[root@kub_master k8s]# cd deployment/
[root@kub_master deployment]# vi nginx-deploy.yaml
[root@kub_master deployment]# cat nginx-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.0.212:5000/nginx:1.13
        ports:
        - containerPort: 80
[root@kub_master deployment]# kubectl create -f nginx-deploy.yaml
deployment "nginx-deployment" created
[root@kub_master deployment]# kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            0           33s
[root@kub_master ~]# kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-deployment-3445735607-jhnwm   1/1       Running   0          29s       172.16.46.2   192.168.0.184
nginx-deployment-3445735607-mn6tt   1/1       Running   0          29s       172.16.66.2   192.168.0.208
nginx-deployment-3445735607-x3803   1/1       Running   0          29s       172.16.81.3   192.168.0.212
2. Associating a Service
[root@kub_master ~]# kubectl expose deployment nginx-deployment --port=80 --type=NodePort
service "nginx-deployment" exposed
[root@kub_master ~]# kubectl get all -o wide
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-deployment   3         3         3            3           25m
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes         192.168.0.1       <none>        443/TCP        3d        <none>
svc/nginx-deployment   192.168.223.101   <nodes>       80:32401/TCP   1m        app=nginx
NAME                             DESIRED   CURRENT   READY   AGE   CONTAINER(S)   IMAGE(S)                        SELECTOR
rs/nginx-deployment-3445735607   3         3         3       25m   nginx          192.168.0.212:5000/nginx:1.13   app=nginx,pod-template-hash=3445735607
NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE
po/nginx-deployment-3445735607-jhnwm   1/1       Running   0          25m       172.16.46.2   192.168.0.184
po/nginx-deployment-3445735607-mn6tt   1/1       Running   0          25m       172.16.66.2   192.168.0.208
po/nginx-deployment-3445735607-x3803   1/1       Running   0          25m       172.16.81.3   192.168.0.212
# Access
[root@kub_master ~]# curl -I 192.168.0.212:32401
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 24 Sep 2020 15:06:10 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
[root@kub_master ~]# curl 192.168.0.212:32401
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
3. Upgrading the version
[root@kub_master deployment]# kubectl edit deployment nginx-deployment
deployment "nginx-deployment" edited
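The only change made in the editor is the container image in the Pod template, switching it to the 1.15 tag (this is confirmed by the ReplicaSet list below); the relevant part of the spec ends up as:
    spec:
      containers:
      - name: nginx
        image: 192.168.0.212:5000/nginx:1.15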
[root@kub_master ~]# kubectl get all -o wide
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-deployment   3         3         3            3           31m
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes         192.168.0.1       <none>        443/TCP        3d        <none>
svc/nginx-deployment   192.168.223.101   <nodes>       80:32401/TCP   7m        app=nginx
NAME                             DESIRED   CURRENT   READY   AGE   CONTAINER(S)   IMAGE(S)                        SELECTOR
rs/nginx-deployment-3445735607   0         0         0       31m   nginx          192.168.0.212:5000/nginx:1.13   app=nginx,pod-template-hash=3445735607
rs/nginx-deployment-3608264889   3         3         3       1m    nginx          192.168.0.212:5000/nginx:1.15   app=nginx,pod-template-hash=3608264889
NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE
po/nginx-deployment-3608264889-19hhv   1/1       Running   0          1m        172.16.46.3   192.168.0.184
po/nginx-deployment-3608264889-d2cpm   1/1       Running   0          1m        172.16.81.4   192.168.0.212
po/nginx-deployment-3608264889-qcw7l   1/1       Running   0          1m        172.16.66.3   192.168.0.208
# Access
[root@kub_master ~]# curl -I 192.168.0.212:32401
HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Thu, 24 Sep 2020 15:08:48 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT
Connection: keep-alive
ETag: "5bb38577-264"
Accept-Ranges: bytes
4. Viewing all historical revisions of the Deployment
[root@kub_master ~]# kubectl rollout history deployment nginx-deployment
deployments "nginx-deployment"
REVISION    CHANGE-CAUSE
1           <none>
2           <none>
5. Rolling back to the previous revision
[root@kub_master ~]# kubectl rollout undo deployment nginx-deployment
deployment "nginx-deployment" rolled back
[root@kub_master ~]# kubectl get all -o wide
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-deployment   3         3         3            3           39m
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes         192.168.0.1       <none>        443/TCP        3d        <none>
svc/nginx-deployment   192.168.223.101   <nodes>       80:32401/TCP   15m       app=nginx
NAME                             DESIRED   CURRENT   READY   AGE   CONTAINER(S)   IMAGE(S)                        SELECTOR
rs/nginx-deployment-3445735607   3         3         3       39m   nginx          192.168.0.212:5000/nginx:1.13   app=nginx,pod-template-hash=3445735607
rs/nginx-deployment-3608264889   0         0         0       9m    nginx          192.168.0.212:5000/nginx:1.15   app=nginx,pod-template-hash=3608264889
NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE
po/nginx-deployment-3445735607-5gqv2   1/1       Running   0          5s        172.16.81.3   192.168.0.212
po/nginx-deployment-3445735607-m6ct9   1/1       Running   0          5s        172.16.66.2   192.168.0.208
po/nginx-deployment-3445735607-ptk8d   1/1       Running   0          5s        172.16.46.2   192.168.0.184
# Access it again and check the version
[root@kub_master ~]# curl -I 192.168.0.212:32401
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Thu, 24 Sep 2020 15:19:04 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
6. Deployment best practice
# Release a version
[root@kub_master ~]# kubectl run nginx --image=192.168.0.212:5000/nginx:1.13 --replicas=3 --record
deployment "nginx" created
[root@kub_master ~]# kubectl get all -o wide
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx   3         3         3            3           3s
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes         192.168.0.1       <none>        443/TCP        4d        <none>
svc/nginx-deployment   192.168.223.101   <nodes>       80:32401/TCP   27m       app=nginx
NAME                 DESIRED   CURRENT   READY   AGE   CONTAINER(S)   IMAGE(S)                        SELECTOR
rs/nginx-291675973   3         3         3       3s    nginx          192.168.0.212:5000/nginx:1.13   pod-template-hash=291675973,run=nginx
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
po/nginx-291675973-38328   1/1       Running   0          3s        172.16.81.3   192.168.0.212
po/nginx-291675973-kzghl   1/1       Running   0          3s        172.16.46.2   192.168.0.184
po/nginx-291675973-rb5nx   1/1       Running   0          3s        172.16.66.2   192.168.0.208
# Upgrade the version
[root@kub_master ~]# kubectl set image deploy nginx nginx=192.168.0.212:5000/nginx:1.15
deployment "nginx" image updated
[root@kub_master ~]# kubectl get all -o wide
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx   3         3         3            3           2m
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes         192.168.0.1       <none>        443/TCP        4d        <none>
svc/nginx-deployment   192.168.223.101   <nodes>       80:32401/TCP   29m       app=nginx
NAME                 DESIRED   CURRENT   READY   AGE   CONTAINER(S)   IMAGE(S)                        SELECTOR
rs/nginx-291675973   0         0         0       2m    nginx          192.168.0.212:5000/nginx:1.13   pod-template-hash=291675973,run=nginx
rs/nginx-441491271   3         3         3       4s    nginx          192.168.0.212:5000/nginx:1.15   pod-template-hash=441491271,run=nginx
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
po/nginx-441491271-904qk   1/1       Running   0          4s        172.16.46.3   192.168.0.184
po/nginx-441491271-dx0zc   1/1       Running   0          3s        172.16.66.2   192.168.0.208
po/nginx-441491271-wr15g   1/1       Running   0          4s        172.16.81.4   192.168.0.212
# View the revision history
[root@kub_master ~]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION    CHANGE-CAUSE
1           kubectl run nginx --image=192.168.0.212:5000/nginx:1.13 --replicas=3 --record
2           kubectl set image deploy nginx nginx=192.168.0.212:5000/nginx:1.15
# Roll back to a specific revision
[root@kub_master ~]# kubectl rollout undo deployment nginx --to-revision=1
deployment "nginx" rolled back
[root@kub_master ~]# kubectl get all -o wide
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx   3         3         3            3           8m
NAME                   CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE       SELECTOR
svc/kubernetes         192.168.0.1       <none>        443/TCP        4d        <none>
svc/nginx-deployment   192.168.223.101   <nodes>       80:32401/TCP   35m       app=nginx
NAME                 DESIRED   CURRENT   READY   AGE   CONTAINER(S)   IMAGE(S)                        SELECTOR
rs/nginx-291675973   3         3         3       8m    nginx          192.168.0.212:5000/nginx:1.13   pod-template-hash=291675973,run=nginx
rs/nginx-441491271   0         0         0       6m    nginx          192.168.0.212:5000/nginx:1.15   pod-template-hash=441491271,run=nginx
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
po/nginx-291675973-417pk   1/1       Running   0          2m        172.16.81.3   192.168.0.212
po/nginx-291675973-4nf2h   1/1       Running   0          2m        172.16.46.2   192.168.0.184
po/nginx-291675973-bvc66   1/1       Running   0          2m        172.16.66.3   192.168.0.208
[root@kub_master ~]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION    CHANGE-CAUSE
2           kubectl set image deploy nginx nginx=192.168.0.212:5000/nginx:1.15
3           kubectl run nginx --image=192.168.0.212:5000/nginx:1.13 --replicas=3 --record