Day 03: etcd, Resources, and Storage

I. etcd

1. Introduction to etcd

  • etcd is an open-source project started by the CoreOS team in June 2013 with the goal of building a highly available distributed key-value store. Internally etcd uses the Raft protocol as its consensus algorithm, and it is implemented in Go.

Official site: https://etcd.io/

GitHub: https://github.com/etcd-io/etcd

Official docs: https://etcd.io/docs/v3.5/op-guide/maintenance/

Recommended hardware for etcd nodes:

  Medium environment (thousands of pods): 8 CPU cores, 16 GB/32 GB RAM
  Large environment: 16 CPU cores, 32 GB/64 GB RAM

  • etcd has the following properties:

    • Fully replicated: every node in the cluster has access to the complete data set

    • Highly available: etcd can be used to avoid single points of hardware failure and network problems

    • Consistent: every read returns the latest write across the cluster

    • Simple: a well-defined, user-facing API (gRPC)

    • Secure: automated TLS with optional client certificate authentication

    • Fast: benchmarked at 10,000 writes per second

    • Reliable: the Raft algorithm keeps the stored data properly distributed across the cluster

(Figure: how etcd works)

2. etcd configuration

cat /etc/systemd/system/etcd.service

The configuration is as follows:

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd  # data directory
ExecStart=/usr/local/bin/etcd \ # path to the etcd binary
  --name=etcd-192.168.248.103 \ # name of this node
  --cert-file=/etc/kubernetes/ssl/etcd.pem \  # certificates
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.248.103:2380 \ # peer URL advertised to the cluster
  --listen-peer-urls=https://192.168.248.103:2380 \ # listen URL for peer (cluster) traffic
  --listen-client-urls=https://192.168.248.103:2379,http://127.0.0.1:2379 \ # client listen addresses
  --advertise-client-urls=https://192.168.248.103:2379 \ # client URL advertised to clients
  --initial-cluster-token=etcd-cluster-0 \ # token used to bootstrap the cluster; must be identical on all nodes
  --initial-cluster=etcd-192.168.248.103=https://192.168.248.103:2380,etcd-192.168.248.104=https://192.168.248.104:2380,etcd-192.168.248.105=https://192.168.248.105:2380 \ # all cluster members
  --initial-cluster-state=new \ # "new" when bootstrapping a new cluster, "existing" when joining an existing one
  --data-dir=/var/lib/etcd \  # data directory path
  --wal-dir= \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --auto-compaction-mode=periodic \
  --max-request-bytes=10485760 \  # maximum request size, in bytes
  --quota-backend-bytes=8589934592 #  backend storage quota, in bytes
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
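
After changing the unit file, reload systemd and restart etcd; a minimal check sequence (unit name and paths as in the file above):

# Reload unit definitions and restart etcd after editing /etc/systemd/system/etcd.service
systemctl daemon-reload
systemctl restart etcd
# Confirm the service is active and scan the recent log for errors
systemctl status etcd --no-pager
journalctl -u etcd -n 20 --no-pager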

3. etcd leader election

Election overview

  • etcd elects cluster roles using the Raft algorithm; Raft is also used by Consul, InfluxDB, Kafka (KRaft), and others.

  • The Raft algorithm was proposed in 2013 by Diego Ongaro and John Ousterhout of Stanford University. Before Raft, Paxos was the best-known distributed consensus algorithm, but both its theory and its implementations are complex and hard to understand, so the two authors proposed Raft to address Paxos's problems, such as being hard to understand, hard to implement, and limited in extensibility.

  • Raft was designed for better readability, usability, and reliability, so that engineers can understand and implement it more easily. It adopts some of Paxos's core ideas but simplifies and modifies them; the design philosophy is to keep the algorithm as simple as possible while still providing strong consistency guarantees and high availability.

Node roles

Every node in the cluster is in exactly one of three states: Leader, Follower, or Candidate.

  • follower: a follower node (comparable to a slave node in Redis Cluster)

  • candidate: a candidate node, a role held only during an election

  • leader: the leader node (comparable to a master node in Redis Cluster)

After startup, the nodes vote for each other based on the term ID, an integer with a default value of 0. In Raft, a term represents one leader's period in office: whenever a node becomes leader, a new term begins, and every node increments its term ID to distinguish the new election round from the previous one.

Election process

Initial election:

  1. Each etcd node starts as a follower with a default term ID of 0; if it finds that the cluster has no leader, it switches to the candidate role and starts a leader election.
  2. Candidates send vote requests (RequestVote) to the other candidates and vote for themselves by default.
  3. The candidates receive each other's vote requests (A receives B's and C's, B receives A's and C's, C receives A's and B's) and compare logs: if the requester's log is more up to date than their own, they grant their vote and reply with a message containing their own latest log information. If C's log is the most up to date, C receives the votes of A, B, and C and wins unanimously; if B is down, C still wins with the votes of A and C, which is more than half.
  4. C sends leader heartbeats to the other nodes to maintain its leadership (heartbeat-interval, 100 ms by default).
  5. The other nodes switch to the follower role and replicate data from the leader.
  6. If the election times out (election-timeout), a new election is held; if two leaders end up elected, the one that holds more than half of the cluster's votes wins.

Subsequent elections:

  1. When a follower does not receive messages from the leader within the expected time, it switches to the candidate state, sends vote requests (containing its own term ID and log records) to the other nodes, and waits for their responses; if its log is the most up to date, it receives the majority of the votes and becomes the new leader.
  2. The new leader increments its term ID by 1 and announces it to the other nodes.
  3. If the old leader recovers and finds that a new leader already exists, it rejoins as a follower of the existing leader and updates its own term ID to match the leader's; within one term, all nodes share the same term ID.

4. etcd tuning

Parameter tuning

vim /etc/systemd/system/etcd.service

--max-request-bytes=10485760 # request size limit (maximum request size in bytes; a single key/value defaults to at most 1.5 MiB, and the official recommendation is not to exceed 10 MiB)
--quota-backend-bytes=8589934592 # storage size limit (backend disk quota in bytes; defaults to 2 GB, and etcd prints a warning at startup when the value exceeds 8 GB)

Cluster defragmentation

After etcd has been running for a long time, its on-disk storage can be defragmented to reclaim space:

root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl defrag --cluster --endpoints=https://192.168.248.103:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
Finished defragmenting etcd member[https://192.168.248.104:2379]
Finished defragmenting etcd member[https://192.168.248.105:2379]
Finished defragmenting etcd member[https://192.168.248.103:2379]
root@etcd01:~#
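
If the backend quota has already been exceeded (etcd raises a NOSPACE alarm and becomes read-only), defragmentation alone is not enough; a minimal sketch of the usual compact → defrag → disarm sequence, reusing the certificate paths from this environment (extracting the revision with grep is just one convenient way to get it):

# Current revision of one endpoint (used as the compaction target)
REV=$(ETCDCTL_API=3 /usr/local/bin/etcdctl endpoint status --write-out=json \
  --endpoints=https://192.168.248.103:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem \
  | grep -o '"revision":[0-9]\+' | grep -o '[0-9]\+')
# Compact the key-space history up to that revision, defragment every member, then clear the alarm
ETCDCTL_API=3 /usr/local/bin/etcdctl compact "$REV" --endpoints=https://192.168.248.103:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
ETCDCTL_API=3 /usr/local/bin/etcdctl defrag --cluster --endpoints=https://192.168.248.103:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem
ETCDCTL_API=3 /usr/local/bin/etcdctl alarm disarm --endpoints=https://192.168.248.103:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem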

5. Common etcd commands

Overview and help

etcd has several API versions. v1 is deprecated. etcd v2 and v3 are essentially two independent applications that share the same Raft protocol code: their APIs differ, their storage differs, and their data is isolated from each other. In other words, after upgrading from etcd v2 to etcd v3, the old v2 data can still only be accessed through the v2 API, and data created through the v3 API can only be accessed through the v3 API.

# Show etcdctl help
etcdctl --help
# Show help with the API version set explicitly
ETCDCTL_API=3 etcdctl --help

etcd cluster status

List cluster members

# List cluster members
root@etcd01:~# etcdctl member list
3e291ed462300d71, started, etcd-192.168.248.104, https://192.168.248.104:2380, https://192.168.248.104:2379, false
7f3b83695fc408cc, started, etcd-192.168.248.105, https://192.168.248.105:2380, https://192.168.248.105:2379, false
f7628b44ff01feef, started, etcd-192.168.248.103, https://192.168.248.103:2380, https://192.168.248.103:2379, false
root@etcd01:~#

# Output as a table
root@etcd01:~# etcdctl member list --write-out=table
+------------------+---------+----------------------+------------------------------+------------------------------+-----------
|        ID        | STATUS  |         NAME         |          PEER ADDRS          |         CLIENT ADDRS         | IS LEARNER
+------------------+---------+----------------------+------------------------------+------------------------------+-----------
| 3e291ed462300d71 | started | etcd-192.168.248.104 | https://192.168.248.104:2380 | https://192.168.248.104:2379 |      false
| 7f3b83695fc408cc | started | etcd-192.168.248.105 | https://192.168.248.105:2380 | https://192.168.248.105:2379 |      false
| f7628b44ff01feef | started | etcd-192.168.248.103 | https://192.168.248.103:2380 | https://192.168.248.103:2379 |      false
+------------------+---------+----------------------+------------------------------+------------------------------+-----------

Check endpoint health

# Node IPs
export NODE_IPS="192.168.248.103 192.168.248.104 192.168.248.105"
# Check the health of every endpoint
root@etcd01:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://192.168.248.103:2379 is healthy: successfully committed proposal: took = 6.703992ms
https://192.168.248.104:2379 is healthy: successfully committed proposal: took = 7.053345ms
https://192.168.248.105:2379 is healthy: successfully committed proposal: took = 7.22859ms

Show detailed endpoint status

This is mainly used to see which node is the leader.

export NODE_IPS="192.168.248.103 192.168.248.104 192.168.248.105"
root@etcd01:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl endpoint status --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --write-out=table; done
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.248.103:2379 | f7628b44ff01feef |   3.5.6 |  4.0 MB |      true |      false |        15 |     367360 |             367360 |        |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.248.104:2379 | 3e291ed462300d71 |   3.5.6 |  4.1 MB |     false |      false |        15 |     367360 |             367360 |        |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.248.105:2379 | 7f3b83695fc408cc |   3.5.6 |  4.0 MB |     false |      false |        15 |     367360 |             367360 |        |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

A value of true in the IS LEADER column marks the leader.

Inspect the data stored in etcd

# List all keys (keys only)
ETCDCTL_API=3 etcdctl get / --prefix --keys-only

# Show pod-related keys
ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep pod
# Show namespace keys:
root@etcd01:~# ETCDCTL_API=3 etcdctl  get / --prefix --keys-only |grep namespaces
/registry/namespaces/default
/registry/namespaces/kube-node-lease

# Show deployment controller keys:
root@etcd01:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep deployment
/calico/resources/v3/projectcalico.org/profiles/ksa.kube-system.deployment-controller

# Show calico-related keys:
root@etcd01:~# ETCDCTL_API=3 etcdctl get / --prefix --keys-only | grep calico
/calico/ipam/v2/assignment/ipv4/block/10.200.122.128-26

Read the value of a key

ETCDCTL_API=3 etcdctl   get /registry/pods/kube-system/calico-node-4wdgk

Create, read, update, and delete

# Add data
root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /name "add test"
OK
# Read the new key back
root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl get /name
/name
add test
# Update the value; put overwrites the existing data
root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /name "test2222"
OK
root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl get /name
/name
test2222
root@etcd01:~#
# Delete the data
root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl del /name
1
root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl get /name
root@etcd01:~#

The etcd watch mechanism

etcd continuously watches data and proactively notifies clients when it changes. The etcd v3 watch mechanism supports watching a single fixed key as well as watching a range (prefix).
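
Besides a single key, a whole prefix can be watched; a minimal sketch (the key prefix here is just an example):

# Watch every key under /registry/pods; --prev-kv also prints the previous value of each changed key
ETCDCTL_API=3 /usr/local/bin/etcdctl watch --prefix /registry/pods --prev-kv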

Demonstrating the watch mechanism:

# Watch a key on etcd node1; the key does not have to exist yet and can be created later
root@etcd01:~# ETCDCTL_API=3 /usr/local/bin/etcdctl watch /watchtest

The watched key on node1 has no data yet.

# Put a value from etcd node2
root@etcd02:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /watchtest  "11111"

Back on node1, the watch has already received the new value.

Modify the value again:

root@etcd02:~# ETCDCTL_API=3 /usr/local/bin/etcdctl put /watchtest  "222222"

Check the watch output on node1 again; the update is shown.

6. etcd backup and restore

WAL stands for write-ahead log: before a real write is performed, the change is first recorded in a log.

wal: stores the write-ahead log; its main purpose is to record the complete history of all data changes. In etcd, every modification must be written to the WAL before it is committed.
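
Inside the data directory, both the snapshots and the WAL live under member/; a quick look (the directory layout is created by etcd itself, file names will differ per cluster):

# snap/ holds periodic snapshots of the keyspace, wal/ holds the write-ahead log segments
ls -lh /var/lib/etcd/member/snap /var/lib/etcd/member/wal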

Manual backup

Backup notes

# Take a backup; normally any single node is enough, but backing up on every node is safer
etcdctl snapshot save /tmp/etcd.db

Backup script

#!/bin/bash
source /etc/profile
DATE=`date +%Y-%m-%d_%H-%M-%S`
ETCDCTL_API=3 /usr/local/bin/etcdctl snapshot save /data/etcd-backup-dir/etcd-snapshot-${DATE}.db
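
After saving a snapshot it is worth verifying it; a small sketch using etcdctl snapshot status (etcd 3.5 also ships etcdutl for the same purpose):

# Verify the newest backup file: prints its hash, revision, total keys, and size
LATEST=$(ls -t /data/etcd-backup-dir/etcd-snapshot-*.db | head -1)
ETCDCTL_API=3 /usr/local/bin/etcdctl snapshot status "$LATEST" --write-out=table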

Restore notes

Single-node restore:

For a single-node deployment, you can restore directly with etcdctl snapshot restore:

# Restore the backup into a new directory; the directory must not already exist
ETCDCTL_API=3 etcdctl snapshot restore /tmp/snapshot.db --data-dir=/opt/etcd-testdir
# Then stop etcd, copy /opt/etcd-testdir/member into etcd's default data directory, and restart etcd

Cluster restore:

For a multi-node deployment, restoring with etcdctl snapshot restore alone is not enough: extra cluster parameters are needed, and the restore must be run on every node.

First copy the backup file to every node:

scp /tmp/etcd.db 192.168.248.104:/tmp/
scp /tmp/etcd.db 192.168.248.105:/tmp/

Run on the etcd1 node:

ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd.db \
  --name etcd-192.168.248.103 \
  --initial-cluster=etcd-192.168.248.103=https://192.168.248.103:2380,etcd-192.168.248.104=https://192.168.248.104:2380,etcd-192.168.248.105=https://192.168.248.105:2380  \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.248.103:2380

Run on the etcd2 node:

Note: change --name and --initial-advertise-peer-urls accordingly.

ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd.db \
  --name etcd-192.168.248.104 \
  --initial-cluster=etcd-192.168.248.103=https://192.168.248.103:2380,etcd-192.168.248.104=https://192.168.248.104:2380,etcd-192.168.248.105=https://192.168.248.105:2380  \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.248.104:2380 

Run on the etcd3 node:

Note: change --name and --initial-advertise-peer-urls accordingly.

ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd.db \
  --name etcd-192.168.248.105 \
  --initial-cluster=etcd-192.168.248.103=https://192.168.248.103:2380,etcd-192.168.248.104=https://192.168.248.104:2380,etcd-192.168.248.105=https://192.168.248.105:2380  \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.248.105:2380 

Verifying backup and restore

# Create a test deployment from a k8s master node
root@k8s-master01:~# kubectl  create deployment test-etcdbak --image=nginx

root@k8s-master01:~# kubectl  get deployment,pod
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-etcdbak   1/1     1            1           9m21s

NAME                                READY   STATUS    RESTARTS   AGE
pod/test-etcdbak-6d998bbb66-vq79c   1/1     Running   0          9m21s
root@k8s-master01:~#

Take a backup

# Run on the etcd1 node
etcdctl snapshot save /tmp/etcdbaktest.db
# Copy the backup file to the other two nodes
scp /tmp/etcdbaktest.db 192.168.248.104:/tmp/
scp /tmp/etcdbaktest.db 192.168.248.105:/tmp/
# Delete the test deployment from a k8s node
root@k8s-master01:~# kubectl  delete deployment test-etcdbak
deployment.apps "test-etcdbak" deleted
root@k8s-master01:~# kubectl  get deployment,pod
No resources found in default namespace.

Restore

Note: this is a destructive operation; proceed with caution.

# Stop etcd on all three nodes
systemctl  stop etcd
# Move the old data aside on all three etcd nodes (destructive; be careful)
mv /var/lib/etcd /data/bak/etcd/etcd-`date +%Y-%m-%d_%H-%M-%S`
# etcd node 1
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcdbaktest.db \
  --name etcd-192.168.248.103 \
  --initial-cluster=etcd-192.168.248.103=https://192.168.248.103:2380,etcd-192.168.248.104=https://192.168.248.104:2380,etcd-192.168.248.105=https://192.168.248.105:2380  \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.248.103:2380 \
  --data-dir=/var/lib/etcd
# etcd node 2
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcdbaktest.db \
  --name etcd-192.168.248.104 \
  --initial-cluster=etcd-192.168.248.103=https://192.168.248.103:2380,etcd-192.168.248.104=https://192.168.248.104:2380,etcd-192.168.248.105=https://192.168.248.105:2380  \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.248.104:2380 \
  --data-dir=/var/lib/etcd
# etcd node 3
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcdbaktest.db \
  --name etcd-192.168.248.105 \
  --initial-cluster=etcd-192.168.248.103=https://192.168.248.103:2380,etcd-192.168.248.104=https://192.168.248.104:2380,etcd-192.168.248.105=https://192.168.248.105:2380  \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.248.105:2380 \
  --data-dir=/var/lib/etcd

Test

# Start etcd on all three nodes
systemctl  start etcd
# Check etcd status
systemctl  status etcd
etcdctl member list --write-out=table
export NODE_IPS="192.168.248.103 192.168.248.104 192.168.248.105"
for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
# Check from k8s whether the deleted deployment is back
root@k8s-master01:~# kubectl  get deployment,pod
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-etcdbak   1/1     1            1           38m

NAME                                READY   STATUS    RESTARTS   AGE
pod/test-etcdbak-6d998bbb66-vq79c   1/1     Running   0          38m

Backup and restore with kubeasz

Create a test resource

# Create a test resource as before
kubectl  create deployment kubeasz-etcdbak --image=nginx

Back up etcd with ezctl

# Run on the deploy node
root@harbor02:~# /etc/kubeasz/ezctl  backup k8s-cluster1
# Check the generated backup files. Each backup produces two .db files: one with a timestamp and one default restore file (always the latest backup)
root@harbor02:~# ll /etc/kubeasz/clusters/k8s-cluster1/backup/
total 9504
drwxr-xr-x 2 root root    4096 May 22 14:12 ./
drwxr-xr-x 5 root root    4096 May 17 06:10 ../
-rw------- 1 root root 4857888 May 22 14:12 snapshot.db
-rw------- 1 root root 4857888 May 22 14:12 snapshot_202305221412.db

Delete the test resource

kubectl  delete deployment kubeasz-etcdbak

Restore etcd with ezctl

By default the restore reads /etc/kubeasz/clusters/k8s-cluster1/backup/snapshot.db, which is the most recently generated backup file; if several backups have been taken, the last one wins.

To restore a different backup, copy that backup file over /etc/kubeasz/clusters/k8s-cluster1/backup/snapshot.db first.

# To restore a different backup, copy it to `/etc/kubeasz/clusters/k8s-cluster1/backup/snapshot.db`
cd /etc/kubeasz/clusters/k8s-cluster1/backup/
cp snapshot_202305221412.db snapshot.db
# Once the backup file is in place, run the restore; note that the restore stops etcd
root@harbor02:~# /etc/kubeasz/ezctl restore k8s-cluster1

After the restore completes, check whether the test resource is back:

root@k8s-master01:~# kubectl  get pod
NAME                               READY   STATUS    RESTARTS   AGE
kubeasz-etcdbak-7bf86bcfd7-r69vt   1/1     Running   0          25m

Full cluster recovery

When more than half of the etcd members are down (for example two out of three), the whole cluster becomes unavailable and the data has to be restored. The recovery procedure is as follows:

  1. Restore the server operating systems
  2. Redeploy the etcd cluster
  3. Stop kube-apiserver / kube-controller-manager / kube-scheduler / kubelet / kube-proxy
  4. Stop the etcd cluster
  5. Restore the same backup on every etcd node
  6. Start each node and verify the etcd cluster
  7. Start kube-apiserver / kube-controller-manager / kube-scheduler / kubelet / kube-proxy
  8. Verify the Kubernetes master status and the pod data

II. CoreDNS

1. Introduction

Historically there have been three DNS components: skydns, kube-dns, and CoreDNS. Kubernetes used skydns before v1.3 and kube-dns from v1.3 onward; CoreDNS replaced kube-dns as the default DNS server (it has been the default since v1.13) and is the component used by current releases. The DNS component resolves the service names in a Kubernetes cluster to their corresponding IP addresses.

Project links:

https://github.com/coredns/coredns

https://coredns.io

https://coredns.io/plugins/

Resource configuration:

The default resources are:

        resources:
          limits:
            memory: 256Mi
            cpu: 200m
          requests:
            cpu: 100m
            memory: 70Mi

If DNS queries are frequent, the resources can be increased; 2 CPU cores and 1 GB of memory is usually enough:

        resources:
          limits:
            memory: 1024Mi
            cpu: 2000m
          requests:
            cpu: 2000m
            memory: 1024Mi

2. Plugins

errors: errors are written to standard output.

health: reports the health of the CoreDNS service at http://localhost:8080/health.

ready: listens on port 8181; once all CoreDNS plugins are ready, requests to this endpoint return 200 OK.

kubernetes: CoreDNS answers DNS queries based on Kubernetes service names and returns the records to the client.

prometheus: CoreDNS metrics are exposed in Prometheus key-value format at http://localhost:9153/metrics.

forward: queries for any name that does not belong to the Kubernetes cluster are forwarded to the predefined upstream servers, e.g. /etc/resolv.conf or an IP address such as 8.8.8.8.

cache: enables caching of service lookups; the value is in seconds.

loop: detects resolution loops, e.g. CoreDNS forwarding to an internal DNS server that forwards back to CoreDNS; if a loop is found, the CoreDNS process is forcibly stopped (and Kubernetes recreates it).

reload: watches the Corefile for changes; after the ConfigMap is edited, the new configuration is gracefully reloaded, by default about two minutes later.

loadbalance: round-robins DNS answers; if a name has multiple records, their order is rotated between responses.
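
A quick way to poke the ready and metrics endpoints from a cluster node, assuming a standard CoreDNS deployment labelled k8s-app=kube-dns (ports 8181 and 9153 as described above):

# Find a CoreDNS pod IP
POD_IP=$(kubectl -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].status.podIP}')
# ready plugin (8181) and Prometheus metrics (9153)
curl -s http://${POD_IP}:8181/ready
curl -s http://${POD_IP}:9153/metrics | head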

# Edit the CoreDNS ConfigMap
kubectl  edit cm -n kube-system coredns
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        #forward . /etc/resolv.conf {
        forward . 223.6.6.6 {
            max_concurrent 1000
        }
        cache 600
        loop
        reload
        loadbalance
    }
        myserver.online {
          forward . 172.16.16.16:53
        }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n    errors\n    health {\n        lameduck 5s\n    }\n    ready\n    kubernetes cluster.local in-addr.arpa ip6.arpa {\n        pods insecure\n        fallthrough in-addr.arpa ip6.arpa\n        ttl 30\n    }\n    prometheus :9153\n    #forward . /etc/resolv.conf {\n    forward . 223.6.6.6 {\n        max_concurrent 1000\n    }\n    cache 600\n    loop\n    reload\n    loadbalance\n}\n    myserver.online {\n      forward . 172.16.16.16:53\n    }\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2023-05-22T17:02:34Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: coredns
  namespace: kube-system
  resourceVersion: "52990"
  uid: 160408ac-67fb-4a23-a8e6-2ed8b3c06de2

3. Name resolution

Names are resolved by CoreDNS first; names it cannot resolve itself are forwarded to the internal PowerDNS, and as a last resort they are resolved through 223.6.6.6.

# Create a test pod
kubectl run net-test1 --image=centos:7.9.2009 sleep 360000
# Enter the container
kubectl exec -it net-test1 bash
# Ping a local pod IP and an external domain
[root@net-test1 /]# ping 10.200.58.196
PING 10.200.58.196 (10.200.58.196) 56(84) bytes of data.
64 bytes from 10.200.58.196: icmp_seq=1 ttl=62 time=0.429 ms
64 bytes from 10.200.58.196: icmp_seq=2 ttl=62 time=0.371 ms

--- 10.200.58.196 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.371/0.400/0.429/0.029 ms
[root@net-test1 /]# ping baidu.com
PING baidu.com (110.242.68.66) 56(84) bytes of data.
64 bytes from 110.242.68.66 (110.242.68.66): icmp_seq=1 ttl=127 time=66.2 ms

Test

[root@net-test1 /]# yum install net-tools bind-utils
[root@net-test1 /]# nslookup kubernetes
Server:         10.100.0.2
Address:        10.100.0.2#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.100.0.1

[root@net-test1 /]# nslookup kube-dns.kube-system.svc.cluster.local
Server:         10.100.0.2
Address:        10.100.0.2#53

Name:   kube-dns.kube-system.svc.cluster.local
Address: 10.100.0.2

[root@net-test1 /]#

If a name is not fully qualified, the resolver automatically expands it using the search domains:

[root@net-test1 /]# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.100.0.2
options ndots:5
[root@net-test1 /]#
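
Because of the search list and ndots:5, a short name is first expanded with each search domain before being tried as-is; a small sketch run inside the test pod (only the built-in kubernetes and kube-dns services are assumed to exist):

# Short name: expanded to kubernetes.default.svc.cluster.local via the search list
nslookup kubernetes
# Cross-namespace short form: <service>.<namespace> also resolves through the search list
nslookup kube-dns.kube-system
# Fully qualified name with a trailing dot: looked up exactly as written, no search expansion
nslookup kube-dns.kube-system.svc.cluster.local.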

III. Kubernetes Resources

1. Kubernetes design principles

API design principles

Reference: https://www.kubernetes.org.cn/kubernetes设计理念

  • All APIs should be declarative.
  • API objects should be complementary and composable.
  • High-level APIs should be designed around operational intent.
  • Low-level APIs should be designed according to the control needs of the high-level APIs.
  • Avoid simple encapsulations; there should be no hidden internal mechanisms that cannot be observed through the external API.
  • API operation complexity should be proportional to the number of objects involved.
  • API object state must not depend on network connectivity.
  • Avoid making operational mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard.

API overview

Built-in APIs: the API resources that ship with a Kubernetes cluster once it is deployed.
Custom resources: CRDs (CustomResourceDefinition), APIs added after deployment by installing additional components.

2. Pod

Pod overview

  • A Pod is the smallest deployable unit in Kubernetes
  • A Pod can run one or more containers
  • When a Pod runs multiple containers, they are scheduled together
  • The containers in a Pod share the same network namespace
  • A Pod's lifecycle is ephemeral; it does not self-heal and is destroyed when it is done
  • Pods are normally created and managed through a controller

Creating a Pod from YAML

apiVersion: v1 # API version
kind: Pod # resource type
metadata: # metadata
  name: nginx # pod name
spec:  # pod spec
  containers:   # container list
  - name: nginx # container name
    image: nginx:1.20.2-alpine # image
    ports: # container ports
    - containerPort: 80
  - name: mytest-container2 # second container
    image: alpine
    command: ['sh','-c','/bin/sleep 1000'] # command the container runs
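
A minimal sketch of applying this manifest and reaching a specific container (the file name pod.yaml is assumed):

kubectl apply -f pod.yaml
kubectl get pod nginx -o wide
# With more than one container in the pod, select the container with -c
kubectl exec -it nginx -c mytest-container2 -- sh
kubectl logs nginx -c nginx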

3.Job & CronJob

Job: a run-to-completion (one-off) task

Typical use case: one-off data initialization, e.g. importing seed data the first time MySQL runs.

Demo: create a Job that runs an echo command in the container and appends the output to a file under a mounted directory.

apiVersion: batch/v1
kind: Job
metadata:
  name: job-test2
spec:
  template:
    spec:
      containers:
      - name: job-test2
        image: harbor.linuxarchitect.io/base_images/centos:7.9.2009
        command: ["/bin/sh"]
        args: ["-c", "echo data init job at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
        - mountPath: /etc/localtime
          name: localtime
      volumes:
      - name: cache-volume
        hostPath:
          path: /tmp/jobdata
      - name: localtime
        hostPath:
          path: /etc/localtime
      restartPolicy: Never
# Apply
kubectl  apply -f job.yaml
# Check
root@k8s-master01:~# kubectl  get pod  -o wide |grep job-test
job-test2-2m5fs                    0/1     Completed   0          2m25s   10.200.217.65   k8s-node04   <none>           <none>
root@k8s-master01:~#

Log in to the k8s-node04 node and check whether /tmp/jobdata/data.log contains the output:

root@k8s-node04:~# cat /tmp/jobdata/data.log
data init job at 2023-05-23_15-00-53
root@k8s-node04:~#

CronJob

CronJob: scheduled tasks, just like crontab on Linux.

Typical use case: backups, e.g. scheduled MySQL backups.

Demo: create a CronJob that runs an echo command every minute and appends the output to a file under a mounted directory.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-mysql-databackup
spec:
  #schedule: "30 2 * * *"
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob-mysql-databackup-pod
            image: harbor.linuxarchitect.io/base_images/centos:7.9.2009
            command: ["/bin/sh"]
            args: ["-c", "echo mysql databackup cronjob at `date +%Y-%m-%d_%H-%M-%S` >> /cache/data.log"]
            volumeMounts:
            - mountPath: /cache
              name: cache-volume
            - mountPath: /etc/localtime
              name: localtime
          volumes:
          - name: cache-volume
            hostPath:
              path: /tmp/cronjobdata
          - name: localtime
            hostPath:
              path: /etc/localtime
          restartPolicy: OnFailure
kubectl  apply -f cronjob.yaml
# Check the pods
root@k8s-master01:~# kubectl  get pod -o wide |grep cron
cronjob-mysql-databackup-28080434-8p2vl   0/1     Completed   0          83s   10.200.85.200    k8s-node01   <none>           <none>
cronjob-mysql-databackup-28080435-x6s4r   0/1     Completed   0          23s   10.200.135.134   k8s-node03   <none>           <none>

Check that new entries are being appended:

root@k8s-node01:~# cat /tmp/cronjobdata/data.log
mysql databackup cronjob at 2023-05-23_15-14-00
mysql databackup cronjob at 2023-05-23_15-18-00
mysql databackup cronjob at 2023-05-23_15-19-00
root@k8s-node01:~#

4. Replica controllers

RC

First-generation replica controller; its label selector only supports = and !=.

It only manages pod replicas; it does not support rollbacks or rolling updates.

References:

https://kubernetes.io/zh/docs/concepts/workloads/controllers/replicationcontroller/

https://kubernetes.io/zh/docs/concepts/overview/working-with-objects/labels/

Demo:

apiVersion: v1  
kind: ReplicationController  
metadata:  
  name: ng-rc
spec:  
  replicas: 2
  selector:  
    app: ng-rc-80 
    #app1: ng-rc-81
  template:   
    metadata:  
      labels:  
        app: ng-rc-80
        #app1: ng-rc-81
    spec:  
      containers:  
      - name: ng-rc-80 
        image: nginx  
        ports:  
        - containerPort: 80 
root@k8s-master01:~# kubectl  apply -f  rc.yaml
replicationcontroller/ng-rc created
root@k8s-master01:~# kubectl  get rc
NAME    DESIRED   CURRENT   READY   AGE
ng-rc   2         2         0       3s

root@k8s-master01:~# kubectl  get pod |grep rc
ng-rc-4p425                        1/1     Running     0          3m43s
ng-rc-kg2ld                        1/1     Running     0          3m43s
root@k8s-master01:~#

Verify that replica control works:

# Delete one pod and check whether a new one is started automatically
root@k8s-master01:~# kubectl  delete pod ng-rc-4p425
pod "ng-rc-4p425" deleted
# After the deletion, a replacement pod is already being created
root@k8s-master01:~# kubectl  get pod |grep rc
ng-rc-kg2ld                        1/1     Running             0          7m15s
ng-rc-r4vk8                        0/1     ContainerCreating   0          7s
root@k8s-master01:~#

RS

Second-generation replica controller; its selector supports not only exact matching with = and != but also set-based matching with in and notin (set-based matching is rarely needed).

Like the RC, it only manages pod replicas; it does not support rollbacks or rolling updates.

References:

https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/replicaset/

Demo:

apiVersion: apps/v1 
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    #matchLabels:
    #  app: ng-rs-80
    matchExpressions:
      - {key: app, operator: In, values: [ng-rs-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-rs-80
    spec:  
      containers:  
      - name: ng-rs-80 
        image: nginx  
        ports:  
        - containerPort: 80
root@k8s-master01:~# kubectl  apply -f rs.yaml
root@k8s-master01:~# kubectl  get rs |grep frontend
frontend                     2         2         2       21s
root@k8s-master01:~# kubectl  get pod |grep frontend
frontend-dqh8k                     1/1     Running     0          32s
frontend-jljt7                     1/1     Running     0          32s
root@k8s-master01:~#

Verify that replica control works:

# Delete one pod and check whether a new one is started automatically
root@k8s-master01:~# kubectl  delete pod frontend-dqh8k
pod "frontend-dqh8k" deleted
# After the deletion, a replacement pod is already being created
root@k8s-master01:~#  kubectl  get pod |grep frontend
frontend-7j6lz                     1/1     Running     0          12s
frontend-jljt7                     1/1     Running     0          5m4s
root@k8s-master01:~#

Deployment

Third-generation pod controller, one level above the ReplicaSet: in addition to everything an RS can do, it provides many advanced features, most importantly rolling updates and rollbacks.

References:

https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.16.1
        ports:
        - containerPort: 80

Rolling update demo:

First create pods running nginx version 1.16.1:

root@k8s-master01:~# kubectl  apply -f deployment.yaml
# Check the pods
root@k8s-master01:~# kubectl  get pod |grep nginx
nginx-deployment-7fb9ddcd79-jgndq   1/1     Running   0          9m7s
nginx-deployment-7fb9ddcd79-k4wk9   1/1     Running   0          8m27s
root@k8s-master01:~#

Edit the YAML and upgrade nginx to nginx:1.20.1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.20.1 # upgraded version
        ports:
        - containerPort: 80
# Apply again
root@k8s-master01:~# kubectl  apply -f deployment.yaml
deployment.apps/nginx-deployment configured
# After applying, a new pod is being created while the old pods have not been removed yet
root@k8s-master01:~# kubectl  get pod |grep nginx
nginx-deployment-5fd8584d98-5vplc   0/1     ContainerCreating   0          2s
nginx-deployment-7fb9ddcd79-jgndq   1/1     Running             0          11m
nginx-deployment-7fb9ddcd79-k4wk9   1/1     Running             0          11m

# A while later the pods have been recreated
root@k8s-master01:~# kubectl  get pod |grep nginx
nginx-deployment-5fd8584d98-5vplc   1/1     Running   0          62s
nginx-deployment-5fd8584d98-pv6dt   1/1     Running   0          40s
# Check the nginx version to confirm the upgrade
root@k8s-master01:~# kubectl  exec -it nginx-deployment-5fd8584d98-pv6dt  -- nginx -v
nginx version: nginx/1.20.1

After the upgrade, test rolling back to version 1.16.1.

# View the rollout history
kubectl rollout history deployment nginx-deployment

# Roll back to a specific revision
kubectl rollout undo deployment nginx-deployment --to-revision=3
# Check
root@k8s-master01:~# kubectl  get pod |grep nginx
nginx-deployment-5fd8584d98-5vplc   1/1     Running             0          9m51s
nginx-deployment-68657ff5c6-hnqsk   1/1     Running             0          26s
nginx-deployment-7fb9ddcd79-sz62t   0/1     ContainerCreating   0          9s
root@k8s-master01:~#
# Confirm the rollback succeeded
root@k8s-master01:~# kubectl  exec -it nginx-deployment-7fb9ddcd79-sz62t -- nginx -v
nginx version: nginx/1.16.1
root@k8s-master01:~#

5. Service

Overview

  • Why Services are needed

    Pods are ephemeral: every time a Pod is created or deleted it gets a new IP. This raises a problem: if one group of Pods provides a service to other Pods, how do clients reach it at a stable address? A Service decouples the application from its consumers; it works by dynamically matching backend endpoints through label selectors.

  • A Service is associated with Pods through label selectors

    kube-proxy watches the kube-apiserver; whenever a Service resource changes (through the Kubernetes API), kube-proxy adjusts the corresponding load-balancing rules, so the Service always reflects the latest state.

  • kube-proxy has three proxy modes

    • userspace: before Kubernetes 1.1

    • iptables: the default from v1.2 until v1.11

    • ipvs: since Kubernetes 1.11; if IPVS is not enabled, kube-proxy automatically falls back to iptables (a quick way to check the active mode is sketched below)
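
A hedged way to confirm which mode kube-proxy is actually using on a node (the metrics port 10249 and the presence of ipvsadm depend on how the cluster was installed):

# kube-proxy reports its active mode on its metrics port (10249 by default)
curl -s http://127.0.0.1:10249/proxyMode
# In ipvs mode the virtual servers created for each Service are visible with ipvsadm
ipvsadm -Ln | head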

Service types

  • ClusterIP: used for internal access to services by service name.

  • NodePort: used so that clients outside the Kubernetes cluster can actively access services running inside the cluster.

  • LoadBalancer: used to expose services in public cloud environments.

  • ExternalName: maps a service outside the cluster to a service name inside the cluster, so that pods in the cluster can reach the external service through a fixed service name; it is also sometimes used so that pods can reach services in another namespace through an ExternalName Service (a minimal example follows this list).
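
A minimal ExternalName sketch (the service name and external domain below are made up for illustration):

# Creates a Service whose in-cluster name my-external-db.myserver.svc.cluster.local resolves as a CNAME to db.example.com
kubectl -n myserver create service externalname my-external-db --external-name db.example.com
kubectl -n myserver get svc my-external-db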

ClusterIP

Example:

First create an nginx Deployment to use in the demo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.20.1
        ports:
        - containerPort: 80

Create a Service of type ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  type: ClusterIP # service type
  selector: # label selector
    app: ng-deploy-80 # must match the pod labels; multiple labels are allowed
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

Check and test

# Apply
root@k8s-master01:~# kubectl  apply -f svc-cluserip.yaml
# Check the service
root@k8s-master01:~# kubectl  get svc -o wide |grep ng
ng-deploy-80   ClusterIP   10.100.122.126   <none>        80/TCP    80s     app=ng-deploy-80
# Access using a pod IP
root@k8s-master01:~# kubectl  get pod  -o wide|grep nginx
nginx-deployment-7fb9ddcd79-sz62t   1/1     Running   0          17h   10.200.135.142   k8s-node03   <none>           <none>
nginx-deployment-7fb9ddcd79-xcr6g   1/1     Running   0          17h   10.200.217.77    k8s-node04   <none>           <none>
root@k8s-master01:~# curl  10.200.135.142
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@k8s-master01:~#

Access the service via its ClusterIP or its DNS name from inside the cluster:
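
A ClusterIP is only reachable from inside the cluster (nodes and pods); a small sketch using the ClusterIP shown above and the service DNS name (the IP will differ per cluster):

# From a cluster node: kube-proxy forwards the ClusterIP to one of the pod endpoints
curl -s http://10.100.122.126/ | head
# From inside any pod in the default namespace: the service name resolves via CoreDNS
kubectl run curl-test --image=centos:7.9.2009 --rm -it --restart=Never -- curl -s http://ng-deploy-80.default.svc.cluster.local/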

NodePort

NodePort exposes the application externally by opening a port on every node, so the service can be reached from outside the cluster; a stable internal ClusterIP is still allocated as well. Access address: <NodeIP>:<NodePort>. If no nodePort is specified, one is allocated from the default range 30000-32767.

apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  type: NodePort # service type
  selector: # label selector
    app: ng-deploy-80 # must match the pod labels; multiple labels are allowed
  ports:
  - name: http
    port: 80 # service port
    targetPort: 80 # container port
    nodePort: 30008 # fixed nodePort
    protocol: TCP
# Check the service
root@k8s-master01:~# kubectl  get svc |grep ng
ng-deploy-80   NodePort    10.100.168.238   <none>        80:30008/TCP   8s
root@k8s-master01:~#
# Check the corresponding endpoints
root@k8s-master01:~# kubectl  get ep |grep ng
ng-deploy-80   10.200.135.142:80,10.200.217.77:80                               4m14s
root@k8s-master01:~#

Access from outside the cluster:

http://192.168.248.102:30008/

IV. Volumes

1. Introduction

A volume decouples the data under a given container directory from the container itself and stores it in a specified location. Different volume types behave differently; volumes backed by network storage can share data between containers and persist it.
Static volumes require manually creating a PV and a PVC before use and binding them to the pod.

Commonly used volume types:

  • emptyDir: local ephemeral volume

  • hostPath: local storage volume

  • nfs and similar: network storage volumes

  • Secret: an object holding a small amount of sensitive data such as passwords, tokens, or keys

  • configMap: configuration files

Reference: https://kubernetes.io/zh/docs/concepts/storage/volumes/

2. emptyDir

An emptyDir volume is created when a Pod is assigned to a node and exists for as long as that Pod runs on that node. As the name suggests, it starts out empty. The containers in the Pod can all read and write the same files in the emptyDir volume, even though it may be mounted at the same or different paths in each container. When the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.

Demo:

Create a Deployment that mounts /cache backed by an emptyDir volume:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /cache # created automatically inside the container if it does not exist
          name: cache-volume # volume name
      volumes:
      - name: cache-volume # reference the volume by name
        emptyDir: {}  # volume type
# Create
root@k8s-master01:~# kubectl  apply -f emptyDir.yaml
# Check
root@k8s-master01:~# kubectl  get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-6d6f6ccff9-2vtfn   1/1     Running   0          2m12s   10.200.58.201   k8s-node02   <none>           <none>
# The pod is running on k8s-node02; log in to k8s-node02 and look for the volume directory
root@k8s-node02:~# find / -name "cache-volume"
/var/lib/kubelet/pods/cc7103e5-c136-4967-b556-d0ac115e8f4b/volumes/kubernetes.io~empty-dir/cache-volume
/var/lib/kubelet/pods/cc7103e5-c136-4967-b556-d0ac115e8f4b/plugins/kubernetes.io~empty-dir/cache-volume

# Enter the pod and write some data
root@k8s-master01:~# kubectl  exec  -it nginx-deployment-6d6f6ccff9-2vtfn bash
root@nginx-deployment-6d6f6ccff9-2vtfn:/# echo "empty test " >> /cache/test.log

# Check on k8s-node02 whether the test data is there
root@k8s-node02:/var/lib/kubelet/pods/cc7103e5-c136-4967-b556-d0ac115e8f4b/volumes/kubernetes.io~empty-dir/cache-volume# cat test.log
empty test

# Delete the pod
root@k8s-master01:~# kubectl  delete pod nginx-deployment-6d6f6ccff9-2vtfn

# After the deletion, check whether the data on k8s-node02 was removed as well; even the directory is gone
root@k8s-node02:~# !find
find / -name "cache-volume"
root@k8s-node02:~#

3. hostPath

A hostPath volume mounts a file or directory from the host node's filesystem into the pod. When the pod is deleted, the data in the volume is not deleted.

Demo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
      - name: cache-volume
        hostPath:
          path: /data/kubernetes
# Create
root@k8s-master01:~# kubectl  apply -f hostPath.yaml
# Check which node the pod is running on
root@k8s-master01:~# kubectl  get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-b8587d7c-65rdt   1/1     Running   0          41s   10.200.85.215   k8s-node01   <none>           <none>
# Enter the pod and write test data
root@k8s-master01:~# kubectl  exec -it nginx-deployment-b8587d7c-65rdt bash
root@nginx-deployment-b8587d7c-65rdt:/# echo "hostPath test" >> /cache/test.log
# Check on node1
root@k8s-node01:~# cat /data/kubernetes/test.log
hostPath test
root@k8s-node01:~#
# Scale the deployment to 3 replicas and see whether the data is synchronized across nodes
root@k8s-master01:~# kubectl  scale deployment nginx-deployment --replicas=3
root@k8s-master01:~# kubectl  get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-b8587d7c-65rdt   1/1     Running   0          5m24s   10.200.85.215   k8s-node01   <none>           <none>
nginx-deployment-b8587d7c-mfh76   1/1     Running   0          57s     10.200.58.202   k8s-node02   <none>           <none>
nginx-deployment-b8587d7c-qr29l   1/1     Running   0          57s     10.200.217.79   k8s-node04   <none>           <none>

# Check node2 and node4 for the test data
root@k8s-node02:~# ll /data/kubernetes/
total 8
drwxr-xr-x 2 root root 4096 May 26 11:12 ./
drwxr-xr-x 3 root root 4096 May 26 11:12 ../
root@k8s-node02:~#
# node4
root@k8s-node04:~# ll /data/kubernetes/
total 8
drwxr-xr-x 2 root root 4096 May 26 11:12 ./
drwxr-xr-x 3 root root 4096 May 26 11:12 ../
root@k8s-node04:~#

# Delete the deployment and check whether the test data on node1 is still there
root@k8s-master01:~# kubectl  delete deployment nginx-deployment
# The test data on node1 is still there
root@k8s-node01:~# cat /data/kubernetes/test.log
hostPath test
root@k8s-node01:~#

4. NFS shared storage

An nfs volume mounts an existing NFS (Network File System) share into containers. Unlike emptyDir, the data is not lost: when a Pod is deleted, the nfs volume is only unmounted and its contents are preserved. This means an NFS share can be pre-populated with data that is available as soon as the pod starts, and the same data can be shared by multiple pods, i.e. an NFS volume can be mounted and read/written by several pods at the same time.

4.1 Installing NFS

NFS was already installed earlier; the steps are repeated here for reference.

# Install the NFS server on one machine; harbor02 is used here
apt-get install nfs-server -y
# Create the export directory
mkdir -p /data/k8sdata/
# Edit the exports file
vim /etc/exports
# Add the following line
/data/k8sdata *(rw,no_root_squash)
# Start and enable the service
systemctl restart nfs-server.service
systemctl enable nfs-server.service

4.2 Demo

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite  # directory to mount the share at
          name: my-nfs-volume  # volume name
      volumes:
      - name: my-nfs-volume # reference the volume by name
        nfs:
          server: 192.168.248.109 # NFS server address
          path: /data/k8sdata # exported NFS directory

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30016
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
#  Deploy
root@k8s-master01:~# kubectl  apply -f nfs.yaml
# Check the pod and service
root@k8s-master01:~# kubectl  get pod,svc
NAME                                   READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-84789c747-2z6kn   1/1     Running   0          47s

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes     ClusterIP   10.100.0.1       <none>        443/TCP        8d
service/ng-deploy-80   NodePort    10.100.159.192   <none>        81:30016/TCP   47s
root@k8s-master01:~#

Write test data

# Enter the pod
root@k8s-master01:~# kubectl  exec -it nginx-deployment-84789c747-2z6kn  bash
# Write test data
root@nginx-deployment-84789c747-2z6kn:~# echo "test" > /usr/share/nginx/html/mysite/1.txt

Access test

4.3 Mounting multiple NFS shares

Create several directories on the NFS server:

# Create the directories
mkdir  /data/k8sdata/pool{1,2}
# Edit the exports file
vim /etc/exports
# Add the following lines
/data/k8sdata/pool1 *(rw,no_root_squash)
/data/k8sdata/pool2 *(rw,no_root_squash)
# Restart
systemctl restart nfs-server.service

yaml

#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-site2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-81
  template:
    metadata:
      labels:
        app: ng-deploy-81
    spec:
      containers:
      - name: ng-deploy-81
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/pool1
          name: my-nfs-volume-pool1
        - mountPath: /usr/share/nginx/html/pool2
          name: my-nfs-volume-pool2
      volumes:
      - name: my-nfs-volume-pool1
        nfs:
          server: 192.168.248.109
          path: /data/k8sdata/pool1
      - name: my-nfs-volume-pool2
        nfs:
          server: 192.168.248.109
          path: /data/k8sdata/pool2

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-81
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30017
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-81

Test

# Deploy
root@k8s-master01:~# kubectl  apply -f nfs2.yaml
# Check the pod
root@k8s-master01:~# kubectl  get pod
NAME                                     READY   STATUS    RESTARTS   AGE
nginx-deployment-site2-d7b98b75d-bf4pr   1/1     Running   0          25s
root@k8s-master01:~#
# Write test files from inside the pod
root@nginx-deployment-site2-d7b98b75d-bf4pr:/# echo "test1111111" > /usr/share/nginx/html/pool1/1.txt
root@nginx-deployment-site2-d7b98b75d-bf4pr:/# echo "test2222222" > /usr/share/nginx/html/pool2/2.txt

Access 1.txt and 2.txt through the NodePort service in a browser to confirm that both NFS mounts work.

5. PV/PVC

5.1 Overview

PV: PersistentVolume

A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by a Kubernetes administrator. It is a cluster-level resource and does not belong to any namespace. The data in a PV ultimately lives on the underlying hardware storage. Pods cannot mount a PV directly: the PV has to be bound to a PVC, and the pod mounts the PVC. PVs support NFS, Ceph, commercial storage, cloud-provider-specific storage, and more, and you can define whether a PV is block or file storage, its capacity, its access modes, and so on. A PV's lifecycle is independent of pods: a pod that uses the PV can be deleted without affecting the data in the PV.

PVC: PersistentVolumeClaim

A PersistentVolumeClaim (PVC) is a pod's request for storage. The pod mounts the PVC and stores its data through it, and the PVC must be bound to a PV before it can be used. A PVC is created in a specific namespace, and the pod must run in the same namespace as its PVC. A PVC can request a specific size and access mode, and deleting the pod that uses it does not affect the data in the PVC.

What PV and PVC are for:

  • They decouple pods from storage, so the storage backend can be changed without modifying the pods.
  • Compared with mounting NFS directly, the PV/PVC layer makes it possible to manage space allocation and access permissions on the storage server.
  • Kubernetes has supported PersistentVolume and PersistentVolumeClaim since version 1.0.

PV/PVC summary:

  • A PV is an abstraction over the underlying network storage: it defines the network storage as a storage resource, so that one large storage pool can be split into pieces for different workloads.
  • A PVC is a claim on PV resources: the pod writes data through the PVC to the PV, and the PV stores it on the real hardware storage.

5.2 Parameters

PV parameters

Access modes:

kubectl explain PersistentVolume.spec.accessModes

accessModes:

  • ReadWriteOnce – the PV can be mounted read-write by a single node (RWO)
  • ReadOnlyMany – the PV can be mounted by many nodes, but read-only (ROX)
  • ReadWriteMany – the PV can be mounted read-write by many nodes (RWX)

Reclaim policy:

persistentVolumeReclaimPolicy # what happens to the volume's data when the claim is released:
#kubectl explain PersistentVolume.spec.persistentVolumeReclaimPolicy
Retain – the PV is kept as-is after release and eventually has to be deleted manually by an administrator (recommended)
Recycle – space reclamation: all data on the volume (including directories and hidden files) is deleted; currently only supported for NFS and hostPath
Delete – the volume is deleted automatically (use with caution)

PVC parameters

Access modes:

kubectl explain PersistentVolumeClaim.spec.accessModes

accessModes: PVC access modes

  • ReadWriteOnce – the PVC can be mounted read-write by a single node (RWO)
  • ReadOnlyMany – the PVC can be mounted by many nodes, but read-only (ROX)
  • ReadWriteMany – the PVC can be mounted read-write by many nodes (RWX)

Volume mode:

kubectl explain PersistentVolumeClaim.spec.volumeMode

volumeMode # volume mode
Defines whether the volume is presented as a block device or a filesystem; the default is filesystem.

5.3 Static and dynamic provisioning

PV provisioning can be static or dynamic:

  • Static provisioning

    Static volumes: a PV is created manually in advance, then a PVC is created and bound to it, and finally mounted into the pod. Suitable for scenarios where PVs and PVCs are relatively fixed.

  • Dynamic provisioning

    Dynamic volumes: a StorageClass is created first, and PVCs created later can have their storage provisioned dynamically through the StorageClass. Suitable for stateful clusters such as a MySQL primary with replicas or a ZooKeeper cluster.

Static provisioning

Create a PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myserver-myapp-static-pv  # a PV is cluster-scoped, so no namespace is needed
spec:
  capacity:
    storage: 10Gi  # capacity
  accessModes:
    - ReadWriteMany  # RWX: the PV can be mounted read-write by many nodes
  nfs:
    path: /data/k8sdata/pv # NFS export path
    server: 192.168.248.109 # NFS server address

Create and check:

root@k8s-master01:~# kubectl  apply -f pv.yaml
persistentvolume/myserver-myapp-static-pv created
root@k8s-master01:~# kubectl  get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myserver-myapp-static-pv   10Gi       RWX            Retain           Available                                   4s
root@k8s-master01:~#

Create a PVC

First create the namespace with kubectl create ns myserver. A PV does not need a namespace, but a PVC does.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myserver-myapp-static-pvc
  namespace: myserver  # namespace; must match the pod's namespace
spec:
  volumeName: myserver-myapp-static-pv
  accessModes:
    - ReadWriteMany  # RWX: can be mounted read-write by many nodes
  resources:
    requests:
      storage: 10Gi # requested size

Create and check:

root@k8s-master01:~# kubectl  apply -f pvc.yaml
# Check
root@k8s-master01:~# kubectl  get pvc -n myserver
NAME                        STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myserver-myapp-static-pvc   Bound    myserver-myapp-static-pv   10Gi       RWX                           31s

Create a test pod

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 3 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-static-pvc  

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend

Create and test:

# Create
root@k8s-master01:~# kubectl  apply -f test-web.yaml
# Check the pods and service
root@k8s-master01:~# kubectl  get pod,svc -n myserver
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/myserver-myapp-deployment-name-b7cfb8476-455nx   1/1     Running   0          20m
pod/myserver-myapp-deployment-name-b7cfb8476-tzlfb   1/1     Running   0          20m
pod/myserver-myapp-deployment-name-b7cfb8476-z67qp   1/1     Running   0          20m

NAME                                  TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/myserver-myapp-service-name   NodePort   10.100.253.15   <none>        80:30080/TCP   20m
root@k8s-master01:~#

Upload a static file to the /data/k8sdata/pv directory on the NFS server, then access it through the web page.

Dynamic provisioning

Create the RBAC resources

apiVersion: v1
kind: Namespace
metadata:
  name: nfs
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" 
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
reclaimPolicy: Retain # PV reclaim policy; the default Delete removes the data on the NFS server as soon as the PV is deleted
mountOptions:
  #- vers=4.1 # some of these options misbehave with containerd
  #- noresvport # tell the NFS client to use a new TCP source port when re-establishing the connection
  - noatime # do not update inode access times; improves performance under heavy concurrency
parameters:
  #mountOptions: "vers=4.1,noresvport,noatime"
  archiveOnDelete: "true"  # archive (keep) the data when the claim is deleted; with the default false the data is not kept

Create the NFS provisioner

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy: # deployment strategy
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 
          image: registry.cn-qingdao.aliyuncs.com/zhangshijie/nfs-subdir-external-provisioner:v4.0.2 
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.248.109
            - name: NFS_PATH
              value: /data/volumes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.248.109
            path: /data/volumes

Create a PVC

# Test PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myserver-myapp-dynamic-pvc
  namespace: myserver
spec:
  storageClassName: nfs-csi # name of the StorageClass to use
  accessModes:
    - ReadWriteMany # access mode
  resources:
    requests:
      storage: 500Mi # requested size

Create a test service

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-myapp 
  name: myserver-myapp-deployment-name
  namespace: myserver
spec:
  replicas: 1 
  selector:
    matchLabels:
      app: myserver-myapp-frontend
  template:
    metadata:
      labels:
        app: myserver-myapp-frontend
    spec:
      containers:
        - name: myserver-myapp-container
          image: nginx:1.20.0 
          #imagePullPolicy: Always
          volumeMounts:
          - mountPath: "/usr/share/nginx/html/statics"
            name: statics-datadir
      volumes:
        - name: statics-datadir
          persistentVolumeClaim:
            claimName: myserver-myapp-dynamic-pvc 

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-myapp-service
  name: myserver-myapp-service-name
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: myserver-myapp-frontend

Check that everything is working:

# StorageClass
root@k8s-master01:~# kubectl  get sc
NAME                PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Retain          Immediate           false                  5h48m
root@k8s-master01:~#

# Check the NFS provisioner
root@k8s-master01:~# kubectl  get pod -n nfs
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5dbd968755-k67zt   1/1     Running   0          5h45m
root@k8s-master01:~#

# Check the PV
root@k8s-master01:~# kubectl  get pv |grep nfs-csi
pvc-d88b6835-7e55-41cc-93e7-551465a6be33   500Mi      RWX            Retain           Bound      myserver/myserver-myapp-dynamic-pvc   nfs-csi                 5h33m
root@k8s-master01:~#

# Check the PVC
root@k8s-master01:~# kubectl  get pvc -n myserver
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myserver-myapp-dynamic-pvc   Bound    pvc-d88b6835-7e55-41cc-93e7-551465a6be33   500Mi      RWX            nfs-csi        5h34m
root@k8s-master01:~#

# Check the test web service
root@k8s-master01:~# kubectl  get pod,svc -n myserver
NAME                                                  READY   STATUS    RESTARTS   AGE
pod/myserver-myapp-deployment-name-65ff65446f-c6hrx   1/1     Running   0          6h9m

NAME                                  TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/myserver-myapp-service-name   NodePort   10.100.112.58   <none>        80:30080/TCP   6h9m
root@k8s-master01:~#

Check whether the corresponding directory was created on the NFS server:

root@harbor02:~# ll /data/volumes/myserver-myserver-myapp-dynamic-pvc-pvc-d88b6835-7e55-41cc-93e7-551465a6be33/
total 8
drwxrwxrwx 2 root root 4096 May 26 17:45 ./
drwxr-xr-x 6 root root 4096 May 26 17:45 ../
root@harbor02:~#

Test

Upload an image into the NFS directory.

Access it through the browser:

http://192.168.248.100:30080/statics/test01.png
