Integrating Kubernetes with Ceph CSI
1. Create a storage pool
ceph osd pool create kubernetes 128 128
rbd pool init kubernetes
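The 128 placement-group count above is a sizing choice, not a requirement. A rough sketch of the common rule of thumb (total PGs ≈ OSDs × 100 / replica count, rounded down to a power of two) — the OSD count below is a hypothetical example, not taken from this cluster:

```shell
# Sketch: rough placement-group sizing for a pool.
# Assumption: the common "OSDs * 100 / replicas" rule, rounded
# down to a power of two; osds=4 is a hypothetical value.
osds=4
replicas=3
target=$(( osds * 100 / replicas ))   # 133 for these numbers

pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"    # 128
```

On recent Ceph releases the pg_autoscaler can manage this automatically, so the manual count mainly matters when the autoscaler is off.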
2. Create and authorize an authentication user
ceph auth get-or-create client.kubernetes mon 'profile rbd' \
osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
[client.kubernetes]
key = AQAVbc1hw04aGBAALuVDb9BbZV1SVkrO8gL+nw== # the generated key is used for cephx authentication
ceph auth list | grep -A 4 client.kubernetes
3. Collect cluster information
ceph mon dump
<...>
fsid 3f5ad768-3381-4a48-8a53-f957138db67a # cluster ID
<...>
0: [v2:172.18.41.121:3300/0,v1:172.18.41.121:6789/0] mon.master01 # mon node address and ports
1: [v2:172.18.41.122:3300/0,v1:172.18.41.122:6789/0] mon.master02 # mon node address and ports
2: [v2:172.18.41.123:3300/0,v1:172.18.41.123:6789/0] mon.master03 # mon node address and ports
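The fsid and the v1 monitor endpoints above are exactly what the CSI ConfigMap needs. As a sketch of pulling them out of saved `ceph mon dump` output (a sample is embedded here since no live cluster is assumed; on a real cluster pipe `ceph mon dump` in instead):

```shell
# Sketch: extract clusterID and v1 monitor endpoints from `ceph mon dump`
# output. The dump text below is a saved sample from this guide.
dump='fsid 3f5ad768-3381-4a48-8a53-f957138db67a
0: [v2:172.18.41.121:3300/0,v1:172.18.41.121:6789/0] mon.master01
1: [v2:172.18.41.122:3300/0,v1:172.18.41.122:6789/0] mon.master02
2: [v2:172.18.41.123:3300/0,v1:172.18.41.123:6789/0] mon.master03'

cluster_id=$(printf '%s\n' "$dump" | awk '/^fsid/ {print $2}')
monitors=$(printf '%s\n' "$dump" | grep -o 'v1:[0-9.]*:6789' | sed 's/^v1://')

echo "clusterID: $cluster_id"
echo "monitors:"
printf '  %s\n' $monitors
```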
4. Apply configuration files
4.1 Create and apply the csi-config-map file
vim csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "3f5ad768-3381-4a48-8a53-f957138db67a",
        "monitors": [
          "172.18.41.121:6789",
          "172.18.41.122:6789",
          "172.18.41.123:6789"
        ]
      }
    ]
metadata:
  name: "ceph-csi-config"
kubectl apply -f csi-config-map.yaml
4.2 Create and apply the csi-kms-config-map file
vim csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
kubectl apply -f csi-kms-config-map.yaml
4.3 Create and apply the ceph-config-map file
vim ceph-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    fsid = 3f5ad768-3381-4a48-8a53-f957138db67a
    public_network = 172.18.41.0/24
    cluster_network = 172.18.41.0/24
    mon_initial_members = master01, master02, master03
    mon_host = 172.18.41.121,172.18.41.122,172.18.41.123
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    mon_allow_pool_delete = true
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
kubectl apply -f ceph-config-map.yaml
5. Create and apply the cephx authentication Secret
vim csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQAVbc1hw04aGBAALuVDb9BbZV1SVkrO8gL+nw==
kubectl apply -f csi-rbd-secret.yaml
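Note that `stringData` is a write-only convenience: the API server base64-encodes each value into the Secret's `data` field. A quick sketch of what the stored Secret ends up holding, using this guide's example credentials:

```shell
# Sketch: what `stringData` becomes once the API server stores the Secret.
# Each value is base64-encoded into the `data` field.
userID="kubernetes"
userKey="AQAVbc1hw04aGBAALuVDb9BbZV1SVkrO8gL+nw=="

# printf (no trailing newline) matters: encoding a stray newline
# into the key would break cephx authentication
echo "userID:  $(printf '%s' "$userID" | base64)"
echo "userKey: $(printf '%s' "$userKey" | base64)"
```

The same values can be inspected later with `kubectl get secret csi-rbd-secret -o yaml`.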
6. Configure the ceph-csi plugin
6.1 Apply the RBAC authorization files
kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
6.2 Create the ceph-csi provisioner and node plugin
By default these manifests pull development builds of the ceph-csi containers, and the sidecar images come from gcr. If Google's registry is unreachable from your nodes, see step 6.3.
kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
6.3 Download the project from GitHub and change the image registry
wget https://github.com/ceph/ceph-csi/archive/refs/heads/release-v3.4.zip -O ceph-csi-release-v3.4.zip
unzip ceph-csi-release-v3.4.zip
cd ceph-csi-release-v3.4/deploy/rbd/kubernetes
vim csi-rbdplugin-provisioner.yaml # change the image registry addresses
vim csi-rbdplugin.yaml # change the image registry addresses
Original images in the files:
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1
k8s.gcr.io/sig-storage/csi-attacher:v3.2.1
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
quay.io/cephcsi/cephcsi:v3.4-canary
Replacement images available on Docker Hub:
antidebug/csi-provisioner:v2.2.2
antidebug/csi-snapshotter:v4.1.1
antidebug/csi-attacher:v3.2.1
antidebug/csi-resizer:v1.2.0
antidebug/csi-node-driver-registrar:v2.2.0
quay.io/cephcsi/cephcsi:v3.4-canary
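Rather than editing every image line by hand, the registry prefix can be swapped with a single sed expression. A sketch, demonstrated on a sample line (the antidebug/ mirror names are the ones listed above):

```shell
# Sketch: rewrite the k8s.gcr.io/sig-storage prefix to the Docker Hub
# mirror. Shown on a sample line; the same expression works on the files.
sample='image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2'
rewritten=$(printf '%s\n' "$sample" | sed 's#k8s.gcr.io/sig-storage/#antidebug/#g')
echo "$rewritten"   # image: antidebug/csi-provisioner:v2.2.2
```

Applied in place (with backups) from deploy/rbd/kubernetes, that would be: `sed -i.bak 's#k8s.gcr.io/sig-storage/#antidebug/#g' csi-rbdplugin-provisioner.yaml csi-rbdplugin.yaml`. The quay.io cephcsi image needs no change.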
Apply the modified manifests
kubectl apply -f csi-rbdplugin-provisioner.yaml
kubectl apply -f csi-rbdplugin.yaml
7. Create and apply a StorageClass
vim csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]'
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 3f5ad768-3381-4a48-8a53-f957138db67a
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
kubectl apply -f csi-rbd-sc.yaml
8. Test cases
8.1 Create a block-mode PersistentVolumeClaim
vim raw-block-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
kubectl apply -f raw-block-pvc.yaml
8.2 Bind the above PersistentVolumeClaim to a Pod as a raw block device
vim raw-block-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
  - name: fc-container
    image: fedora:26
    command: ["/bin/sh", "-c"]
    args: ["tail -f /dev/null"]
    volumeDevices:
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-block-pvc
kubectl apply -f raw-block-pod.yaml
8.3 Create a filesystem-mode PersistentVolumeClaim
vim pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
kubectl apply -f pvc.yaml
8.4 Mount the above PersistentVolumeClaim into a Pod as a file directory
vim pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - name: mypvc
      mountPath: /var/lib/www/html
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: rbd-pvc
      readOnly: false
kubectl apply -f pod.yaml
9. Appendix: troubleshooting an error
Problem: while replacing openebs local persistent volumes with Ceph storage in a KubeSphere cluster, the following error occurred:
driver name rbd.csi.ceph.com not found in the list of registered CSI drivers
Cause: the DaemonSet manifest in the official ceph-csi deploy directory does not tolerate the master taint, so no csi-rbdplugin pod runs on master nodes.
As a result, a pod on a master node that requests a volume through ceph-csi can get a PV provisioned but cannot mount it.
KubeSphere's redis-ha-server service must run on the master nodes, hence the error.
Fix: add a toleration for the master taint to the csi-rbdplugin DaemonSet.
vim ceph-csi/deploy/rbd/kubernetes/csi-rbdplugin.yaml
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
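Note that on Kubernetes 1.24 and later the control-plane taint key was renamed, so newer clusters may need a toleration for node-role.kubernetes.io/control-plane as well. A fragment covering both keys, for the same tolerations list:

```yaml
# Tolerates both the legacy master taint and the 1.24+ control-plane taint
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
- key: node-role.kubernetes.io/control-plane
  effect: NoSchedule
```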
Official docs: https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/
Reference: https://blog.csdn.net/DANTE54/article/details/106471848/