k8s: attaching RBD block devices with keyring-file authentication

I. k8s keyring-file authentication against Ceph - the admin user

1. Create an RBD block device on the Ceph cluster

	1 Create a storage pool dedicated to K8S
[root@ceph141 ~]# ceph osd pool create yinzhengjie-k8s 128 128
pool 'yinzhengjie-k8s' created
[root@ceph141 ~]# 


	2 Create a 5G image
[root@ceph141 ~]# rbd create -s 5G yinzhengjie-k8s/nginx-web --image-feature layering,exclusive-lock
[root@ceph141 ~]# 
[root@ceph141 ~]# rbd info yinzhengjie-k8s/nginx-web | grep "\sfeatures"
	features: layering, exclusive-lock
[root@ceph141 ~]# 

2. Install the ceph-common toolkit on all k8s worker nodes

curl  -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
EOF
yum -y install ceph-common
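One pitfall in the repo heredoc above: with an unquoted `EOF` delimiter, the shell expands variables while writing the file, so `$basearch` must be escaped as `\$basearch` (as on the `baseurl` lines) for yum, not the shell, to expand it. A minimal local demo of the difference (no Ceph or yum needed):

```shell
# With an unquoted heredoc delimiter, $basearch is expanded by the shell
# at write time, while \$basearch is written literally so that yum can
# expand it later when reading the repo file.
unset basearch          # ensure a clean demo: the shell variable is unset
repo=$(mktemp)
cat > "$repo" << EOF
expanded=$basearch
literal=\$basearch
EOF
cat "$repo"
```

The `expanded=` line comes out empty, while the `literal=` line keeps the `$basearch` placeholder for yum.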

3. Copy the Ceph admin keyring to all worker nodes

[root@ceph141 ~]# scp /etc/ceph/ceph.client.admin.keyring 10.0.0.231:/etc/ceph/
[root@ceph141 ~]# scp /etc/ceph/ceph.client.admin.keyring 10.0.0.232:/etc/ceph/
[root@ceph141 ~]# scp /etc/ceph/ceph.client.admin.keyring 10.0.0.233:/etc/ceph/ 

4. Write the resource manifest

[root@master231 rbd]# cat 01-deploy-svc-volume-rbd-admin-keyring.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-volume-rbd-admin-keyring
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: ceph-rbd
  template:
    metadata:
      labels:
        apps: ceph-rbd
    spec:
      volumes:
      - name: data
        # The volume type is rbd
        rbd:
          # Addresses of the Ceph mon components
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          # Storage pool
          pool: yinzhengjie-k8s
          # Block device image
          image: nginx-web
          # Filesystem type; currently only "ext4", "xfs" and "ntfs" are supported.
          fsType: xfs
          # Whether the device is read-only; defaults to false.
          readOnly: false
          # Ceph user to connect as; defaults to admin if unset
          user: admin
          # Path to the Ceph keyring; defaults to "/etc/ceph/keyring"
          keyring: "/etc/ceph/ceph.client.admin.keyring"
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
        volumeMounts:
        - name: data
          mountPath: /yinzhengjie-data
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-rbd
spec:
  type: NodePort
  selector:
    apps: ceph-rbd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 20080
[root@master231 rbd]# 
[root@master231 rbd]# kubectl apply -f 01-deploy-svc-volume-rbd-admin-keyring.yaml 
deployment.apps/deploy-volume-rbd-admin-keyring created
service/svc-rbd created
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get pods -o wide
NAME                                               READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-volume-rbd-admin-keyring-5c75b54fd4-vhvls   1/1     Running   0          10s   10.100.2.30    worker233   <none>           <none>

...
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get svc svc-rbd 
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc-rbd   NodePort   10.200.40.135   <none>        80:20080/TCP   70s
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get ep svc-rbd 
NAME      ENDPOINTS        AGE
svc-rbd   10.100.2.30:80   2m20s
[root@master231 rbd]# 

5. Access test

http://10.0.0.233:20080

6. Scale the replicas to 3 and compare what happens

[root@master231 rbd]# kubectl get pods -o wide
NAME                                               READY   STATUS              RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
deploy-volume-rbd-admin-keyring-5c75b54fd4-5qr59   0/1     Pending             0          79s     <none>         <none>      <none>           <none>
deploy-volume-rbd-admin-keyring-5c75b54fd4-5r5cp   1/1     Running             0          2m42s   10.100.2.26    worker233   <none>           <none>
deploy-volume-rbd-admin-keyring-5c75b54fd4-9mgr6   0/1     ContainerCreating   0          79s     <none>         worker232   <none>           <none>
...
[root@master231 rbd]# 


Note:
	Two of the Pods cannot reach the Running state, because the same block device cannot be used by multiple Pods at the same time.
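If several Pods genuinely need the same image, one common workaround is to attach it read-only on every consumer, since a read-only mapping does not take the exclusive lock. A hedged sketch of the volume stanza (not verified against this cluster; all writes are lost with this setup):

```yaml
      volumes:
      - name: data
        rbd:
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          pool: yinzhengjie-k8s
          image: nginx-web
          fsType: xfs
          # A read-only mapping does not acquire the exclusive lock,
          # so multiple Pods can attach the image simultaneously.
          readOnly: true
          user: admin
          keyring: "/etc/ceph/ceph.client.admin.keyring"
```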

7. Before deleting the Pod resources, check the block device's lock status on the Ceph cluster

[root@ceph141 ~]# rbd ls -p yinzhengjie-k8s -l
NAME      SIZE  PARENT FMT PROT LOCK 
nginx-web 5 GiB          2      excl 
[root@ceph141 ~]# 

8. Delete the resources

[root@master231 rbd]# kubectl delete -f 01-deploy-svc-volume-rbd-admin-keyring.yaml 
deployment.apps "deploy-volume-rbd-admin-keyring" deleted
service "svc-rbd" deleted
[root@master231 rbd]# 

9. After deleting the Pod resources, check that the Ceph cluster has released the block device's lock

[root@ceph141 ~]# rbd ls -p yinzhengjie-k8s -l
NAME      SIZE  PARENT FMT PROT LOCK 
nginx-web 5 GiB          2           
[root@ceph141 ~]# 
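The LOCK column can also be checked from a script before reusing an image. A sketch that parses `rbd ls -l`-style output (`image_is_locked` is a helper name of my own; the column layout assumed is the one shown in the transcripts above):

```shell
# image_is_locked IMAGE reads "rbd ls -p POOL -l" output on stdin and
# succeeds when the named image row carries the exclusive lock, i.e.
# "excl" in the last column of that row.
image_is_locked() {
  awk -v img="$1" '$1 == img && $NF == "excl" { found = 1 } END { exit !found }'
}

# Typical use (requires a Ceph cluster):
#   rbd ls -p yinzhengjie-k8s -l | image_is_locked nginx-web && echo "still locked"
```

If a stale lock ever survives a Pod crash, `rbd lock ls <pool>/<image>` lists the holders and `rbd lock rm` releases one manually.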

II. k8s keyring-file authentication against Ceph - a custom user

1. Create a custom user in Ceph

[root@ceph141 ~]# ceph auth get-or-create-key client.k8s mon 'allow r' osd 'allow rwx'
AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph auth get client.k8s 
[client.k8s]
	key = AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==
	caps mon = "allow r"
	caps osd = "allow rwx"
exported keyring for client.k8s
[root@ceph141 ~]# 

2. Export the keyring

[root@ceph141 ~]# ceph auth export client.k8s -o ceph.client.k8s.keyring
export auth(key=AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==)
[root@ceph141 ~]# 
[root@ceph141 ~]# ll ceph.client.k8s.keyring 
-rw-r--r-- 1 root root 107 Feb  3 09:39 ceph.client.k8s.keyring
[root@ceph141 ~]# 
[root@ceph141 ~]# cat ceph.client.k8s.keyring 
[client.k8s]
	key = AQBkQrxlR6aVGBAAerMOjQ5Nah/HYafJu+aTsg==
	caps mon = "allow r"
	caps osd = "allow rwx"
[root@ceph141 ~]# 
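If the bare key is ever needed on its own (for example, to base64-encode it into a Kubernetes Secret later), it can be pulled out of the keyring file. A sketch assuming the file layout shown above (`keyring_key` is a hypothetical helper name):

```shell
# keyring_key FILE prints the value of the first "key = ..." line in a
# Ceph keyring file with the layout shown above.
keyring_key() {
  awk '$1 == "key" { print $3; exit }' "$1"
}
```

On the cluster itself, `ceph auth get-key client.k8s` prints the same value directly.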

3. Copy the file to all worker nodes. Note: it must be placed under the "/etc/ceph" directory (see the manifest comment below).

[root@ceph141 ~]# scp ceph.client.k8s.keyring 10.0.0.231:/etc/ceph/
[root@ceph141 ~]# 
[root@ceph141 ~]# scp ceph.client.k8s.keyring 10.0.0.232:/etc/ceph/
[root@ceph141 ~]# 
[root@ceph141 ~]# scp ceph.client.k8s.keyring 10.0.0.233:/etc/ceph/
[root@ceph141 ~]# 

4. Write the k8s resource manifest

[root@master231 rbd]# cat 02-deploy-svc-ing-volume-rbd-k8s-keyring.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-volume-rbd-k8s-keyring
spec:
  replicas: 1
  selector:
    matchLabels:
      apps: ceph-rbd
  template:
    metadata:
      labels:
        apps: ceph-rbd
    spec:
      volumes:
      - name: data
        rbd:
          monitors:
          - 10.0.0.141:6789
          - 10.0.0.142:6789
          - 10.0.0.143:6789
          pool: yinzhengjie-k8s
          image: nginx-web
          fsType: xfs
          readOnly: false
          # Ceph user to connect as
          user: k8s
          # Path to the keyring; note that it must live under "/etc/ceph" on the worker node.
          # For our Ceph Nautilus (v14.2.22), placing the auth file in any other directory is not supported.
          keyring: "/etc/ceph/ceph.client.k8s.keyring"
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
        volumeMounts:
        - name: data
          mountPath: /yinzhengjie-data
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: svc-rbd-k8s
spec:
  selector:
    apps: ceph-rbd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  # Name of the Ingress controller (IngressClass)
  ingressClassName: mytraefik
  rules:
  - host: v2.yinzhengjie.com
    http:
      paths:
      - backend:
          service:
            name: svc-rbd-k8s
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
[root@master231 rbd]# 

5. Run the services

[root@master231 rbd]# kubectl apply -f 02-deploy-svc-ing-volume-rbd-k8s-keyring.yaml 
deployment.apps/deploy-volume-rbd-k8s-keyring created
service/svc-rbd-k8s created
ingress.networking.k8s.io/apps-ingress created
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get pods
NAME                                            READY   STATUS    RESTARTS   AGE
deploy-volume-rbd-k8s-keyring-878f87d5c-nwcwr   1/1     Running   0          51s
...
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get ingress apps-ingress 
NAME           CLASS       HOSTS              ADDRESS   PORTS   AGE
apps-ingress   mytraefik   v2.yinzhengjie.com             80      58s
[root@master231 rbd]# 
[root@master231 rbd]# kubectl describe ingress apps-ingress 
Name:             apps-ingress
Labels:           <none>
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host              Path  Backends
  ----              ----  --------
  v2.yinzhengjie.com  
                    /   svc-rbd-k8s:80 (10.100.2.29:80)
Annotations:        <none>
Events:             <none>
[root@master231 rbd]# 
[root@master231 rbd]# kubectl get svc svc-rbd-k8s 
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
svc-rbd-k8s   ClusterIP   10.200.66.84   <none>        80/TCP    71s
[root@master231 rbd]# 
[root@master231 rbd]# kubectl -n yinzhengjie-traefik get svc mytraefik 
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
mytraefik   LoadBalancer   10.200.205.77   10.0.0.189    80:18238/TCP,443:13380/TCP   12d
[root@master231 rbd]# 

	
A possible error:
2024-02-03 09:53:54.855 7f5c52484c80 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2024-02-03 09:53:54.881 7f5c52484c80 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2024-02-03 09:53:54.881 7f5c52484c80 -1 AuthRegistry(0x55750150e088) no keyring found at /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2024-02-03 09:53:54.881 7f5c52484c80 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2024-02-03 09:53:54.881 7f5c52484c80 -1 AuthRegistry(0x7ffe24b3e658) no keyring found at /etc/ceph/ceph.client.k8s.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
2024-02-03 09:53:54.889 7f5c52484c80 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
rbd: couldn't connect to the cluster!


Cause:
	The Ceph auth (keyring) file cannot be found. Check that the file was copied successfully.
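This failure can be caught before applying the manifest with a small pre-flight check run on each worker node (`check_keyring` is a helper name of my own; the path is the one used in the manifest above):

```shell
# check_keyring PATH verifies that a Ceph keyring file exists and contains
# a [client.*] section; prints a one-line diagnosis and returns non-zero
# on failure.
check_keyring() {
  if [ ! -f "$1" ]; then
    echo "MISSING: $1"
    return 1
  elif ! grep -q '^\[client\.' "$1"; then
    echo "INVALID: $1 (no [client.*] section)"
    return 1
  fi
  echo "OK: $1"
}

# e.g. on every worker:
#   check_keyring /etc/ceph/ceph.client.k8s.keyring
```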

6. Configure name resolution on Windows (hosts file)

10.0.0.189  v2.yinzhengjie.com

7. Access test

http://v2.yinzhengjie.com/
posted @ 2021-01-25 22:58  尹正杰