Ceph with Kubernetes (K8s): usage examples

Static provisioning

Create and initialize the RBD pool

ceph osd pool create k8s-rbd-pool1 32 32 #create a new storage pool (32 PGs / 32 PGPs)
ceph osd pool ls #list the pools
ceph osd pool application enable k8s-rbd-pool1 rbd #enable the rbd application on the pool
rbd pool init -p k8s-rbd-pool1 #initialize the pool for RBD
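
If you want a quick sanity check of the pool before moving on, the pool details (PG count and the enabled rbd application) can be viewed in one place, e.g.:

ceph osd pool ls detail | grep k8s-rbd-pool1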

Create an image

rbd create k8s-img-img1 --size 3G --pool k8s-rbd-pool1 --image-format 2 --image-feature layering #create the image
rbd ls --pool k8s-rbd-pool1 #verify it exists
rbd --image k8s-img-img1 --pool k8s-rbd-pool1 info #check the image details

Install the ceph-common package on each Kubernetes master and node:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo 'deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific/ bionic main' | sudo tee -a /etc/apt/sources.list
apt update
apt-cache madison ceph-common
apt install ceph-common
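
As a quick sanity check (not strictly required), confirm the client tools are installed and which Ceph release they came from:

ceph --version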

Create a Ceph client user with the required authorization, and export its credentials to a keyring file:

ceph auth get-or-create client.k8s mon 'allow r' osd 'allow * pool=k8s-rbd-pool1'
ceph auth get client.k8s
ceph auth get client.k8s -o ceph.client.k8s.keyring
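
The exported keyring file should look roughly like the sketch below (key value elided); ceph auth get exports the caps together with the key:

cat ceph.client.k8s.keyring
[client.k8s]
        key = AQ...==
        caps mon = "allow r"
        caps osd = "allow * pool=k8s-rbd-pool1"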

Copy both ceph.conf and ceph.client.k8s.keyring to the Kubernetes master and node machines:

scp ceph.conf ceph.client.k8s.keyring root@master:/etc/ceph
scp ceph.conf ceph.client.k8s.keyring root@node1:/etc/ceph
scp ceph.conf ceph.client.k8s.keyring root@node2:/etc/ceph

Verify the user from a Kubernetes node:

ceph --user k8s -s
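
To confirm the caps actually allow access to the pool, you can also list its images as the k8s user (assuming the keyring was copied to /etc/ceph as above):

rbd ls --pool k8s-rbd-pool1 --id k8s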

Mount the RBD image directly via the keyring file (busybox)

kubectl apply -f case1-busybox-keyring.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox 
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always 
    name: busybox
    #restartPolicy: Always
    volumeMounts:
    - name: rbd-data1
      mountPath: /data
  volumes:
    - name: rbd-data1
      rbd:
        monitors:
        - '172.31.6.101:6789'
        - '172.31.6.102:6789'
        - '172.31.6.103:6789'
        pool: k8s-rbd-pool1
        image: k8s-img-img1
        fsType: ext4
        readOnly: false
        user: k8s
        keyring: /etc/ceph/ceph.client.k8s.keyring

Exec into the busybox container and run df to check that the Ceph RBD is mounted, write some test data, then delete the pod:

kubectl exec -it busybox -- sh

df
cd /data/
echo "yzy" > log.txt
cat log.txt
exit

kubectl delete -f case1-busybox-keyring.yaml
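
The point of writing data before deleting the pod is that the data lives on the RBD image rather than in the container. A quick way to confirm this, assuming you simply recreate the same pod, looks like:

kubectl apply -f case1-busybox-keyring.yaml
kubectl exec -it busybox -- cat /data/log.txt #should still print "yzy"
kubectl delete -f case1-busybox-keyring.yaml #delete it again, since the next example mounts the same image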


Mount RBD via a Secret

Base64-encode the key:

ceph auth print-key client.k8s
ceph auth print-key client.k8s | base64

Create the Secret:

kubectl apply -f case3-secret-client-k8s.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-k8s
type: "kubernetes.io/rbd"
data:
  key: QVFDMVowbGlUYWxZQXhBQUt0Mzhpengxc0YrREVhZllOSkVuMkE9PQ== #replace with your own base64-encoded key
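
As an alternative to the manifest above, the same Secret could be created directly with kubectl, which base64-encodes the literal for you (a sketch, run wherever the ceph CLI can reach the cluster; skip it if you already applied the YAML):

kubectl create secret generic ceph-secret-k8s --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth print-key client.k8s)"
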
kubectl apply -f case4-nginx-secret.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - name: rbd-data1
          mountPath: /usr/share/nginx/html/rbd
      volumes:
        - name: rbd-data1
          rbd:
            monitors:
            - '172.31.6.101:6789'
            - '172.31.6.102:6789'
            - '172.31.6.103:6789'
            pool: k8s-rbd-pool1
            image: k8s-img-img1
            fsType: ext4
            readOnly: false
            user: k8s
            secretRef:
              name: ceph-secret-k8s 
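
Once the Deployment is up, a quick check from inside the nginx pod shows the RBD-backed mount (kubectl exec against a deployment picks one of its pods):

kubectl exec -it deploy/nginx-deployment -- df -h /usr/share/nginx/html/rbd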

Verify the mapping on the host:

rbd showmapped
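
On the node where the pod was scheduled, the mapped device also shows up as a mounted filesystem, e.g.:

mount | grep rbd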


Dynamic volume provisioning

Create Secrets for the admin user and the k8s user; the Secret manifest for the regular k8s user is shown above, so only the admin Secret is created here:

ceph auth print-key client.admin | base64
kubectl apply -f case5-secret-admin.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
  key: QVFBTkJVQmlRc3MzR3hBQWtqR3hCVlk0Y1VyRy9waS9TWlFqd3c9PQ== #replace with your own base64-encoded key

Create a StorageClass that provides dynamic PVCs for Pods:

kubectl apply -f case6-ceph-storage-class.yaml
kubectl get storageclass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class-k8s
  annotations:
    storageclass.kubernetes.io/is-default-class: "false" #set to "true" to make this the default StorageClass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.31.6.101:6789,172.31.6.102:6789,172.31.6.103:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default 
  pool: k8s-rbd-pool1
  userId: k8s
  userSecretName: ceph-secret-k8s

Create a PVC backed by the StorageClass:

kubectl apply -f case7-mysql-pvc.yaml
kubectl get pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-storage-class-k8s 
  resources:
    requests:
      storage: '5Gi'
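
To see whether the claim was bound and which PV (and hence which RBD image) the provisioner created for it, a quick sketch:

kubectl get pvc mysql-data-pvc
kubectl describe pvc mysql-data-pvc #the events show the provisioning result
kubectl get pv #the bound PV references the rbd image name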

View the provisioned image on the Ceph side:

rbd ls --pool k8s-rbd-pool1

Create a MySQL Deployment that uses the PVC. Note that nodePort 33306 below (and 33380 in the CephFS example) is outside the default 30000-32767 NodePort range, so the kube-apiserver needs an extended --service-node-port-range for these Services to be accepted:

kubectl apply -f case8-mysql-single.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6.46
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: yzy123456
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-data-pvc 


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label 
  name: mysql-service
spec:
  type: NodePort
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 33306
  selector:
    app: mysql
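
Once the pod is running, MySQL is reachable through any node IP on the NodePort (a sketch, assuming a mysql client is installed locally; <node-ip> is a placeholder for one of your node addresses):

mysql -h <node-ip> -P 33306 -uroot -pyzy123456 -e 'show databases;'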


CephFS example: after the Deployment is up, create a file in one pod and check from another pod that the same file is visible there, since all replicas mount the same CephFS path (see the check sketched after the manifests below).

kubectl apply -f case9-nginx-cephfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - name: yzy-staticdata-cephfs 
          mountPath: /usr/share/nginx/html/cephfs
      volumes:
        - name: yzy-staticdata-cephfs
          cephfs:
            monitors:
            - '172.31.6.101:6789'
            - '172.31.6.102:6789'
            - '172.31.6.103:6789'
            path: /
            user: admin
            secretRef:
              name: ceph-secret-admin

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: ng-deploy-80-service-label
  name: ng-deploy-80-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33380
  selector:
    app: ng-deploy-80
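
The sync check described above can be done like this (<pod-a> and <pod-b> are placeholders for two of the pod names shown by kubectl get pods):

kubectl get pods -l app=ng-deploy-80
kubectl exec -it <pod-a> -- sh -c 'echo cephfs-test > /usr/share/nginx/html/cephfs/test.txt'
kubectl exec -it <pod-b> -- cat /usr/share/nginx/html/cephfs/test.txt #should print cephfs-test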

