Setting up a MongoDB 4.4 cluster with authentication on K8S (NFS as backend storage)

We need to rebuild the image here, mainly to bake the auth keyfile into it; the reason is explained below.

mkdir -p /data/images/mongodb && cd /data/images/mongodb

openssl rand -base64 741 > mongodb-keyfile

vi Dockerfile 
FROM mongo:4.4
ADD mongodb-keyfile /data/config/mongodb-keyfile
RUN chown mongodb:mongodb /data/config/mongodb-keyfile && chmod 400 /data/config/mongodb-keyfile

docker build -t harbor.junengcloud.com/mongo/mongo4.4-auth:202111.2 .
docker push harbor.junengcloud.com/mongo/mongo4.4-auth:202111.2
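
You can optionally sanity-check that the keyfile inside the image has the expected owner and mode; this just runs ls in a throwaway container:

docker run --rm harbor.junengcloud.com/mongo/mongo4.4-auth:202111.2 ls -l /data/config/mongodb-keyfile
# expected output starts with: -r-------- 1 mongodb mongodb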

Reason:

Mounting the keyfile into MongoDB via a volume (for example, storing it as a Secret and mounting that Secret into the container) runs into permission problems when mongod tries to open the keyfile, so MongoDB fails to start.

MongoDB's entrypoint uses gosu to start mongod as the mongodb user. A keyfile mounted into the container via a volume is owned by root (both UID and GID), so with 600 or 400 permissions the mongodb user cannot open the keyfile at startup. The obvious fix is to change the UID and GID of the mounted keyfile, but the Kubernetes SecurityContext only supports setting the GID via fsGroup; there is no way to set the UID of a volume. If you set the keyfile's GID to mongodb and relax the mode to 660 or 440 to grant group access, mongod then complains that the permissions are too open. This leaves you in an awkward spot, so in the end the keyfile is simply baked into the image.
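
For reference, the abandoned Secret-based variant looked roughly like the pod-spec fragment below. The Secret name mongodb-keyfile is illustrative, and 999 is assumed to be the GID of the mongodb group in the official image:

      securityContext:
        fsGroup: 999                      # assumed GID of the mongodb group in the image
      containers:
        - name: mongo
          volumeMounts:
            - name: keyfile
              mountPath: /data/config
      volumes:
        - name: keyfile
          secret:
            secretName: mongodb-keyfile   # illustrative Secret holding the keyfile
            defaultMode: 0440             # 0400 is unreadable for the mongodb user; 0440 is rejected as too open

Neither mode works, which is exactly the dead end described above.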

To set up NFS, you can refer to: https://www.cnblogs.com/klvchen/p/13234779.html
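
As a minimal sketch (assuming the NFS server is 172.16.16.140 and the export path is /data/nfs, matching the provisioner configuration below), the server side looks roughly like this:

mkdir -p /data/nfs
echo "/data/nfs *(rw,sync,no_root_squash)" >> /etc/exports
systemctl enable --now nfs-server   # service name may differ depending on the distribution
exportfs -rav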

Install nfs-utils on all worker nodes in the K8S cluster:

yum install nfs-utils -y
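
From any worker node you can then confirm that the export is visible:

showmount -e 172.16.16.140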

Run the following on the k8s master:

mkdir -p /data/yaml/kube-system/nfs-client-provisioner
cd /data/yaml/kube-system/nfs-client-provisioner

nfs-client-provisioner.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: harbor.junengcloud.com/tmp/nfs-client-provisioner:latest
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs 
            - name: NFS_SERVER
              value: 172.16.16.140
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.16.140
            path: /data/nfs

rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: nfs
parameters:
  archiveOnDelete: "true"    # "true" means that when a PV is deleted, its data is archived instead of removed

Apply the manifests:

kubectl apply -f rbac.yaml  
kubectl apply -f nfs-client-provisioner.yaml  
kubectl apply -f storageclass.yaml
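
Optionally verify that the provisioner pod is running and the StorageClass exists before creating the MongoDB cluster:

kubectl get pods -n kube-system -l app=nfs-client-provisioner
kubectl get storageclass nfs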

Create the mongodb cluster
sts-nfs.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    name: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      name: mongo
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        name: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: harbor.junengcloud.com/mongo/mongo4.4-auth:202111.2
          env:
          - name: MONGO_INITDB_ROOT_USERNAME
            value: admin
          - name: MONGO_INITDB_ROOT_PASSWORD
            value: dSJN52PuXxs
          args:
            - mongod
            - "--replSet"
            - rs
            - "--bind_ip"
            - 0.0.0.0
            - --clusterAuthMode
            - keyFile
            - --keyFile
            - /data/config/mongodb-keyfile
          resources:
            requests:
              cpu: 500m
              memory: 204Mi
            limits:
              cpu: 2500m
              memory: 12Gi
          ports:
            - containerPort: 27017
          livenessProbe:
            tcpSocket:
              port: 27017
            initialDelaySeconds: 180
            periodSeconds: 60
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: "nfs"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi


kubectl apply -f sts-nfs.yaml
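
Optionally wait until all three pods are Running and each PVC is Bound before initializing the replica set:

kubectl get pods -l name=mongo -w
kubectl get pvc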

Initialize the mongo replica set

kubectl exec -it mongo-0 -- /bin/bash

# connect to mongo and authenticate
mongo -u admin -p dSJN52PuXxs --authenticationDatabase admin

// configure the replica set; adjust the hosts to your own environment
var config={
     _id:"rs",
     members:[
         {_id:0,host:"mongo-0.mongo.default.svc.cluster.local:27017"},
         {_id:1,host:"mongo-1.mongo.default.svc.cluster.local:27017"},
         {_id:2,host:"mongo-2.mongo.default.svc.cluster.local:27017"}
]};

// initiate the replica set
rs.initiate(config)

// show the replica set configuration
rs.conf()

// check the current status of the replica set
rs.status()
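
Applications inside the cluster can then connect with a replica-set connection string built from the headless service DNS names, for example:

mongodb://admin:dSJN52PuXxs@mongo-0.mongo.default.svc.cluster.local:27017,mongo-1.mongo.default.svc.cluster.local:27017,mongo-2.mongo.default.svc.cluster.local:27017/admin?replicaSet=rs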

If you need access from outside the cluster, you can refer to:
https://www.cnblogs.com/klvchen/articles/15348271.html
