k8s storage - GlusterFS

I. Solution: use the GlusterFS distributed filesystem on Debian 11 as the storage backend for k8s. For its characteristics, see the official documentation: https://docs.gluster.org/en/latest/

Note: GlusterFS is deployed with at least three disks (one per node) to prevent split-brain; if heketi is used, they must be raw disks (no partitions or filesystems on them).

II. Deployment

1. Deploy glusterfs on the three nodes (installed directly on the hosts)

Edit /etc/hosts on every node:

192.168.152.20  master2
192.168.152.10  master1
192.168.152.30  node02

Download the GPG key, add it to apt, and configure the repository:

wget -O - https://download.gluster.org/pub/gluster/glusterfs/9/rsa.pub | apt-key add -
DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
DEBARCH=$(dpkg --print-architecture)
echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list
apt update
apt install glusterfs-server

Start the service and enable it at boot:

systemctl start glusterd
systemctl enable glusterd

Form the trusted storage pool (run on any one node):

gluster peer probe master1
gluster peer probe node02

Check the pool status; if the other two nodes show up, the pool was set up successfully.
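
Two standard gluster commands show this (run them on the node that issued the probes):

gluster peer status
gluster pool list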

2. Use Heketi to provision glusterfs dynamically (installed directly on the hosts; heketi can run on any node)

Note: Heketi is deployed to provision glusterfs dynamically so that k8s pods can later create PVs on demand; skip this step if you don't need that.

2.1 Install the Heketi server on the master2 node

wget https://github.com/heketi/heketi/releases/download/v10.4.0/heketi-v10.4.0-release-10.linux.amd64.tar.gz
tar zxvf heketi-v10.4.0-release-10.linux.amd64.tar.gz && cd heketi
cp ./heketi /usr/local/bin
mkdir -p /etc/heketi
useradd heketi
mkdir -p /var/lib/heketi && chown -R heketi:heketi /var/lib/heketi

Create the heketi.service unit file

vim /lib/systemd/system/heketi.service
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
User=heketi
ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target

On the node running heketi, set up passwordless SSH logins to the other nodes

ssh-keygen -m PEM -t rsa -b 4096 -q -f /etc/heketi/heketi_key -N ''
ssh-copy-id -i /etc/heketi/heketi_key.pub root@master1
ssh-copy-id -i /etc/heketi/heketi_key.pub root@node02
chown heketi:heketi /etc/heketi/heketi_key*
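
Before starting heketi it is worth confirming that key-based login works; hostname here is just a harmless test command:

ssh -i /etc/heketi/heketi_key root@master1 hostname
ssh -i /etc/heketi/heketi_key root@node02 hostname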

Edit the heketi.json file

cp ./heketi.json /etc/heketi
vim /etc/heketi/heketi.json

The main settings to change: executor, keyfile, user, port, fstab

    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab",
      "backup_lvm_metadata": false,
      "debug_umount_failures": true,
      "lvm_wrapper": ""
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab",
      "mountopts": "Optional: Specify brick mount options.  Default is rw,inode64,noatime,nouuid",
      "pv_data_alignment": "Optional: Specify PV data alignment size. Default is 256K",
      "vg_physicalextentsize": "Optional: Specify VG physical extent size. Default is 4MB",
      "lv_chunksize": "Optional: Specify LV chunksize. Default is 256K",
      "backup_lvm_metadata": false,
      "_debug_umount_failures": "Optional: boolean to capture more details in case brick unmounting fails",
      "debug_umount_failures": true,
      "lvm_wrapper": ""
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

Note: part of the default configuration must be removed, otherwise the service will fail to start.

Start the heketi server and enable it at boot (reload systemd first so it picks up the new unit file):

systemctl daemon-reload
systemctl start heketi
systemctl enable heketi
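
Heketi exposes a /hello health endpoint, so a quick way to confirm the server is up (assuming the default port 8080 from heketi.json):

curl http://127.0.0.1:8080/hello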

2.2 Install the Heketi client on the master2 node

wget https://github.com/heketi/heketi/releases/download/v10.4.0/heketi-client-v10.4.0-release-10.linux.amd64.tar.gz
tar zxvf heketi-client-v10.4.0-release-10.linux.amd64.tar.gz && cd heketi-client
cp bin/heketi-cli /usr/local/bin

Edit the topology.json file according to the actual disks on the nodes. manage: the node hostname; storage: the node IP; name: the disk device

vim /etc/heketi/topology.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "master1"
                            ],
                            "storage": [
                                "192.168.152.10"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "master2"
                            ],
                            "storage": [
                                "192.168.152.20"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "node02"
                            ],
                            "storage": [
                                "192.168.152.30"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sda",
                            "destroydata": false
                        }
                    ]
                }
            ]
        }
    ]
}

Load the topology. The disks must be raw (do not partition or mount them); heketi configures glusterfs automatically and writes mount entries to /etc/fstab so the bricks are remounted at boot.

heketi-cli topology load --json=/etc/heketi/topology.json
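
heketi-cli locates the server through the -s/--server flag or the HEKETI_CLI_SERVER environment variable; if the command reports that no server is set, export the variable first (assuming heketi listens on 8080 on this node):

export HEKETI_CLI_SERVER=http://127.0.0.1:8080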

Note: errors may occur while loading the topology; fix them as below, then re-run the load

Problem 1: Unable to add device: Setup of device /dev/sdb failed (already initialized or contains data?): /bin/bash: line 1: pvcreate: command not found

Cause: the disk contains data, or the pvcreate command is not installed

Fix:

Reformat the raw disk on every node to clear the old data (alternatively, wipefs -a /dev/sdb removes the existing signatures):

mkfs -t ext3 /dev/sdb
lsblk   # inspect the disks

Install the lvm2 package (which provides pvcreate) on every node:

apt-get install lvm2

Problem 2: Unable to add device: Setup of device /dev/sda failed (already initialized or contains data?): WARNING: xfs signature detected on /dev/sdb at offset 0. Wipe it? [y/n]: [n]

Cause: when heketi runs pvcreate, the wipe prompt defaults to n, so the load fails.

Fix:

Run pvcreate manually on every node and answer y. This only needs to be done once; afterwards heketi's pvcreate runs without prompting:

pvcreate /dev/sdb
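
Once the load succeeds, the resulting cluster can be inspected through heketi:

heketi-cli cluster list
heketi-cli topology info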

2.3 Install the glusterfs client

Note: the glusterfs client is what lets k8s nodes mount volumes from the gluster nodes. Here the k8s nodes and the gluster nodes are the same machines, so this step can be skipped.

wget -O - https://download.gluster.org/pub/gluster/glusterfs/11/rsa.pub | apt-key add -
echo deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/11/LATEST/Debian/11/amd64/apt bullseye main > /etc/apt/sources.list.d/gluster.list
apt-get update
apt-get install glusterfs-client
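
A quick check that the client tools landed:

glusterfs --version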

2.4 Create the storage class in k8s and use it from a Deployment and other resources

Create the StorageClass, pointing it at heketi

gluster-storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-dyn
provisioner: kubernetes.io/glusterfs
parameters:
  # URL of the heketi service
  resturl: "http://master2:8080"
  # whether to enable authentication against heketi
  restauthenabled: "false"
  # volume type; only needed with two nodes, ignore with 3 or more nodes
  #volumetype: "replicate:2"
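
Apply the StorageClass and confirm it is registered (using the file name above):

kubectl apply -f gluster-storageclass.yaml
kubectl get storageclass gluster-dyn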

Create a PVC that references the StorageClass

gluster-storageclass-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-dyn-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-dyn
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
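
Apply the claim and check that it reaches the Bound state; heketi should create the backing gluster volume automatically:

kubectl apply -f gluster-storageclass-pvc.yaml
kubectl get pvc gluster-dyn-pvc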

Create an nginx Deployment that uses the PVC

nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: gluster-dyn-pvc
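
Apply the Deployment and wait for the pod to start:

kubectl apply -f nginx.yaml
kubectl get pods -l app=nginx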

2.5 Verify: create a file inside the container; the same file is generated on all three nodes

Enter the nginx container and create a file under /usr/share/nginx/html:

docker exec -ti c9284fb9119c bash
echo 123 > /usr/share/nginx/html/index.html
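
On clusters that don't use the docker runtime, the same can be done through kubectl; the pod name below is a placeholder:

kubectl exec -it <nginx-pod-name> -- bash -c 'echo 123 > /usr/share/nginx/html/index.html'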

Check the mounted directories: the created file is present under the brick directory on all three nodes.

3. Configure glusterfs without heketi

Note: without heketi's dynamic provisioning, you create a Service and Endpoints for glusterfs, and the PV connects to the Endpoints

3.1 Configure the volume

Note: these are the volume types in GlusterFS; see the official documentation for details

  • Distributed volume (the default, DHT): each file is placed whole on one server node, chosen by hash.
  • Replicated volume (AFR): created with replica x; each file is copied to x nodes.
  • Striped volume: created with stripe x; files are split into chunks stored across x nodes (similar to RAID 0).
  • Distributed striped volume: needs at least 4 servers; created with stripe 2 on 4 nodes; a combination of DHT and striping.
  • Distributed replicated volume: needs at least 4 servers; created with replica 2 on 4 nodes; a combination of DHT and AFR.
  • Striped replicated volume: needs at least 4 servers; created with stripe 2 replica 2 on 4 nodes; a combination of striping and AFR.
  • All three combined: needs at least 8 servers; stripe 2 replica 2, with every 4 nodes forming one group.

Create the brick directory on every node:

mkdir -p /opt/gfs_data

Create the replicated volume

Note: a three-way replica is used here; if you have no data-safety requirements you can use a distributed volume instead. Because the bricks sit on the system disk, force must be appended.

gluster volume create nginx-volume replica 3 master1:/opt/gfs_data master2:/opt/gfs_data node02:/opt/gfs_data force

Start the replicated volume:

gluster volume start nginx-volume

# show volume information
gluster volume info nginx-volume
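
Before wiring the volume into k8s, it can be mounted by hand from any node that has the glusterfs client, to confirm it works (the mount point is arbitrary):

mount -t glusterfs master1:/nginx-volume /mnt
umount /mnt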

3.2 Install the glusterfs client; see 2.3

3.3 Create the Endpoints (the port value is required by the schema, but the glusterfs mount does not use it, so any valid port number works)

vim glusterfs-endpoints.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "192.168.152.10"
        },
        {
          "ip": "192.168.152.20"
        },
        {
          "ip": "192.168.152.30"
        }
      ],
      "ports": [
        {
          "port": 1990
        }
      ]
    }
  ]
}
kubectl apply -f glusterfs-endpoints.json
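
Confirm that the Endpoints object lists all three gluster nodes:

kubectl get endpoints glusterfs-cluster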

3.4 Create the Service

vim glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1990}
    ]
  }
}
kubectl apply -f glusterfs-service.json

3.5 Create the PV and PVC

vim glusterfs-pv-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "nginx-volume"
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
kubectl apply -f glusterfs-pv-pvc.yaml
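
Confirm that the PV and the claim bind to each other:

kubectl get pv gluster-dev-volume
kubectl get pvc glusterfs-nginx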

3.6 Create an nginx Deployment that uses the PVC

vim nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx-test
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: glusterfs-nginx
kubectl apply -f nginx.yaml
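
Wait until the pod is Running before testing:

kubectl get pods -l app=nginx-test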

3.7 Verify: create a file inside the container; the same file appears under /opt/gfs_data on every node. See 2.5 for how to get into the container.

This completes the GlusterFS distributed filesystem deployment; decide from your actual needs whether to use heketi.
