Installing and Using GlusterFS on CentOS 7

GlusterFS aggregates various storage servers over Ethernet or InfiniBand RDMA interconnects into one large parallel network file system. It is free software, with some parts licensed under the GNU General Public License (GPL) v3 while others are dual licensed under either GPL v2 or the Lesser General Public License (LGPL) v3. GlusterFS is based on a stackable user-space design.

GlusterFS has client and server components. Servers are typically deployed as storage bricks, with each server running a glusterfsd daemon to export a local file system as a volume. The glusterfs client process, which connects to servers with a custom protocol over TCP/IP, InfiniBand or Sockets Direct Protocol, creates composite virtual volumes from multiple remote servers using stackable translators. By default, files are stored whole, but striping of files across multiple remote volumes is also supported.

The final volume may then be mounted by the client host using GlusterFS's own native protocol via the FUSE mechanism, using the NFS v3 protocol via a built-in server translator, or accessed through the gfapi client library. Native-protocol mounts may then be re-exported, e.g. via the kernel NFSv4 server, Samba, or the object-based OpenStack Storage (Swift) protocol using the “UFO” (Unified File and Object) translator.
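For example, a client that only has standard NFS tooling can reach a volume through the built-in NFS server instead of FUSE. A minimal sketch, assuming the vol0 volume created later in this post and that Gluster's NFS translator is enabled on the server:

# mount -t nfs -o vers=3,mountproto=tcp glusterfs1:/vol0 /mnt/gluster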

I am using two CentOS 7 nodes with the hostnames glusterfs1 and glusterfs2.

[root@glusterfs1 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
[root@glusterfs2 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

Add the following to /etc/hosts on both servers:

192.168.254.133 glusterfs1
192.168.254.134 glusterfs2

Install GlusterFS on both nodes:

# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum -y install glusterfs glusterfs-fuse glusterfs-server
# systemctl start glusterd
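The management daemon is not enabled at boot by the steps above; enabling it and checking its state uses the standard systemd commands:

# systemctl enable glusterd
# systemctl status glusterd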
For GlusterFS it is important to set up identical partitions on both nodes. I will use /dev/sdb1, 1 GB in size (I am using VMware/VirtualBox in this example).
fdisk /dev/sdb
Type ‘n’ for a new partition, choose ‘p’ for primary, accept the defaults for the remaining prompts, then ‘w’ to write the partition table to disk.
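If you would rather not walk through the wizard on every node, parted can create the same single primary partition non-interactively. A minimal sketch, assuming /dev/sdb is empty:

# parted -s /dev/sdb mklabel msdos mkpart primary ext4 1MiB 100%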
Create the file system:
mkfs.ext4 /dev/sdb1
Create the mount point, mount the partition, then create the brick directory on both machines (create the brick directory after mounting, otherwise it ends up hidden under the new mount):
mkdir -p /data/gluster
mount /dev/sdb1 /data/gluster
mkdir -p /data/gluster/brick
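A quick sanity check that the partition landed where the brick expects it:

# df -h /data/gluster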

You can add this to /etc/fstab so the mount comes back on the next reboot.

/dev/sdb1 /data/gluster ext4 defaults 1 2
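Referencing the filesystem by UUID is more robust than by device name, since /dev/sdb1 can move around across reboots. Look up the UUID with blkid and substitute your own value (the UUID below is a placeholder):

# blkid /dev/sdb1
UUID=<your-uuid> /data/gluster ext4 defaults 1 2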

Add iptables rules for GlusterFS:

-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp -s 192.168.254.0/24 --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 24007 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 38465:38469 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp -s 192.168.254.0/24 --dport 49152 -j ACCEPT
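CentOS 7 uses firewalld by default rather than a raw iptables configuration; if you kept firewalld, roughly equivalent port-based rules (same ports as above, run on both nodes) would be:

# firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
# firewall-cmd --permanent --add-port=2049/tcp --add-port=24007/tcp
# firewall-cmd --permanent --add-port=38465-38469/tcp --add-port=49152/tcp
# firewall-cmd --reload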

Using GlusterFS

With glusterfs2 in glusterfs1's hosts file, I probed the peers and tested the configuration:

[root@glusterfs1 ~]# gluster peer probe glusterfs2
peer probe: success.
[root@glusterfs2 ~]# gluster peer probe glusterfs1
peer probe: success. Host glusterfs1 port 24007 already in peer list
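Either node can also confirm the relationship with gluster peer status, which lists the other peers and their connection state:

# gluster peer status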
At this point I can check the storage pool:
[root@glusterfs1 glusterfs]# gluster pool list
UUID Hostname State
4cf47688-74ba-4c5b-bf3f-3270bb9a4871 glusterfs2 Connected
a3ce0329-35d8-4774-a061-148a735657c4 localhost Connected
[root@glusterfs1 ~]# gluster volume status
No volumes present
Create a gluster volume and test replication:
[root@glusterfs1 ~]# gluster
gluster> volume create vol0 rep 2 transport tcp glusterfs1:/data/gluster/brick glusterfs2:/data/gluster/brick force
volume create: vol0: success: please start the volume to access data
gluster>
If volume creation fails complaining that the brick is already part of a volume, run # setfattr -x trusted.glusterfs.volume-id /data/gluster/brick on each node and restart glusterd.
gluster> volume start vol0
volume start: vol0: success
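Before mounting, volume info is a quick way to confirm the replica count and brick list (run inside the gluster shell, or as gluster volume info vol0 from a normal prompt):

gluster> volume info vol0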

Create a mount point and mount the volume on both nodes:

[root@glusterfs1 ~]# mkdir -p /mnt/gluster
[root@glusterfs1 ~]# mount -t glusterfs glusterfs1:/vol0 /mnt/gluster/
[root@glusterfs2 ~]# mkdir -p /mnt/gluster
[root@glusterfs2 ~]# mount -t glusterfs glusterfs1:/vol0 /mnt/gluster/
[root@glusterfs1 ~]# cp /var/log/secure /mnt/gluster/
The content is automatically replicated between the nodes:
[root@glusterfs1 ~]# ls /mnt/gluster/
secure
[root@glusterfs2 ~]# ls /mnt/gluster/
secure
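To make the client mount persistent too, an /etc/fstab line like the following works; backupvolfile-server (assuming your glusterfs-fuse version supports the option) lets the client fetch the volume file from glusterfs2 when glusterfs1 is down at mount time:

glusterfs1:/vol0 /mnt/gluster glusterfs defaults,_netdev,backupvolfile-server=glusterfs2 0 0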