Deploying a RAID 5 Array + Hot Spare on a Linux System

A RAID 5 array improves disk read/write speed and provides data redundancy by storing parity information alongside the data. However, a RAID 5 array tolerates the failure of only one disk, so there is still a risk of data loss.

A RAID 10 array also improves read/write speed and provides redundancy by combining RAID 0 with RAID 1. It tolerates the failure of one disk in each RAID 1 mirror, but if both disks of the same mirror pair fail, the data is lost, so some risk remains there as well.

The core idea of a RAID array + hot spare is to set aside one sufficiently large disk that normally sits idle; as soon as a disk in the array fails, the spare automatically takes its place, which adds another layer of protection.

 

Here we deploy a RAID 5 array with a hot spare.

1. A RAID 5 array needs at least three disks and the hot spare needs one more, so four disks are required in total (a quick check is sketched below).
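
Before creating the array, it can be worth confirming that the four disks are actually visible. This is an illustrative check, not part of the original transcript; the device names /dev/sdb through /dev/sde follow this example and may differ on your system:

lsblk    ## sdb, sdc, sdd and sde should appear as raw disks with no partitions or mount points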

 

 

2. Create the RAID 5 array with the hot spare

[root@PC1linuxprobe dev]# mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde  ## -n 3: three disks make up the RAID 5 array, -l 5: RAID level 5, -x 1: one hot spare
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954624K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov  8 13:24:23 2020
     Raid Level : raid5
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov  8 13:26:15 2020
          State : clean 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 2e07ebb9:9858dc73:969f08aa:992df027
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde
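
Besides mdadm -D, the kernel's own status file can be consulted at any time; an illustrative check (output omitted here, not part of the original transcript):

cat /proc/mdstat    ## md0 should be listed as raid5 with three active members and the spare /dev/sde marked with (S)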

 

3. Format the RAID 5 array with the ext4 filesystem

[root@PC1linuxprobe dev]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477312 blocks
523865 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   
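
Before mounting, an illustrative way to confirm that /dev/md0 now carries an ext4 filesystem is blkid:

blkid /dev/md0    ## should report TYPE="ext4" together with the filesystem UUID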

 

4. Mount the array

[root@PC1linuxprobe dev]# mkdir /RAID5
[root@PC1linuxprobe dev]# mount /dev/md0 /RAID5/
[root@PC1linuxprobe dev]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M  140K  994M   1% /dev/shm
tmpfs                  994M  8.9M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
/dev/md0                40G   49M   38G   1% /RAID5
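
A simple read/write test confirms the mounted array is usable; a minimal sketch (the file name test.txt is arbitrary):

echo "raid5 test" > /RAID5/test.txt    ## write a small file onto the array
cat /RAID5/test.txt                    ## read it back to confirm the mount works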

 

5. Add the mount to /etc/fstab so it is mounted automatically at boot

[root@PC1linuxprobe dev]# echo -e "/dev/md0\t/RAID5\text4\tdefaults\t0\t0" >> /etc/fstab 
[root@PC1linuxprobe dev]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Thu Nov  5 15:23:01 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        1 1
UUID=0ba20ae9-dd51-459f-ac48-7f7e81385eb8 /boot                   xfs     defaults        1 2
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/md0    /RAID5    ext4    defaults    0    0
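
Before relying on a reboot, the new entry can be exercised with mount -a, which mounts everything in /etc/fstab that is not already mounted; a minimal check:

umount /RAID5       ## unmount first so the fstab entry is actually used
mount -a            ## no error message means the new line is valid
df -h | grep md0    ## /dev/md0 should be mounted on /RAID5 again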

 

6. Simulate a failure by marking one disk of the RAID 5 array as faulty, and watch whether the spare /dev/sde automatically takes its place

[root@PC1linuxprobe dev]# mdadm /dev/md0 -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md0
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov  8 13:24:23 2020
     Raid Level : raid5
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov  8 13:39:08 2020
          State : clean, degraded, recovering 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 20% complete

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 2e07ebb9:9858dc73:969f08aa:992df027
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       3       8       64        1      spare rebuilding   /dev/sde
       4       8       48        2      active sync   /dev/sdd

       1       8       32        -      faulty   /dev/sdc
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov  8 13:24:23 2020
     Raid Level : raid5
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov  8 13:39:53 2020
          State : clean, degraded, recovering 
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 65% complete

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 2e07ebb9:9858dc73:969f08aa:992df027
         Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       3       8       64        1      spare rebuilding   /dev/sde
       4       8       48        2      active sync   /dev/sdd

       1       8       32        -      faulty   /dev/sdc
[root@PC1linuxprobe dev]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Nov  8 13:24:23 2020
     Raid Level : raid5
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Nov  8 13:40:34 2020
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : PC1linuxprobe:0  (local to host PC1linuxprobe)
           UUID : 2e07ebb9:9858dc73:969f08aa:992df027
         Events : 47

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       3       8       64        1      active sync   /dev/sde
       4       8       48        2      active sync   /dev/sdd

       1       8       32        -      faulty   /dev/sdc
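
Once the rebuild has finished, the faulty disk can be detached from the array and, after being replaced, added back as the new hot spare. The sketch below uses the standard mdadm options -r (remove) and -a (add) and assumes the replacement shows up as /dev/sdc again:

mdadm /dev/md0 -r /dev/sdc    ## remove the faulty member from md0
mdadm /dev/md0 -a /dev/sdc    ## add the replacement disk; it joins md0 as the new spare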

 

Summary: deploying RAID 5 + hot spare

  • Prepare four disks (three for the RAID 5 array, one as the hot spare)
  • Create the RAID 5 array + hot spare: mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4
  • Format the array with an ext4 filesystem
  • Mount it and add it to /etc/fstab so it mounts automatically at boot (an optional configuration note follows below)
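
As an optional extra not shown in the steps above, the array definition can also be recorded in mdadm's configuration file so it is always assembled under the same name at boot; a commonly used sketch:

mdadm -D --scan >> /etc/mdadm.conf    ## append the ARRAY line for /dev/md0
cat /etc/mdadm.conf                   ## verify the entry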