7.1.7 RAID Array + Hot Spare Disk

  A RAID 10 array can tolerate the failure of up to 50% of its member disks, but if both disks in the same RAID 1 mirror pair fail, the data is still lost. (RAID 10 is a combination of RAID 1 + RAID 0.)

  A hot spare disk can guard against this kind of failure. The core idea is to keep one sufficiently large disk idle; as soon as a disk in the RAID array fails, the spare automatically takes its place.

  Example:

  Create a RAID 5 array with one hot spare disk.

  mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde 

  • -n 3: number of disks used to build the RAID 5 array
  • -l 5: RAID level
  • -x 1: one hot spare disk


[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jun 20 05:46:58 2024
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Jun 20 05:47:10 2024
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : b88544ba:8e7ad810:a628dc47:464f180c
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde

  Format the array

[root@localhost ~]# mkfs.ext4 /dev/md0
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 1047040 4k blocks and 262144 inodes
Filesystem UUID: d84bc196-b0e6-4c14-b58c-332072b8a18a
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

 Mount

 vim /etc/fstab

 
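 The fstab entry itself is not shown above; a minimal entry consistent with the `df -h` output below (device `/dev/md0`, mount point `/RAID`, ext4) might look like this. The mount point is an assumption read back from the `df -h` listing and must exist before mounting:

```
# Create the mount point first (assumed to be /RAID, matching df -h below):
#   mkdir /RAID
# Then append this line to /etc/fstab:
/dev/md0    /RAID    ext4    defaults    0 0
```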

 mount -a  mounts all devices listed in /etc/fstab

[root@localhost ~]# mount -a
[root@localhost ~]# 
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             947M     0  947M   0% /dev
tmpfs                975M     0  975M   0% /dev/shm
tmpfs                975M  9.4M  966M   1% /run
tmpfs                975M     0  975M   0% /sys/fs/cgroup
/dev/mapper/cl-root   17G  4.6G   13G  27% /
/dev/sda1           1014M  229M  786M  23% /boot
tmpfs                195M   24K  195M   1% /run/user/0
/dev/sr0              11G   11G     0 100% /run/media/root/CentOS-8-5-2111-x86_64-dvd
/dev/md0             3.9G   16M  3.7G   1% /RAID

 As before, remove the disk /dev/sdb from the array by marking it faulty; querying the array again shows that the hot spare has automatically taken its place and begun rebuilding the data.

[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jun 20 05:46:58 2024
        Raid Level : raid5
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Jun 20 06:04:35 2024
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 42% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : b88544ba:8e7ad810:a628dc47:464f180c
            Events : 26

    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb
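 Once the rebuild completes, the faulty disk can be removed from the array and, after the physical disk has been replaced, added back as the new hot spare. A sketch using mdadm's manage mode, assuming the replacement device again appears as /dev/sdb:

```
mdadm /dev/md0 -r /dev/sdb    # remove the faulty disk from the array
mdadm /dev/md0 -a /dev/sdb    # add the replacement; it becomes the new spare
```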

 

posted @ 2024-06-19 22:06  ~技术小白