RAID (Redundant Array of Independent Disks)

RAID technology combines multiple hard disks into a single array with larger capacity and better reliability. Data is split into segments and distributed across the different physical disks, so striped reads and writes improve the array's overall performance, while copies of important data are kept in sync on separate physical disks, providing effective redundancy against disk failure.

Comparison of RAID 0, 1, 5, and 10

| RAID level | Minimum disks | Usable capacity | Read/write performance | Safety | Characteristics |
|---|---|---|---|---|---|
| 0 | 2 | n | n | Low | Maximizes capacity and speed; if any one disk fails, all data is lost. |
| 1 | 2 | n/2 | n | High | Maximizes safety; as long as one disk in the array survives, data is unaffected. |
| 5 | 3 | n-1 | n-1 | Medium | Balances cost against capacity, speed and safety; one disk may fail without data loss. |
| 10 | 4 | n/2 | n/2 | High | Combines the advantages of RAID 1 and RAID 0 for both speed and safety; up to half of the disks may fail (as long as both disks of a mirror pair do not fail together) without data loss. |
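
As a quick worked example of the capacity column (illustrative arithmetic, not from the original article): with four 20 GB disks (n = 4, the disk size used in the examples below), RAID 0 provides 4 × 20 GB = 80 GB, RAID 5 provides (4 − 1) × 20 GB = 60 GB, and RAID 10 provides 4/2 × 20 GB = 40 GB, which matches the roughly 40 GB array size that mdadm reports for the RAID 10 array built later in this article.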

Deploying a disk array

The mdadm command is used to create, adjust, monitor, and manage RAID devices. Its name is short for "multiple devices admin", and its basic syntax is "mdadm [options] <device name>".

Commonly used mdadm parameters and their functions are listed below.

Parameter  Function
-a  Add a device to the array
-n  Specify the number of devices
-l  Specify the RAID level
-C  Create an array
-v  Show verbose output
-f  Mark a device as faulty (simulate failure)
-r  Remove a device
-Q  Show brief (summary) information
-D  Show detailed information
-S  Stop the RAID array
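
A minimal sketch of how these options are typically combined (here /dev/md0, /dev/sdX and /dev/sdY are placeholder names, not devices from this article's environment):

mdadm -Cv /dev/md0 -n 2 -l 1 /dev/sdX /dev/sdY   # -C create, -v verbose, -n number of members, -l RAID level
mdadm -Q /dev/md0                                # brief summary
mdadm -D /dev/md0                                # detailed status
mdadm /dev/md0 -f /dev/sdX                       # mark a member as faulty
mdadm /dev/md0 -r /dev/sdX                       # remove it from the array
mdadm /dev/md0 -a /dev/sdX                       # add a (new) disk back
mdadm -S /dev/md0                                # stop the array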

Creating a RAID 10 array

1. Create the RAID array

[root@superwu ~]# mdadm -Cv /dev/md/hoho -n 4 -l 10 /dev/sd[b-e]  // Create the hoho array under /dev/md (the md directory does not exist by default and is created automatically along with the array; named arrays like this live under /dev/md), using disks sdb through sde (the disks can also be written out individually: /dev/sdb /dev/sdc /dev/sdd /dev/sde)
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/hoho started.
[root@superwu md]# mdadm -Q /dev/md/hoho // Building the array takes a few minutes; -Q shows a brief summary, -D shows full details.
/dev/md/hoho: 39.97GiB raid10 4 devices, 0 spares. Use mdadm --detail for more detail.

2. Format the array

Note: building the array takes time. Wait a few minutes, or confirm that the array state is normal, before formatting it.
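
One way to confirm that the initial sync has finished (a general Linux/mdadm check, not part of the original transcript) is to watch /proc/mdstat:

cat /proc/mdstat              # shows a progress line such as "resync = ...%" while the array is still building
watch -n 2 cat /proc/mdstat   # refresh every 2 seconds; once the progress line disappears the array is in a steady state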

[root@superwu md]# mkfs.ext4 /dev/md/hoho
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: face2652-48c3-4883-9260-12fa11976a97
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624
 
Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done  
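
Optionally, after formatting you can record the filesystem UUID with blkid and use a UUID= entry in /etc/fstab instead of the /dev/md/hoho name; this is a common precaution (not shown in the original steps) because md device numbering such as md127 is not guaranteed to be stable across reboots:

blkid /dev/md/hoho                        # prints the ext4 filesystem UUID
# example fstab form, using whatever UUID blkid reported:
# UUID=<filesystem-uuid> /hoho ext4 defaults 0 0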

3. Mount the array

[root@superwu md]# mkdir /hoho   // Create the mount point
[root@superwu md]# mount /dev/md/hoho /hoho  
[root@superwu md]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M  9.6M  974M   1% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   17G  3.9G   14G  23% /
/dev/sr0               6.7G  6.7G     0 100% /media/cdrom
/dev/sda1             1014M  152M  863M  15% /boot
tmpfs                  197M   16K  197M   1% /run/user/42
tmpfs                  197M  3.5M  194M   2% /run/user/0
/dev/md127              40G   49M   38G   1% /hoho
[root@superwu md]# echo "/dev/md/hoho /hoho ext4 defaults 0 0" >> /etc/fstab  // Append the mount entry to the config file so the array is mounted automatically at boot.
[root@superwu md]# cat /etc/fstab
 
#
# /etc/fstab
# Created by anaconda on Tue Jan 11 03:26:57 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=d7f53471-c95f-44f2-aafe-f86bd5ecebd7 /boot                   xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/cdrom             /media/cdrom             iso9660 defaults        0 0
/dev/md/hoho /hoho ext4 defaults 0 0
[root@superwu md]# mdadm -D /dev/md/hoho   // Show detailed information about the array
/dev/md/hoho:
           Version : 1.2
     Creation Time : Wed Feb  9 18:35:22 2022
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 18:53:54 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
 
            Layout : near=2
        Chunk Size : 512K
 
Consistency Policy : resync
 
              Name : superwu.10:hoho  (local to host superwu.10)
              UUID : 6af87cfc:47705900:20fcf416:eeac2363
            Events : 17
 
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
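
To confirm that the /etc/fstab entry added above is valid without rebooting (a standard check, not part of the original transcript), unmount the array and remount everything listed in fstab:

umount /hoho     # detach the array from the mount point
mount -a         # remount everything in /etc/fstab; an error here means the fstab line is wrong
df -h /hoho      # verify the array is mounted again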

Handling a failed disk in the array

If a disk in the array fails, it must be replaced promptly; otherwise there is a risk of data loss.

1. Simulate a disk failure

In a virtual machine environment the disk failure has to be simulated.

[root@superwu md]# mdadm /dev/md/hoho -f /dev/sdc // Simulate the failure of one disk
mdadm: set /dev/sdc faulty in /dev/md/hoho
[root@superwu md]# mdadm -D /dev/md/hoho
/dev/md/hoho:
           Version : 1.2
     Creation Time : Wed Feb  9 18:35:22 2022
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 19:12:37 2022
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0
 
            Layout : near=2
        Chunk Size : 512K
 
Consistency Policy : resync
 
              Name : superwu.10:hoho  (local to host superwu.10)
              UUID : 6af87cfc:47705900:20fcf416:eeac2363
            Events : 19
 
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       -       0        0        1      removed
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
 
       1       8       32        -      faulty   /dev/sdc

2. Remove the failed disk from the array

[root@superwu md]# mdadm /dev/md/hoho -r /dev/sdc  // -r removes the disk from the array
mdadm: hot removed /dev/sdc from /dev/md/hoho
[root@superwu md]# mdadm -D /dev/md/hoho
/dev/md/hoho:
           Version : 1.2
     Creation Time : Wed Feb  9 18:35:22 2022
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 19:16:27 2022
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0
 
            Layout : near=2
        Chunk Size : 512K
 
Consistency Policy : resync
 
              Name : superwu.10:hoho  (local to host superwu.10)
              UUID : 6af87cfc:47705900:20fcf416:eeac2363
            Events : 20
 
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       -       0        0        1      removed
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
// The /dev/md/hoho array no longer contains the /dev/sdc disk.

3. Replace the disk

Note: in production environments, servers usually use a hardware RAID card; for RAID 1 and RAID 10 the rebuild happens automatically.

[root@superwu md]# mdadm /dev/md/hoho -a /dev/sdc  // After replacing the disk, add the new disk to the array; -a adds a device
mdadm: added /dev/sdc
[root@superwu md]# mdadm -D /dev/md/hoho
/dev/md/hoho:
           Version : 1.2
     Creation Time : Wed Feb  9 18:35:22 2022
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 19:25:14 2022
             State : clean, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1
 
            Layout : near=2
        Chunk Size : 512K
 
Consistency Policy : resync
 
    Rebuild Status : 15% complete
 
              Name : superwu.10:hoho  (local to host superwu.10)
              UUID : 6af87cfc:47705900:20fcf416:eeac2363
            Events : 24
 
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       4       8       32        1      spare rebuilding   /dev/sdc   // The RAID is now rebuilding; this takes time, depending on the disk size.
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
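
Rebuild time depends on disk size and on the kernel's md rebuild speed limits. If the rebuild seems too slow (or is saturating the disks), these limits can be inspected and adjusted; they are standard md kernel tunables, not something the original article configures:

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max   # current limits in KiB/s
sysctl -w dev.raid.speed_limit_min=50000   # raise the guaranteed minimum rebuild speed to about 50 MB/s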

Disk array with a hot-spare disk

RAID 5 with a hot spare

The idea behind a hot-spare disk is to keep a sufficiently large disk attached to the array that normally sits idle; as soon as a disk in the RAID array fails, the spare automatically takes its place.

Note: the spare disk should be at least as large as the RAID member disks.
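
A spare does not have to be defined at creation time with -x: a disk added with -a to an array that already has its full complement of active members becomes a spare. A minimal sketch (device names are placeholders, not from this article's environment):

mdadm /dev/md0 -a /dev/sdf   # on a healthy, complete array the new disk is listed as a spare
mdadm -D /dev/md0            # the "Spare Devices" counter increases and the disk shows state "spare"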

1. Create the array

[root@superwu ~]# mdadm -Cv /dev/md/hehe -n 3 -l 5 -x 1 /dev/sdb /dev/sdc /dev/sdd /dev/sde   // -x 1 specifies one hot-spare disk
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/hehe started.
[root@superwu ~]# mdadm -D /dev/md/hehe
/dev/md/hehe:
           Version : 1.2
     Creation Time : Wed Feb  9 23:25:36 2022
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 23:25:43 2022
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 2
 
            Layout : left-symmetric
        Chunk Size : 512K
 
Consistency Policy : resync
 
    Rebuild Status : 12% complete
 
              Name : superwu.10:hehe  (local to host superwu.10)
              UUID : 2bb20f83:f96626bb:04d1ccd4:fc94809e
            Events : 2
 
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
 
       3       8       64        -      spare   /dev/sde

2. Format the array

[root@superwu ~]# mkfs.ext4 /dev/md/hehe
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 10477056 4k blocks and 2621440 inodes
Filesystem UUID: 65b5dd45-b4a7-4db8-b5d9-1331a95b4fba
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624
 
Allocating group tables: done                           
Writing inode tables: done                           
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done  

3. Mount the array

[root@superwu ~]# mkdir /opt/hehe
[root@superwu ~]# mount /dev/md/hehe /opt/hehe/
[root@superwu ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M  9.6M  974M   1% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   17G  3.9G   14G  23% /
/dev/sr0               6.7G  6.7G     0 100% /media/cdrom
/dev/sda1             1014M  152M  863M  15% /boot
tmpfs                  197M   16K  197M   1% /run/user/42
tmpfs                  197M  3.5M  194M   2% /run/user/0
/dev/md127              40G   49M   38G   1% /opt/hehe
[root@superwu ~]# echo "/dev/md/hehe /opt/hehe ext4 defaults 0 0" >> /etc/fstab // Add to /etc/fstab for automatic mounting at boot

When a disk in the RAID array fails, the hot spare immediately and automatically takes its place.

[root@superwu ~]# mdadm /dev/md/hehe -f /dev/sdb  // Simulate a failure of the sdb disk
mdadm: set /dev/sdb faulty in /dev/md/hehe
[root@superwu ~]# mdadm -D /dev/md/hehe
/dev/md/hehe:
           Version : 1.2
     Creation Time : Wed Feb  9 23:25:36 2022
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 23:42:59 2022
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1
 
            Layout : left-symmetric
        Chunk Size : 512K
 
Consistency Policy : resync
 
    Rebuild Status : 1% complete
 
              Name : superwu.10:hehe  (local to host superwu.10)
              UUID : 2bb20f83:f96626bb:04d1ccd4:fc94809e
            Events : 20
 
    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde    // sde automatically takes over for the failed disk and starts syncing data
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
 
       0       8       16        -      faulty   /dev/sdb

Deleting the disk array

1. Unmount the array and mark the member disks as faulty

[root@superwu ~]# umount /opt/hehe   // Unmount the array
[root@superwu ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M  9.6M  974M   1% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   17G  3.9G   14G  23% /
/dev/sr0               6.7G  6.7G     0 100% /media/cdrom
/dev/sda1             1014M  152M  863M  15% /boot
tmpfs                  197M   16K  197M   1% /run/user/42
tmpfs                  197M  3.5M  194M   2% /run/user/0
[root@superwu ~]# mdadm /dev/md/hehe -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md/hehe
[root@superwu ~]# mdadm /dev/md/hehe -f /dev/sdc
mdadm: set /dev/sdc faulty in /dev/md/hehe
[root@superwu ~]# mdadm /dev/md/hehe -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md/hehe
[root@superwu ~]# mdadm /dev/md/hehe -f /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md/hehe
[root@superwu ~]# mdadm -D /dev/md/hehe
/dev/md/hehe:
           Version : 1.2
     Creation Time : Wed Feb  9 23:25:36 2022
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 23:52:31 2022
             State : clean, FAILED
    Active Devices : 0
    Failed Devices : 4
     Spare Devices : 0
 
            Layout : left-symmetric
        Chunk Size : 512K
 
Consistency Policy : resync
 
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed
 
       0       8       16        -      faulty   /dev/sdb
       1       8       32        -      faulty   /dev/sdc
       3       8       64        -      faulty   /dev/sde
       4       8       48        -      faulty   /dev/sdd

2. Remove the member disks

[root@superwu ~]# mdadm /dev/md/hehe -r /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md/hehe
[root@superwu ~]# mdadm /dev/md/hehe -r /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md/hehe
[root@superwu ~]# mdadm /dev/md/hehe -r /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md/hehe
[root@superwu ~]# mdadm /dev/md/hehe -r /dev/sde
mdadm: hot removed /dev/sde from /dev/md/hehe
[root@superwu ~]# mdadm -D /dev/md/hehe
/dev/md/hehe:
           Version : 1.2
     Creation Time : Wed Feb  9 23:25:36 2022
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 0
       Persistence : Superblock is persistent
 
       Update Time : Wed Feb  9 23:55:36 2022
             State : clean, FAILED
    Active Devices : 0
    Failed Devices : 0
     Spare Devices : 0
 
            Layout : left-symmetric
        Chunk Size : 512K
 
Consistency Policy : resync
 
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed

3. Stop the array

[root@superwu ~]# mdadm --stop /dev/md/hehe
mdadm: stopped /dev/md/hehe
[root@superwu ~]# ls -l /dev/md/hehe
ls: cannot access '/dev/md/hehe': No such file or directory
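
Two follow-up steps are usually needed after stopping the array, although the original walkthrough ends here: remove the /etc/fstab line added earlier (otherwise the next boot will try to mount a device that no longer exists), and clear the RAID superblocks on the member disks so they can be reused as ordinary disks:

sed -i '\|/dev/md/hehe|d' /etc/fstab                            # drop the auto-mount entry added earlier
mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde     # erase RAID metadata from the former member disks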