Creating RAID1 on Linux: A Hands-On Walkthrough

  1. RAID1, commonly called mirroring, requires at least two disks; every disk holds an identical copy of the data, providing redundancy.
  2. Read performance improves somewhat, while write performance is in theory the same as a single disk; in practice it drops slightly because the data must be written to every disk at once.
  3. Fault tolerance is the best of all RAID levels: the array keeps working as long as one disk is still healthy.
  4. Capacity utilization is the worst, only 50%, so the cost per usable gigabyte is the highest.
  5. RAID1 suits scenarios with very high data-safety requirements, such as storing database data files.

In this walkthrough we create a RAID1 array, format and mount it, create test files, simulate a disk failure, then remove the failed disk and re-add it as a hot spare.

  1. Add three 10G virtual disks and create one partition on each, with partition ID fd (the interactive partitioning is not shown; see the RAID0 article, or the sketch after the lsblk output below).
[root@localhost ~]# lsblk 
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   10G  0 disk  
sdb             8:16   0   10G  0 disk 
sdc             8:32   0   10G  0 disk 
sr0            11:0    1  7.3G  0 rom  
nvme0n1       259:0    0   80G  0 disk 
├─nvme0n1p1   259:1    0    1G  0 part /boot
└─nvme0n1p2   259:2    0   79G  0 part 
  ├─rhel-root 253:0    0   50G  0 lvm  /
  ├─rhel-swap 253:1    0    2G  0 lvm  [SWAP]
  └─rhel-home 253:2    0   27G  0 lvm  /home
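The lsblk output above shows the three blank 10G disks. Since the partitioning itself was skipped in the original, here is a minimal non-interactive sketch using parted as an illustrative alternative to the fdisk session referenced above; parted's "set 1 raid on" sets the same Linux RAID type as fdisk ID fd:
[root@localhost ~]# for d in a b c; do
>     parted -s /dev/sd$d mklabel msdos mkpart primary 1MiB 100% set 1 raid on
> done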
  • Verify the three partitioned 10G virtual disks:
[root@localhost ~]# fdisk -l |grep raid
/dev/sda1        2048 20971519 20969472  10G fd Linux raid autodetect
/dev/sdb1        2048 20971519 20969472  10G fd Linux raid autodetect
/dev/sdc1        2048 20971519 20969472  10G fd Linux raid autodetect
  2. Create the RAID1 array and add one hot-spare disk (see the note after the output for a non-interactive variant):
[root@localhost ~]# mdadm -C -v /dev/md1 -l1 -n2 /dev/sd{a,b}1 -x1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 10475520K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
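Note that mdadm stops to ask "Continue creating array?" because of the boot-metadata warning. For unattended scripts, mdadm's --run option suppresses the confirmation; a variant of the command above (the flag is an addition, not part of the original walkthrough):
[root@localhost ~]# mdadm -C -v /dev/md1 -l1 -n2 /dev/sd{a,b}1 -x1 /dev/sdc1 --run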
  3. Check the array status in /proc/mdstat; in the output, (S) marks the spare disk and [UU] means both mirror members are in sync:
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md1 : active raid1 sdc1[2](S) sdb1[1] sda1[0]
      10475520 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
  4. View the detailed information for the RAID1 array (a note on persisting this configuration follows the output):
[root@localhost ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 14:19:47 2020
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 14:20:40 2020
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : ansible:1  (local to host ansible)
              UUID : 219d1f6f:bf936912:6d94ec5c:a630c146
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

       2       8       33        -      spare   /dev/sdc1
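Before putting the array into service, it is common (though not part of the original walkthrough) to record it in /etc/mdadm.conf so it is assembled under the same name at boot; on a RHEL-style system:
[root@localhost ~]# mdadm -Ds >> /etc/mdadm.conf    # appends an ARRAY line containing the UUID shown above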
  5. Format /dev/md1 with an xfs filesystem:
[root@localhost ~]# mkfs.xfs /dev/md1 
meta-data=/dev/md1               isize=512    agcount=4, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2618880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# blkid /dev/md1 
/dev/md1: UUID="d3ff27dc-c136-4c1c-8539-382832122242" TYPE="xfs"
  6. Mount /dev/md1 (a persistence sketch follows the df output below):
[root@localhost ~]# mkdir /raid1
[root@localhost ~]# mount /dev/md1 /raid1/
[root@localhost ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               886M     0  886M   0% /dev
tmpfs                  903M     0  903M   0% /dev/shm
tmpfs                  903M   17M  886M   2% /run
tmpfs                  903M     0  903M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   50G  4.5G   46G   9% /
/dev/mapper/rhel-home   27G  225M   27G   1% /home
/dev/nvme0n1p1        1014M  173M  842M  18% /boot
tmpfs                  181M     0  181M   0% /run/user/0
/dev/md1                10G  104M  9.9G   2% /raid1
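The mount above does not survive a reboot. A sketch of making it persistent via /etc/fstab, using the filesystem UUID reported by blkid in step 5 (this step is an addition to the original walkthrough):
[root@localhost ~]# echo 'UUID=d3ff27dc-c136-4c1c-8539-382832122242 /raid1 xfs defaults 0 0' >> /etc/fstab
[root@localhost ~]# mount -a    # no errors means the new entry parses cleanly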
  7. Create some test files:
[root@localhost ~]# touch /raid1/file{1..10}
[root@localhost ~]# cd /raid1/
[root@localhost raid1]# ls
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
  8. Simulate a failure by marking /dev/sda1 as faulty:
[root@localhost ~]# mdadm -f /dev/md1 /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md1
  9. Check that the test files are still intact:
[root@localhost ~]# ls /raid1/
file1  file10  file2  file3  file4  file5  file6  file7  file8  file9
  10. Check the RAID1 status again. The spare /dev/sdc1 has automatically taken over, /dev/sda1 is marked (F) for faulty, and the array is already back to [2/2] [UU]:
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md1 : active raid1 sdc1[2] sdb1[1] sda1[0](F)
      10475520 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
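Here the rebuild onto the spare had already finished by the time the status was captured. On larger disks you would see a recovery progress line in /proc/mdstat; it can be followed live with the standard watch utility (not shown in the original):
[root@localhost ~]# watch -n1 cat /proc/mdstat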

[root@localhost ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 14:19:47 2020
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 14:33:04 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : ansible:1  (local to host ansible)
              UUID : 219d1f6f:bf936912:6d94ec5c:a630c146
            Events : 36

    Number   Major   Minor   RaidDevice State
       2       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1

       0       8        1        -      faulty   /dev/sda1
  11. Remove the failed disk /dev/sda1 from the array:
[root@ansible ~]# mdadm -r /dev/md1 /dev/sda1
mdadm: hot removed /dev/sda1 from /dev/md1
  • Check the RAID1 status again:
[root@ansible ~]# mdadm -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 14:19:47 2020
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 14:41:22 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : ansible:1  (local to host ansible)
              UUID : 219d1f6f:bf936912:6d94ec5c:a630c146
            Events : 37

    Number   Major   Minor   RaidDevice State
       2       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1
  12. Re-add /dev/sda1 to the array as a new hot spare:
[root@ansible ~]# mdadm -a /dev/md1 /dev/sda1
mdadm: added /dev/sda1
  • Check the RAID1 status again:
[root@ansible ~]# mdadm  -D /dev/md1 
/dev/md1:
           Version : 1.2
     Creation Time : Tue Dec 15 14:19:47 2020
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Dec 15 14:44:19 2020
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : ansible:1  (local to host ansible)
              UUID : 219d1f6f:bf936912:6d94ec5c:a630c146
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1

       3       8        1        -      spare   /dev/sda1
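As a closing note (not covered in the original walkthrough): if the array is ever to be dismantled, the usual teardown is to unmount it, stop it, and wipe the RAID superblocks on the members so the partitions can be reused:
[root@localhost ~]# umount /raid1
[root@localhost ~]# mdadm -S /dev/md1                         # stop (deactivate) the array
[root@localhost ~]# mdadm --zero-superblock /dev/sd{a,b,c}1   # erase RAID metadata from each member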
