Repairing a failed disk in a RAID10 array on a Linux system

The purposes of deploying a RAID10 array on a Linux system are:

to improve disk read/write speed (I/O performance);

to improve data safety (redundancy through mirrored copies).

When a disk in a Linux RAID10 array is found to be damaged and can no longer be used, mark it as failed and remove it with the mdadm command (the array remains usable during this time), then swap in a new disk.

1. View the details of the current RAID10 array

[root@linuxprobe /]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Oct 28 21:58:25 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Oct 28 22:38:56 2020
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : linuxprobe.com:0  (local to host linuxprobe.com)
           UUID : 468018e0:1f90d057:a9944b8e:ab4e3e85
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
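
For context, an array laid out like the one above is typically created with mdadm's create mode and then formatted before being mounted at /RAID. This is only a sketch: the device names match the listing above, but the filesystem type (ext4) is an assumption, since the original setup is not shown here.

mdadm -Cv /dev/md0 -l 10 -n 4 /dev/sdb /dev/sdc /dev/sdd /dev/sde  ## create a 4-member RAID10 array (near layout by default)
mkfs.ext4 /dev/md0                                                 ## assumed filesystem; format before mounting at /RAID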

 

2. Suppose the disk /dev/sdc has failed; first mark it as faulty so it is dropped from the active array

[root@linuxprobe /]# mdadm /dev/md0 -f /dev/sdc  ## mark /dev/sdc as faulty
mdadm: set /dev/sdc faulty in /dev/md0
[root@linuxprobe /]# mdadm -D /dev/md0  ## check the array again: /dev/sdc is now faulty and has been dropped from the active set
/dev/md0:
        Version : 1.2
  Creation Time : Wed Oct 28 21:58:25 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Oct 28 22:50:22 2020
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : linuxprobe.com:0  (local to host linuxprobe.com)
           UUID : 468018e0:1f90d057:a9944b8e:ab4e3e85
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       0        0        1      removed
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

       1       8       32        -      faulty   /dev/sdc
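
Note that "mdadm /dev/md0 -f" only marks the member as faulty, which is why /dev/sdc still appears above, attached to md0 in the faulty state. If you want to detach it from the array metadata before physically pulling the disk, mdadm's manage mode also has an explicit remove; a minimal sketch:

mdadm /dev/md0 -r /dev/sdc  ## remove the faulty member from md0 (optional here, since the disk is replaced and the system rebooted anyway)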

3. In a RAID10 array, one failed disk inside a RAID1 mirror pair does not prevent the array from being used. Once a new disk has been bought, it can simply be swapped in with the mdadm command; in the meantime files can still be created and deleted in the /RAID directory as usual, as the quick check below illustrates.
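
A minimal check that the degraded array is still usable, assuming it is still mounted at /RAID (the file name is just an example):

echo "still writable" > /RAID/degraded-test.txt  ## writes succeed with one member failed
cat /RAID/degraded-test.txt                      ## reads succeed as well
rm /RAID/degraded-test.txt                       ## clean up the test file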

4. Reboot the system.

5. Unmount the RAID10 array

[root@linuxprobe ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M   84K  994M   1% /dev/shm
tmpfs                  994M  8.9M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/md0                40G   49M   38G   1% /RAID
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
[root@linuxprobe ~]# umount /RAID
[root@linuxprobe ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M   84K  994M   1% /dev/shm
tmpfs                  994M  8.9M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64

 

6. Add the new disk, assumed here to be /dev/sdc

[root@linuxprobe ~]# mdadm /dev/md0 -a /dev/sdc  ## add the new disk to md0; the array begins rebuilding onto it automatically
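
Once the new disk is added, the kernel resynchronizes it from its mirror partner. Progress can be watched, for example, via /proc/mdstat; until the resync finishes, mdadm -D typically reports a state along the lines of "clean, degraded, recovering":

cat /proc/mdstat   ## lists the md0 members and shows a recovery progress bar while rebuilding
mdadm -D /dev/md0  ## detailed view, including the rebuild status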

 

7. View the RAID10 array details again

[root@linuxprobe ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Oct 28 21:58:25 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Wed Oct 28 23:02:56 2020
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : linuxprobe.com:0  (local to host linuxprobe.com)
           UUID : 468018e0:1f90d057:a9944b8e:ab4e3e85
         Events : 50

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       4       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde

 

8. Remount the array

[root@linuxprobe ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M   84K  994M   1% /dev/shm
tmpfs                  994M  8.9M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
[root@linuxprobe ~]# mount -a  ## mount every filesystem listed in /etc/fstab
[root@linuxprobe ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root   18G  2.9G   15G  17% /
devtmpfs               985M     0  985M   0% /dev
tmpfs                  994M   84K  994M   1% /dev/shm
tmpfs                  994M  8.9M  986M   1% /run
tmpfs                  994M     0  994M   0% /sys/fs/cgroup
/dev/sda1              497M  119M  379M  24% /boot
/dev/sr0               3.5G  3.5G     0 100% /run/media/root/RHEL-7.0 Server.x86_64
/dev/md0                40G   49M   38G   1% /RAID
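
mount -a only works here because the array already has an entry in /etc/fstab. The exact line depends on how the array was originally set up; assuming an ext4 filesystem on /dev/md0 mounted at /RAID, it would look roughly like this:

/dev/md0    /RAID    ext4    defaults    0 0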

 
