Configuring LVM Logical Volumes on Linux

Demo environment:
An Alibaba Cloud ECS instance (or a physical server) with a newly attached disk, to be set up as an LVM logical volume and mounted.
OS: CentOS Linux release 7.6.1810 (Core), x86_64, minimal install.
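The whole workflow covered below can be sketched as one script. This is a hedged sketch using this article's example names (/dev/vdc, vg_data, lv_data, /data2); by default it only prints the commands, set RUN=1 and run as root on a system with a spare disk to execute them for real.

```shell
#!/bin/sh
# Dry-run sketch of the LVM workflow in this article.
# With RUN unset it echoes each command instead of executing it.
DEV=/dev/vdc; VG=vg_data; LV=lv_data; MNT=/data2

run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi; }

run pvcreate "$DEV"                         # 1. initialize the disk as a physical volume
run vgcreate "$VG" "$DEV"                   # 2. create the volume group on it
run lvcreate -n "$LV" -l 100%FREE "$VG"     # 3. logical volume using all free extents
run mkfs.xfs "/dev/$VG/$LV"                 # 4. make an XFS filesystem
run mkdir -p "$MNT"
run mount "/dev/mapper/${VG}-${LV}" "$MNT"  # 5. mount it
```

Each step is walked through individually below, with real output.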

1. Install the LVM tools:

yum -y install lvm2

2. Initialize the newly attached disk /dev/vdc as a physical volume so LVM can use it:

[root@testdb ~]# pvcreate /dev/vdc 
  Physical volume "/dev/vdc" successfully created.

3. Create the volume group vg_data and add /dev/vdc to it:

[root@testdb ~]# vgcreate vg_data /dev/vdc
  Volume group "vg_data" successfully created
[root@testdb ~]# 

4. Display the volume group with vgdisplay, then create the logical volume lv_data:

[root@testdb ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg_data
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       0 / 0   
  Free  PE / Size       25599 / <100.00 GiB
  VG UUID               Fi0gw1-3Dx4-e2ad-bC4f-ps2H-yrsS-SXeSpB
[root@testdb ~]# lvcreate -n /dev/vg_data/lv_data -l 25599 
  Logical volume "lv_data" created.
[root@testdb ~]# 

Note:
A logical volume is created on top of a volume group, and its device file lives under that volume group's directory. For example, creating a logical volume "lv_data" in the volume group "vg_data" produces the device file "/dev/vg_data/lv_data".
The "-l" option sets the logical volume's size as a number of logical extents; "-n" sets the logical volume's name.
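Since "-l" counts extents rather than bytes, the size follows from the PE size: with the default 4 MiB extents, the 25599 extents shown by vgdisplay come out just under 100 GiB (which is why vgdisplay prints "<100.00 GiB"). A quick check of that arithmetic:

```shell
# Values taken from the vgdisplay output above.
extents=25599   # Total PE / Free PE in the VG
pe_mib=4        # PE Size: 4.00 MiB
size_mib=$((extents * pe_mib))
echo "$size_mib MiB"   # 102396 MiB, just under 100 GiB (102400 MiB)
```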

5. Create a filesystem:

mkfs.xfs /dev/mapper/vg_data-lv_data  or  mkfs.ext4 /dev/mapper/vg_data-lv_data
[root@testdb ~]# mkfs.xfs /dev/mapper/vg_data-lv_data 
meta-data=/dev/mapper/vg_data-lv_data isize=512    agcount=4, agsize=6553344 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26213376, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@testdb ~]# 

6. Mount the filesystem:

[root@testdb ~]# mount /dev/mapper/vg_data-lv_data /data2 
[root@testdb ~]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/vda1                     40G   28G   11G  73% /
devtmpfs                     3.9G     0  3.9G   0% /dev
tmpfs                        3.9G     0  3.9G   0% /dev/shm
tmpfs                        3.9G  616K  3.9G   1% /run
tmpfs                        3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vdb1                    500G  131G  370G  27% /data1
tmpfs                        783M     0  783M   0% /run/user/0
ossfs                        256T     0  256T   0% /aliyun_oss
/dev/mapper/vg_data-lv_data  100G   33M  100G   1% /data2
[root@testdb ~]# 

7. Add an entry to /etc/fstab so the device is mounted automatically at boot:

[root@testdb vg_data]# cat /etc/fstab |grep data2
/dev/mapper/vg_data-lv_data   /data2 xfs defaults 0 0 

Mounting manually:

Run the following to mount /dev/mapper/vg_data-lv_data at /data2:
mount /dev/mapper/vg_data-lv_data /data2

Warning:
The /etc/fstab entry must be correct. A bad entry will make the server fail to boot, leaving you unable to log in. On a physical server you would boot into single-user mode and remove the faulty line; on an Alibaba Cloud ECS instance, your only recourse is to open a support ticket.
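Before rebooting, it is safer to sanity-check the new entry. A minimal sketch: the field check below runs anywhere, while the `mount -a` step requires root on the real machine.

```shell
# The fstab line from this article; a valid entry has exactly 6 fields:
# device, mount point, fstype, options, dump, fsck pass.
ENTRY='/dev/mapper/vg_data-lv_data   /data2 xfs defaults 0 0'

fields=$(echo "$ENTRY" | awk '{print NF}')
[ "$fields" -eq 6 ] && echo "fstab entry has 6 fields: OK"

# After appending the entry to /etc/fstab, `mount -a` (as root) attempts to
# mount everything listed there; fix any error it reports BEFORE rebooting.
# mount -a
```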

8. Add a new disk /dev/vdd to grow the existing logical volume lv_data

a. View the existing physical volumes:

[root@testdb ~]#  pvdisplay
  --- Physical volume ---
  PV Name               /dev/vdc
  VG Name               vg_data
  PV Size               100.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              25599
  Free PE               0
  Allocated PE          25599
  PV UUID               n6HJ5u-E9Ld-4WGc-1RRq-uyL9-Vr1i-dIP6Ai

b. Create the new physical volume and list the physical volumes:

[root@testdb data2]# pvcreate /dev/vdd
[root@testdb data2]# pvdisplay

c. Extend the volume group

The current logical volume:

[root@testdb data2]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_data/lv_data
  LV Name                lv_data
  VG Name                vg_data
  LV UUID                3zjVzF-JvDH-FuAY-nABr-LWW3-n4eu-k6nWr8
  LV Write Access        read/write
  LV Creation host, time testdb, 2022-03-23 12:05:49 +0800
  LV Status              available
  # open                 1
  LV Size                <100.00 GiB
  Current LE             25599
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0

Extend the volume group by adding the new device /dev/vdd to vg_data:

[root@testdb data2]# vgextend  vg_data /dev/vdd 
  Volume group "vg_data" successfully extended
[root@testdb data2]# 

Check the physical volumes again:

[root@testdb ~]#  pvdisplay
  --- Physical volume ---
  PV Name               /dev/vdc
  VG Name               vg_data
  PV Size               100.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              25599
  Free PE               0
  Allocated PE          25599
  PV UUID               n6HJ5u-E9Ld-4WGc-1RRq-uyL9-Vr1i-dIP6Ai
   
  --- Physical volume ---
  PV Name               /dev/vdd
  VG Name               vg_data
  PV Size               50.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              12799
  Free PE               0
  Allocated PE          12799
  PV UUID               Fu3oyF-lGDw-Mo4b-OJFH-YZle-0H84-kZ3UEa

Extend the logical volume /dev/vg_data/lv_data:

[root@testdb data2]# lvextend -l +100%FREE  /dev/vg_data/lv_data 
  Size of logical volume vg_data/lv_data changed from <100.00 GiB (25599 extents) to 149.99 GiB (38398 extents).
  Logical volume vg_data/lv_data successfully resized.
[root@testdb data2]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_data/lv_data
  LV Name                lv_data
  VG Name                vg_data
  LV UUID                3zjVzF-JvDH-FuAY-nABr-LWW3-n4eu-k6nWr8
  LV Write Access        read/write
  LV Creation host, time testdb, 2022-03-23 12:05:49 +0800
  LV Status              available
  # open                 1
  LV Size                149.99 GiB
  Current LE             38398
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:0

d. Grow the filesystem with xfs_growfs

[root@testdb data2]# xfs_growfs /dev/vg_data/lv_data 
meta-data=/dev/mapper/vg_data-lv_data isize=512    agcount=4, agsize=6553344 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=26213376, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=12799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 26213376 to 39319552
[root@testdb data2]# 
[root@testdb data2]# df -hT
Filesystem                  Type      Size  Used Avail Use% Mounted on
/dev/vda1                   ext4       40G   28G   11G  74% /
devtmpfs                    devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                       tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs                       tmpfs     3.9G  584K  3.9G   1% /run
tmpfs                       tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vdb1                   xfs       500G  131G  370G  27% /data1
tmpfs                       tmpfs     783M     0  783M   0% /run/user/996
tmpfs                       tmpfs     783M     0  783M   0% /run/user/0
/dev/mapper/vg_data-lv_data xfs       150G   33M  150G   1% /data2
[root@testdb data2]# cat test
2344

Note:
Use xfs_growfs for xfs filesystems and resize2fs for ext filesystems. An xfs filesystem can only be grown, never shrunk.
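That rule can be captured in a small helper. This is only a sketch that picks the right tool for a given filesystem type; actually growing the filesystem still requires root, and xfs must be mounted while ext filesystems take the device path.

```shell
# Pick the grow tool for a filesystem type:
# xfs_growfs for xfs (online), resize2fs for ext2/3/4.
grow_cmd() {
  case "$1" in
    xfs)            echo "xfs_growfs" ;;
    ext2|ext3|ext4) echo "resize2fs" ;;
    *)              echo "unsupported" ;;
  esac
}

grow_cmd xfs    # prints xfs_growfs
grow_cmd ext4   # prints resize2fs
```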

posted @ 2022-03-24 13:09  勤奋的蓝猫