
XFS configuration on CentOS 7

 

XFS is a highly scalable, high-performance filesystem and the default filesystem on RHEL 7 / CentOS 7.
XFS supports metadata journaling, which lets it recover more quickly after a crash.
It can also be defragmented and grown while mounted and active.
Through delayed allocation, XFS gains many opportunities to optimize write performance.
An XFS filesystem can be backed up and restored with the xfsdump and xfsrestore utilities;
xfsdump supports incremental backups via dump levels and can exclude files by size, subtree, or inode flags (see the sketch below).
User, group, and project quotas are also supported.
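For reference, a minimal backup/restore round trip with xfsdump and xfsrestore might look like the following sketch (the dump-file paths under /backup are illustrative, and /xfsdata is the mount point created later in this post):

xfsdump -l 0 -L full00 -M media00 -f /backup/xfsdata.dump0 /xfsdata    # level-0 (full) dump
xfsdump -l 1 -L incr01 -M media00 -f /backup/xfsdata.dump1 /xfsdata    # level-1 incremental: only changes since the level-0 dump
xfsrestore -f /backup/xfsdata.dump0 /xfsdata                           # restore the level-0 dump back into /xfsdata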

The sections below walk through creating an XFS filesystem, assigning quotas, and growing it:
###############################################################################
Partition /dev/sdb (2 GB) and enable the LVM flag on the new partition

[root@localhost zhongq]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart primary 4 2048
(parted) set 1 lvm on                                                   
(parted) p                                                            
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
 
Number  Start   End     Size    File system  Name     Flags
 1      4194kB  2048MB  2044MB               primary  lvm
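The same layout can also be created non-interactively with parted's script mode; a minimal sketch, assuming /dev/sdb is blank and should receive a GPT label:

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 4MiB 2048MiB
parted -s /dev/sdb set 1 lvm on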

 

###############################################################################
Create the PV (physical volume)

[root@localhost zhongq]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
 
[root@localhost zhongq]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               centos
  PV Size               24.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              6274
  Free PE               0
  Allocated PE          6274
  PV UUID               9hp8U7-IJM6-bwbP-G9Vn-IVuJ-yvE8-AkFjcB
    
  "/dev/sdb1" is a new physical volume of "1.90 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name              
  PV Size               1.90 GiB
  Allocatable           NO
  PE Size               0  
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               bu7yIH-1440-BPy1-APG2-FpvX-ejLS-2MIlA8

###############################################################################
Add /dev/sdb1 to a volume group (VG) named xfsgroup00

[root@localhost zhongq]# vgcreate  xfsgroup00 /dev/sdb1
 Volume group "xfsgroup00" successfully created
[root@localhost zhongq]# vgdisplay
 --- Volume group ---
  VG Name               centos
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               24.51 GiB
  PE Size               4.00 MiB
  Total PE              6274
  Alloc PE / Size       6274 / 24.51 GiB
  Free  PE / Size       0 / 0  
  VG UUID               T3Ryyg-R0rn-2i5r-7L5o-AZKG-yFkh-CDzhKm
    
  --- Volume group ---
  VG Name               xfsgroup00
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.90 GiB
  PE Size               4.00 MiB
  Total PE              487
  Alloc PE / Size       0 / 0  
  Free  PE / Size       487 / 1.90 GiB
  VG UUID               ejuwcc-sVES-MWWB-3Mup-n1wB-Kd0g-u7jm0H

###############################################################################
Use lvcreate to create a 1 GB LV named xfsdata in the xfsgroup00 volume group

[root@localhost zhongq]# lvcreate -L 1024M -n xfsdata xfsgroup00
WARNING: xfs signature detected on /dev/xfsgroup00/xfsdata at offset 0. Wipe it? [y/n] y
  Wiping xfs signature on /dev/xfsgroup00/xfsdata.
  Logical volume "xfsdata" created
[root@localhost zhongq]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/centos/swap
  LV Name                swap
  VG Name                centos
  LV UUID                EnW3at-KlFG-XGaQ-DOoH-cGPP-8pSf-teSVbh
  LV Write Access        read/write
  LV Creation host, time localhost, 2014-08-18 20:15:25 +0800
  LV Status              available
  # open                 2
  LV Size                2.03 GiB
  Current LE             520
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
    
  --- Logical volume ---
  LV Path                /dev/centos/root
  LV Name                root
  VG Name                centos
  LV UUID                zmZGkv-Ln4W-B8AY-oDnD-BEk2-6VWL-L0cZOv
  LV Write Access        read/write
  LV Creation host, time localhost, 2014-08-18 20:15:26 +0800
  LV Status              available
  # open                 1
  LV Size                22.48 GiB
  Current LE             5754
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
    
  --- Logical volume ---
  LV Path                /dev/xfsgroup00/xfsdata
  LV Name                xfsdata
  VG Name                xfsgroup00
  LV UUID                O4yvoY-XGcD-0zPm-eilR-3JJP-updU-rRCSlJ
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2014-09-23 15:50:19 +0800
  LV Status              available
  # open                 0
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

###############################################################################
Format the new LV with an XFS filesystem.
Note: once created, an XFS filesystem cannot be shrunk, but it can be grown with xfs_growfs.

[root@localhost zhongq]# mkfs.xfs /dev/xfsgroup00/xfsdata
meta-data=/dev/xfsgroup00/xfsdata isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

###############################################################################
Mount the XFS filesystem on a directory and enable filesystem quotas via the uquota and gquota mount options.

[root@localhost zhongq]# mkdir /xfsdata
[root@localhost zhongq]# mount -o uquota,gquota /dev/xfsgroup00/xfsdata /xfsdata
[root@localhost zhongq]# chmod 777 /xfsdata
[root@localhost zhongq]# mount|grep xfsdata
/dev/mapper/xfsgroup00-xfsdata on /xfsdata type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
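Options passed to mount on the command line do not survive a reboot; to keep the quota-enabled mount persistent, an /etc/fstab entry along these lines is the usual approach (a sketch, adjust device and mount point to your own setup):

/dev/xfsgroup00/xfsdata  /xfsdata  xfs  defaults,uquota,gquota  0 0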

###############################################################################
Use the xfs_quota command to view quota information, assign a quota to a user, and verify that the limit is enforced.

[root@localhost zhongq]# xfs_quota -x -c 'report' /xfsdata
User quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks                    
User ID          Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
 
Group quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks                    
Group ID         Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
 
[root@localhost zhongq]# xfs_quota -x -c 'limit bsoft=100M bhard=120M zhongq' /xfsdata
[root@localhost zhongq]# xfs_quota -x -c 'report' /xfsdata
User quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks                    
User ID          Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
zhongq              0     102400     122880     00 [--------]
 
Group quota on /xfsdata (/dev/mapper/xfsgroup00-xfsdata)
                               Blocks                    
Group ID         Used       Soft       Hard    Warn/Grace    
---------- --------------------------------------------------
root                0          0          0     00 [--------]
 
[root@localhost zhongq]# su zhongq
[zhongq@localhost ~]$ dd if=/dev/zero of=/xfsdata/zq00 bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 28.9833 s, 3.6 MB/s
[zhongq@localhost ~]$ dd if=/dev/zero of=/xfsdata/zq01 bs=1M count=100
dd: error writing ‘/xfsdata/zq01’: Disk quota exceeded
21+0 records in
20+0 records out
20971520 bytes (21 MB) copied, 4.18921 s, 5.0 MB/s
 
[zhongq@localhost ~]$ exit
 
[root@localhost zhongq]# xfs_quota
xfs_quota> help
df [-bir] [-hn] [-f file] -- show free and used counts for blocks and inodes
help [command] -- help for one or all commands
print -- list known mount points and projects
quit -- exit the program
quota [-bir] [-gpu] [-hnNv] [-f file] [id|name]... -- show usage and limits
 
Use 'help commandname' for extended help.
xfs_quota> print
Filesystem          Pathname
/                   /dev/mapper/centos-root
/boot               /dev/sda1
/var/lib/docker     /dev/mapper/centos-root
/xfsdata            /dev/mapper/xfsgroup00-xfsdata (uquota, gquota)
xfs_quota> quota -u zhongq
Disk quotas for User zhongq (1000)
Filesystem                        Blocks      Quota      Limit  Warn/Time      Mounted on
/dev/mapper/xfsgroup00-xfsdata    122880     102400     122880   00  [6 days]   /xfsdata
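The limit subcommand is not restricted to user block quotas; it also accepts group (-g) and inode-count limits. A minimal sketch (the group name users is only an example):

xfs_quota -x -c 'limit -g bsoft=200M bhard=240M users' /xfsdata     # block limits for the group named users
xfs_quota -x -c 'limit -u isoft=1000 ihard=1200 zhongq' /xfsdata    # inode-count limits for user zhongq
xfs_quota -x -c 'report -bi' /xfsdata                               # report both block and inode usage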

###############################################################################
First use lvextend to grow the LV to 1.5 GB (it started at 1 GB), then use xfs_growfs to grow the XFS filesystem (xfs_growfs works in filesystem blocks).
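The -D argument below is the target size in filesystem blocks (bsize=4096 in the mkfs output above), so 1.5 GiB works out to 393216 blocks; a quick shell check:

echo $(( 3 * 1024 * 1024 * 1024 / 2 / 4096 ))    # 1.5 GiB expressed as 4096-byte blocks -> 393216

If -D is omitted, xfs_growfs simply grows the filesystem to fill all space available on the underlying device.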

[root@localhost zhongq]# lvextend -L 1.5G /dev/xfsgroup00/xfsdata
  Extending logical volume xfsdata to 1.50 GiB
  Logical volume xfsdata successfully resized
   
[root@localhost zhongq]# xfs_growfs /dev/xfsgroup00/xfsdata -D 393216
meta-data=/dev/mapper/xfsgroup00-xfsdata isize=256    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 393216
   
[root@localhost zhongq]# df -h|grep xfsdata
/dev/mapper/xfsgroup00-xfsdata  1.5G  153M  1.4G  10% /xfsdata
posted on 2017-08-29 09:30 by 秦瑞It行程实录