Linux系统管理上机作业6

chapter06

1.为主机增加80G SCSI 接口硬盘

 

2.划分三个各20G的主分区

[root@localhost ~]# fdisk -l /dev/sde

磁盘 /dev/sde:85.9 GB, 85899345920 字节,167772160 个扇区
Units = 扇区 of 1 * 512 = 512 bytes
扇区大小(逻辑/物理):512 字节 / 512 字节
I/O 大小(最小/最佳):512 字节 / 512 字节
磁盘标签类型:dos
磁盘标识符:0xeb49d65d

设备 Boot Start End Blocks Id System
/dev/sde1 2048 41945087 20971520 83 Linux
/dev/sde2 41945088 83888127 20971520 83 Linux
/dev/sde3 83888128 125831167 20971520 83 Linux

3.将三个主分区转换为物理卷(pvcreate),扫描系统中的物理卷

[root@localhost ~]# pvcreate /dev/sdb1
Physical volume "/dev/sde1" successfully created
[root@localhost ~]# pvcreate /dev/sdb2
Physical volume "/dev/sde2" successfully created
[root@localhost ~]# pvcreate /dev/sdb3
Physical volume "/dev/sde3" successfully created

[root@localhost ~]# pvscan
PV /dev/sda2 VG centos lvm2 [39.51 GiB / 44.00 MiB free]
PV /dev/sdb1 lvm2 [20.00 GiB]
PV /dev/sdb2 lvm2 [20.00 GiB]
PV /dev/sdb3 lvm2 [20.00 GiB]
Total: 4 [99.51 GiB] / in use: 1 [39.51 GiB] / in no VG: 3 [60.00 GiB]

4.使用两个物理卷创建卷组,名字为myvg,查看卷组大小

[root@localhost ~]# vgcreate myvg /dev/sdb[12]
Volume group "myvg" successfully created
[root@localhost ~]# vgdisplay myvg
--- Volume group ---
VG Name myvg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 39.99 GiB
PE Size 4.00 MiB
Total PE 10238
Alloc PE / Size 0 / 0
Free PE / Size 10238 / 39.99 GiB
VG UUID o0hTSa-LaKS-w0r7-H2I7-p14g-JO3D-Sh7j0H

5.创建逻辑卷mylv,大小为30G

[root@localhost ~]# lvcreate -L 30G -n mylv myvg
Logical volume "mylv" created.

6.将逻辑卷格式化成xfs文件系统,并挂载到/data目录上,创建文件测试

[root@localhost ~]# mkdir /data
[root@localhost ~]# mkfs -t xfs /dev/myvg/mylv
meta-data=/dev/myvg/mylv isize=256 agcount=4, agsize=1966080 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=7864320, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=3840, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@localhost ~]# echo "1234567" >/data/text.txt
[root@localhost ~]# cat /data/text.txt
1234567
[root@localhost ~]# mount /dev/myvg/mylv /data
[root@localhost ~]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
/dev/mapper/centos-root xfs 38G 4.0G 34G 11% /
devtmpfs devtmpfs 985M 0 985M 0% /dev
tmpfs tmpfs 994M 84K 994M 1% /dev/shm
tmpfs tmpfs 994M 8.9M 985M 1% /run
tmpfs tmpfs 994M 0 994M 0% /sys/fs/cgroup
/dev/sda1 xfs 497M 107M 391M 22% /boot
/dev/sr0 iso9660 4.1G 4.1G 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg-mylv xfs 30G 33M 30G 1% /data

7.增大逻辑卷到35G

[root@localhost ~]# lvextend -L +5G /dev/myvg/mylv
Size of logical volume myvg/mylv changed from 30.00 GiB (7680 extents) to 35.00 GiB (8960 extents).
Logical volume mylv successfully resized
[root@localhost ~]# xfs_growfs /dev/myvg/mylv
meta-data=/dev/mapper/myvg-mylv isize=256 agcount=4, agsize=1966080 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=7864320, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=3840, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 7864320 to 9175040
[root@localhost ~]# lvdisplay /dev/myvg/mylv
--- Logical volume ---
LV Path /dev/myvg/mylv
LV Name mylv
VG Name myvg
LV UUID W3cZJj-qEjB-F3Fa-depC-sMSQ-tVhk-1dP1u1
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2019-08-02 12:24:59 +0800
LV Status available
# open 1
LV Size 35.00 GiB
Current LE 8960
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2

[root@localhost ~]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
/dev/mapper/centos-root xfs 38G 4.7G 33G 13% /
devtmpfs devtmpfs 985M 0 985M 0% /dev
tmpfs tmpfs 994M 84K 994M 1% /dev/shm
tmpfs tmpfs 994M 9.0M 985M 1% /run
tmpfs tmpfs 994M 0 994M 0% /sys/fs/cgroup
/dev/sda1 xfs 497M 107M 391M 22% /boot
/dev/sr0 iso9660 4.1G 4.1G 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg-mylv xfs 35G 33M 35G 1% /data

8.编辑/etc/fstab文件挂载逻辑卷,并支持磁盘配额选项

[root@localhost ~]#

/dev/myvg1/mylv             /data            xfs          defaults,usrquota,grpquota     00 

[root@localhost ~]# mount /data
mount: /dev/mapper/myvg1-mylv 已经挂载或 /data 忙
/dev/mapper/myvg1-mylv 已经挂载到 /data 上

[root@localhost ~]# df -h
文件系统 容量 已用 可用 已用% 挂载点
/dev/mapper/centos-root 38G 5.4G 33G 15% /
devtmpfs 985M 0 985M 0% /dev
tmpfs 994M 84K 994M 1% /dev/shm
tmpfs 994M 9.0M 985M 1% /run
tmpfs 994M 0 994M 0% /sys/fs/cgroup
/dev/sdb2 5.0G 33M 5.0G 1% /data2
/dev/mapper/myvg-mylv 34G 33M 34G 1% /lvm
/dev/sdb1 4.8G 20M 4.6G 1% /data1
/dev/sda1 497M 107M 391M 22% /boot
/dev/sr0 4.1G 4.1G 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg1-mylv 35G 33M 35G 1% /data

9.创建磁盘配额,crushlinux用户在/data目录下文件大小软限制为80M,硬限制为100M,
crushlinux用户在/data目录下文件数量软限制为80个,硬限制为100个。

 [root@localhost ~]# useradd crushlinux

▽oot@localhost ~]# mkdir /data1
[root@localhost ~]# mkfs.ext4 /dev/sdb3

mke2fs 1.42.9 (28-Dec-2013)
文件系统标签=
OS type: Linux
块大小=4096 (log=2)
分块大小=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242880 blocks
262144 blocks (5.00%) reserved for the super user
第一个数据块=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Allocating group tables: 完成
正在写入inode表: 完成
Creating journal (32768 blocks): 完成
Writing superblocks and filesystem accounting information: 完成

[root@localhost ~]# mount /dev/sdb3 /data1

[root@localhost ~]# mount -o remount,usrquota,grpquota /data1
[root@localhost ~]# vim /etc/fstab

/dev/sdb3                /data1          ext4         defaults,usrquota,grpquota          0 0

[root@localhost ~]# quotacheck -avug

quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb3 [/data1] done
quotacheck: Cannot stat old user quota file /data1/aquota.user: 没有那个文件或目录. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /data1/aquota.group: 没有那个文件或目录. Usage will not be subtracted.
quotacheck: Cannot stat old user quota file /data1/aquota.user: 没有那个文件或目录. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /data1/aquota.group: 没有那个文件或目录. Usage will not be subtracted.
quotacheck: Checked 2 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.

[root@localhost ~]# quotaon -auvg
/dev/sdb3 [/data1]: group quotas turned on
/dev/sdb3 [/data1]: user quotas turned on
[root@localhost ~]# edquota -u crushlinux
Disk quotas for user crushlinux (uid 1001):
Filesystem blocks soft hard inodes soft hard
/dev/sdb3 0 8000 10000 0 80 100

10.使用touch dd 命令在/data目录下测试

 [root@localhost ~]# chmod -R 777 /data1

chmod: 更改"/data1/aquota.user" 的权限: 不允许的操作
chmod: 更改"/data1/aquota.group" 的权限: 不允许的操作

[root@localhost ~]# ll /data1
总用量 32
-rw------- 1 root root 6144 8月 3 15:12 aquota.group
-rw------- 1 root root 7168 8月 3 15:12 aquota.user
drwxrwxrwx 2 root root 16384 8月 3 15:10 lost+found

[root@localhost ~]# su crushlinux
[crushlinux@localhost root]$ dd if=/dev/zero of=/data1/ceshi bs=1M count=90
sdb3: warning, user block quota exceeded.
sdb3: write failed, user block limit reached.
dd: 写入"/data1/ceshi" 出错: 超出磁盘限额
记录了10+0 的读入
记录了9+0 的写出
10240000字节(10 MB)已复制,0.00650797 秒,1.6 GB/秒

[crushlinux@localhost root]$ touch /data1/{1..90}
sdb3: warning, user file quota exceeded.

11.查看配额的使用情况:用户角度

[root@localhost ~]# quota -uvs crushlinux
Disk quotas for user crushlinux (uid 1001):
Filesystem space quota limit grace files quota limit grace
/dev/sdb3 10000K* 8000K 10000K 6days 91* 80 100 6days

12.查看配额的使用情况:文件系统角度
[root@localhost ~]# repquota -auvs
*** Report for user quotas on device /dev/sdb3
Block grace time: 7days; Inode grace time: 7days
Space limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 20K 0K 0K 2 0 0
crushlinux ++ 10000K 8000K 10000K 6days 91 80 100 6days

Statistics:
Total blocks: 7
Data blocks: 1
Entries: 2
Used average: 2.000000

posted @ 2019-08-01 19:40  bbh&ymy  阅读(270)  评论(0编辑  收藏  举报