Week 5 Homework: LVM and TCP

1. Disk LVM management. Complete the following requirements and document the process in detail:

  1) Create a 20G VG named testvg composed of at least two PVs, with a PE size of 16MB; then create a 5G logical volume named testlv in the volume group and mount it at /users.

Prepare two 10G disks (sdb and sdc) or partitions
[root@linux-node2-202 ~]# fdisk -l|grep sd
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
/dev/sda1   *        2048      411647      204800   83  Linux
/dev/sda2          411648   104857599    52222976   8e  Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors

Create PVs on the sdb and sdc disks
[root@linux-node2-202 ~]# pvcreate /dev/sd{b,c}
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.

[root@linux-node2-202 ~]# pvs
  PV         VG     Fmt  Attr PSize  PFree 
  /dev/sda2  centos lvm2 a--  49.80g     0 
  /dev/sdb          lvm2 ---  10.00g 10.00g
  /dev/sdc          lvm2 ---  10.00g 10.00g

Create the VG named testvg with a 16MB PE size
[root@linux-node2-202 ~]# vgcreate -s 16M testvg /dev/sd{b,c}
  Volume group "testvg" successfully created
[root@linux-node2-202 ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree  
  centos   1   2   0 wz--n-  49.80g      0 
  testvg   2   0   0 wz--n- <19.97g <19.97g
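The VG reports <19.97g rather than a full 20g because each PV gives up some space to the LVM metadata area at its start, and the remainder rounds down to whole 16 MiB extents. A rough sketch of the arithmetic, assuming the default ~1 MiB metadata area per PV:

```shell
# Each 10 GiB (10240 MiB) PV loses its metadata area, then rounds down to
# whole 16 MiB extents: (10240-1)/16 = 639 extents = 10224 MiB usable.
pe_mib=16
pv_mib=10240
usable_per_pv=$(( (pv_mib - 1) / pe_mib * pe_mib ))
echo "$(( 2 * usable_per_pv )) MiB usable"   # 20448 MiB = 19.97 GiB, 1278 extents
```

This matches vgdisplay below: 320 allocated + 958 free = 1278 extents of 16 MiB.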

Create the 5G logical volume testlv in the volume group
[root@linux-node2-202 ~]# lvcreate -n testlv -L 5G testvg
  Logical volume "testlv" created.
[root@linux-node2-202 ~]# lvs
  LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   centos -wi-ao---- 47.80g                                                    
  swap   centos -wi-ao----  2.00g                                                    
  testlv testvg -wi-a-----  5.00g     

Create a filesystem and mount it at /users
[root@linux-node2-202 ~]# mkdir /users
[root@linux-node2-202 ~]# mkfs.xfs /dev/mapper/testvg-testlv
[root@linux-node2-202 ~]# mount /dev/mapper/testvg-testlv /users

Make the mount permanent
[root@linux-node2-202 ~]# blkid
/dev/mapper/testvg-testlv: UUID="c20abbbc-6714-4bb3-b6b3-039e49ebb1ad" TYPE="xfs"

[root@linux-node2-202 ~]# vim /etc/fstab
Add the following line:
UUID=c20abbbc-6714-4bb3-b6b3-039e49ebb1ad /users xfs    defaults        0 0

[root@linux-node2-202 ~]# mount -a
[root@linux-node2-202 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/centos-root     48G  9.4G   39G  20% /
devtmpfs                   899M     0  899M   0% /dev
tmpfs                      911M     0  911M   0% /dev/shm
tmpfs                      911M  9.6M  902M   2% /run
tmpfs                      911M     0  911M   0% /sys/fs/cgroup
/dev/sda1                  197M  120M   77M  61% /boot
tmpfs                      183M     0  183M   0% /run/user/0
/dev/mapper/testvg-testlv  5.0G   33M  5.0G   1% /users

[root@linux-node2-202 ~]# vgdisplay testvg|egrep -io "(pe|vg).*size.*"
  VG Size <19.97 GiB
  PE Size 16.00 MiB
  PE / Size 320 / 5.00 GiB
  PE / Size 958 / <14.97 GiB

  2) Extend testlv to 7G; the archlinux user's files must not be lost

Create the archlinux user and give it some data
[root@linux-node2-202 ~]# useradd archlinux -d /users/archlinux
[root@linux-node2-202 ~]# su - archlinux
[archlinux@linux-node2-202 ~]$ pwd
/users/archlinux
[archlinux@linux-node2-202 ~]$ cp -R /var/log .
[archlinux@linux-node2-202 ~]$ tree
.
└── log
    ├── anaconda
    ├── audit
    ├── cobbler
    │   ├── anamon
    │   ├── cobbler.log
    │   ├── install.log
    │   ├── kicklog
    │   ├── syslog
    │   └── tasks
    │       ├── 2019-07-22_195442_get_loaders.log
    │       ├── 2019-07-22_195512_sync.log
    │       ├── 2019-07-22_195653_sync.log
    │       ├── 2019-07-22_195950_import.log
    │       ├── 2019-07-22_200256_import.log
    │       ├── 2019-07-22_200523_import.log
    │       └── 2019-07-22_200652_sync.log
    ├── dmesg
    ├── dmesg.old
    ├── firewalld
    ├── grubby_prune_debug
    ├── httpd
    ├── lastlog
    ├── rhsm
    ├── tuned
    │   └── tuned.log
    ├── vmware-vgauthsvc.log.0
    ├── vmware-vmsvc.log
    └── wtmp

11 directories, 18 files
[archlinux@linux-node2-202 ~]$ du -sh log
2.2M    log

Extend testlv to 7G without losing the archlinux user's files

[root@linux-node2-202 ~]# vgdisplay testvg|grep Size
  VG Size               <19.97 GiB
  PE Size               16.00 MiB
  Alloc PE / Size       320 / 5.00 GiB
  Free  PE / Size       958 / <14.97 GiB
[root@linux-node2-202 ~]# lvdisplay /dev/testvg/testlv|grep Size
  LV Size                5.00 GiB
The current LV size is 5G and the VG has roughly 15G free, so there is room to extend.

Extend by another 2G to reach 7G
[root@linux-node2-202 ~]# lvextend -L +2G /dev/mapper/testvg-testlv 
  Size of logical volume testvg/testlv changed from 5.00 GiB (320 extents) to 7.00 GiB (448 extents).
  Logical volume testvg/testlv successfully resized.
[root@linux-node2-202 ~]# lvs
  LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   centos -wi-ao---- 47.80g                                                    
  swap   centos -wi-ao----  2.00g                                                    
  testlv testvg -wi-ao----  7.00g

[root@linux-node2-202 ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree  
  centos   1   2   0 wz--n-  49.80g      0 
  testvg   2   1   0 wz--n- <19.97g <12.97g
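The 448 extents reported by lvextend are consistent with the 16 MiB PE size. Note too that lvextend grows only the LV; to make df show the new size, the filesystem must also be grown, a step this transcript omits:

```shell
# 7 GiB at a 16 MiB physical-extent size:
echo "$(( 7 * 1024 / 16 )) extents"   # matches lvextend's "448 extents"
# lvextend resizes only the LV. Grow the filesystem as well (commands shown
# for reference; which one applies depends on the filesystem on the volume):
#   resize2fs /dev/mapper/testvg-testlv   # ext4, works online
#   xfs_growfs /users                     # XFS, grown via the mount point
```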

[root@linux-node2-202 ~]# tree /users/archlinux/
/users/archlinux/
└── log
    ├── anaconda
    ├── audit
    ├── cobbler
    │   ├── anamon
    │   ├── cobbler.log
    │   ├── install.log
    │   ├── kicklog
    │   ├── syslog
    │   └── tasks
    │       ├── 2019-07-22_195442_get_loaders.log
    │       ├── 2019-07-22_195512_sync.log
    │       ├── 2019-07-22_195653_sync.log
    │       ├── 2019-07-22_195950_import.log
    │       ├── 2019-07-22_200256_import.log
    │       ├── 2019-07-22_200523_import.log
    │       └── 2019-07-22_200652_sync.log
    ├── dmesg
    ├── dmesg.old
    ├── firewalld
    ├── grubby_prune_debug
    ├── httpd
    ├── lastlog
    ├── rhsm
    ├── tuned
    │   └── tuned.log
    ├── vmware-vgauthsvc.log.0
    ├── vmware-vmsvc.log
    └── wtmp

11 directories, 18 files

[root@linux-node2-202 ~]# du -sh /users/archlinux/
2.3M    /users/archlinux/

  3) Shrink testlv to 3G; the archlinux user's files must not be lost

Shrinking a logical volume requires unmounting it first
[root@linux-node2-202 ~]# umount /users
[root@linux-node2-202 ~]# lvs
  LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   centos -wi-ao---- 47.80g                                                    
  swap   centos -wi-ao----  2.00g                                                    
  testlv testvg -wi-a-----  7.00g  

# XFS cannot be shrunk; for this step the volume carries an ext4 filesystem (the reformat to ext4 and data restore are not shown in the transcript; blkid later confirms ext4)

Check filesystem integrity
[root@linux-node2-202 ~]# e2fsck -f -y /dev/mapper/testvg-testlv 
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/testvg-testlv: 11/458752 files (0.0% non-contiguous), 69631/1835008 blocks

Shrink testlv to 3G without losing the archlinux user's files

Shrink the filesystem first
[root@linux-node2-202 ~]# resize2fs /dev/mapper/testvg-testlv 3G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/mapper/testvg-testlv to 786432 (4k) blocks.
The filesystem on /dev/mapper/testvg-testlv is now 786432 blocks long.

Then reduce the logical volume
[root@linux-node2-202 ~]# lvreduce -L 3G /dev/mapper/testvg-testlv
  WARNING: Reducing active logical volume to 3.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce testvg/testlv? [y/n]: y
  Size of logical volume testvg/testlv changed from 7.00 GiB (448 extents) to 3.00 GiB (192 extents).
  Logical volume testvg/testlv successfully resized.
[root@linux-node2-202 ~]# lvs
  LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   centos -wi-ao---- 47.80g                                                    
  swap   centos -wi-ao----  2.00g                                                    
  testlv testvg -wi-a-----  3.00g                                                    

[root@linux-node2-202 ~]# lvdisplay /dev/testvg/testlv|grep Size
  LV Size                3.00 GiB
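The shrink must happen in exactly this order; reducing the LV below the filesystem size destroys data. A recap of the sequence, including the remount that the transcript omits before the verification below:

```shell
# Safe ext4 shrink sequence (never run lvreduce first):
umount /users
e2fsck -f /dev/mapper/testvg-testlv        # check the filesystem before resizing
resize2fs /dev/mapper/testvg-testlv 3G     # shrink the filesystem first
lvreduce -L 3G /dev/mapper/testvg-testlv   # then shrink the LV to match
mount /dev/mapper/testvg-testlv /users     # remount before verifying the files
# (lvreduce -r -L 3G ... would resize the filesystem and LV in one step)
```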

[root@linux-node2-202 ~]# tree /users/archlinux/
/users/archlinux/
└── log
    ├── anaconda
    ├── audit
    ├── cobbler
    │   ├── anamon
    │   ├── cobbler.log
    │   ├── install.log
    │   ├── kicklog
    │   ├── syslog
    │   └── tasks
    │       ├── 2019-07-22_195442_get_loaders.log
    │       ├── 2019-07-22_195512_sync.log
    │       ├── 2019-07-22_195653_sync.log
    │       ├── 2019-07-22_195950_import.log
    │       ├── 2019-07-22_200256_import.log
    │       ├── 2019-07-22_200523_import.log
    │       └── 2019-07-22_200652_sync.log
    ├── dmesg
    ├── dmesg.old
    ├── firewalld
    ├── grubby_prune_debug
    ├── httpd
    ├── lastlog
    ├── rhsm
    ├── tuned
    │   └── tuned.log
    ├── vmware-vgauthsvc.log.0
    ├── vmware-vmsvc.log
    └── wtmp

11 directories, 18 files


[root@linux-node2-202 ~]# du -sh /users/archlinux/
2.3M    /users/archlinux/
 

  4) Create a snapshot of testlv, try backing up data from the snapshot, and verify how snapshots behave

[root@linux-node2-202 ~]# lvcreate -n testlv_snap -s -p r -L 5G  /dev/mapper/testvg-testlv
# lvcreate: create a read-only (-p r) snapshot (-s) named testlv_snap, 5G, of /dev/mapper/testvg-testlv
  Reducing COW size 5.00 GiB down to maximum usable size <3.02 GiB.
  Logical volume "testlv_snap" created.

[root@linux-node2-202 ~]# lvs
  LV          VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos -wi-ao---- 47.80g                                                    
  swap        centos -wi-ao----  2.00g                                                    
  testlv      testvg owi-aos---  3.00g                                                    
  testlv_snap testvg sri-a-s--- <3.02g      testlv 0.01  

[root@linux-node2-202 ~]# lvdisplay /dev/test*|grep snap
  LV snapshot status     source of
                         testlv_snap [active]
  LV Path                /dev/testvg/testlv_snap
  LV Name                testlv_snap
  LV snapshot status     active destination for testlv
  Allocated to snapshot  0.01%

[root@linux-node2-202 ~]# blkid |grep testvg
/dev/mapper/testvg-testlv: UUID="fe421f83-f353-4ad3-a571-47e808c5dd5d" TYPE="ext4" 
/dev/mapper/testvg-testlv_snap: UUID="fe421f83-f353-4ad3-a571-47e808c5dd5d" TYPE="ext4" 
Note that the two devices report the same filesystem UUID

Verify the snapshot
Mount the snapshot (create /mnt/snap first if it does not exist)
[root@linux-node2-202 ~]# mount /dev/mapper/testvg-testlv_snap /mnt/snap
mount: /dev/mapper/testvg-testlv_snap is write-protected, mounting read-only

[root@linux-node2-202 ~]# tree /mnt/snap
/mnt/snap
├── archlinux
│   └── log
│       ├── anaconda
│       ├── audit
│       ├── cobbler
│       │   ├── anamon
│       │   ├── cobbler.log
│       │   ├── install.log
│       │   ├── kicklog
│       │   ├── syslog
│       │   └── tasks
│       │       ├── 2019-07-22_195442_get_loaders.log
│       │       ├── 2019-07-22_195512_sync.log
│       │       ├── 2019-07-22_195653_sync.log
│       │       ├── 2019-07-22_195950_import.log
│       │       ├── 2019-07-22_200256_import.log
│       │       ├── 2019-07-22_200523_import.log
│       │       └── 2019-07-22_200652_sync.log
│       ├── dmesg
│       ├── dmesg.old
│       ├── firewalld
│       ├── grubby_prune_debug
│       ├── httpd
│       ├── lastlog
│       ├── rhsm
│       ├── tuned
│       │   └── tuned.log
│       ├── vmware-vgauthsvc.log.0
│       ├── vmware-vmsvc.log
│       └── wtmp
└── lost+found

13 directories, 18 files

[root@linux-node2-202 ~]# du -sh /mnt/snap
2.3M    /mnt/snap
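The task asks for a backup taken from the snapshot. With the read-only snapshot mounted, the frozen point-in-time view can be archived while the origin volume stays in use; a sketch (the destination path is an assumption):

```shell
snap_mnt=/mnt/snap                               # read-only snapshot mount from above
backup=/root/users-backup-$(date +%F).tar.gz     # hypothetical destination path
tar -czf "$backup" -C "$snap_mnt" .              # archive the frozen view
```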

Delete files from the source directory
[root@linux-node2-202 ~]# rm -rf /users/archlinux/log/httpd/
[root@linux-node2-202 ~]# rm -f /users/archlinux/log/*
[root@linux-node2-202 ~]# tree /users/archlinux/
/users/archlinux/
└── log
    ├── anaconda
    ├── audit
    ├── cobbler
    │   ├── anamon
    │   ├── cobbler.log
    │   ├── install.log
    │   ├── kicklog
    │   ├── syslog
    │   └── tasks
    │       ├── 2019-07-22_195442_get_loaders.log
    │       ├── 2019-07-22_195512_sync.log
    │       ├── 2019-07-22_195653_sync.log
    │       ├── 2019-07-22_195950_import.log
    │       ├── 2019-07-22_200256_import.log
    │       ├── 2019-07-22_200523_import.log
    │       └── 2019-07-22_200652_sync.log
    ├── rhsm
    └── tuned
        └── tuned.log

10 directories, 10 files

[root@linux-node2-202 ~]# du -sh /users/archlinux/
2.0M    /users/archlinux/

Restore the snapshot
Unmount both the source and the snapshot
[root@linux-node2-202 ~]# umount /users
[root@linux-node2-202 ~]# umount /mnt/snap
Merge the snapshot back into the origin
[root@linux-node2-202 ~]# lvconvert --merge /dev/mapper/testvg-testlv_snap
  Merging of volume testvg/testlv_snap started.
  testvg/testlv: Merged: 100.00%
Verify the data (after remounting /users)
[root@linux-node2-202 ~]# du -sh /users/archlinux/
2.3M    /users/archlinux/

[root@linux-node2-202 ~]# tree /users/archlinux/
/users/archlinux/
└── log
    ├── anaconda
    ├── audit
    ├── cobbler
    │   ├── anamon
    │   ├── cobbler.log
    │   ├── install.log
    │   ├── kicklog
    │   ├── syslog
    │   └── tasks
    │       ├── 2019-07-22_195442_get_loaders.log
    │       ├── 2019-07-22_195512_sync.log
    │       ├── 2019-07-22_195653_sync.log
    │       ├── 2019-07-22_195950_import.log
    │       ├── 2019-07-22_200256_import.log
    │       ├── 2019-07-22_200523_import.log
    │       └── 2019-07-22_200652_sync.log
    ├── dmesg
    ├── dmesg.old
    ├── firewalld
    ├── grubby_prune_debug
    ├── httpd
    ├── lastlog
    ├── rhsm
    ├── tuned
    │   └── tuned.log
    ├── vmware-vgauthsvc.log.0
    ├── vmware-vmsvc.log
    └── wtmp

11 directories, 18 files

Creating a snapshot on XFS:
    lvcreate -n lv_mysql_snap -s -L 1G -p r /dev/mapper/testvg-testlv
    mount /dev/mapper/testvg-testlv_snap /mnt/snap/               # fails: duplicate UUID
    mount -o nouuid /dev/mapper/testvg-testlv_snap /mnt/snap/     # works with the nouuid option
    
Restoring a snapshot on XFS:
    unmount everything involved, then:
    lvconvert --merge /dev/mapper/testvg-testlv_snap

2. Create a RAID1 device with 1G of usable space and an ext4 filesystem, with one spare disk, automatically mounted at /backup on boot

Create three 1G partitions
[root@linux-node2-202 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xdf25804a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2099199     1048576   83  Linux
/dev/sdb2         2099200     4196351     1048576   83  Linux
/dev/sdb3         4196352     6293503     1048576   83  Linux

Create the RAID1 array with sdb3 as the spare (-l 1: RAID1, -n 2: two active members, -x 1: one spare; -c sets the chunk size, which RAID1 ignores since it does no striping)
[root@linux-node2-202 ~]# mdadm -C -a yes /dev/md0 -l 1 -n 2 /dev/sdb{1,2} -x 1 -c 1024 /dev/sdb3

[root@linux-node2-202 ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul 23 19:25:54 2019
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Jul 23 19:25:59 2019
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : linux-node2-202:0  (local to host linux-node2-202)
              UUID : 15c85ba6:1df98782:c64f8908:3eee284d
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2

       2       8       19        -      spare   /dev/sdb3

Simulate a failure of sdb1; the spare sdb3 takes over automatically
[root@linux-node2-202 ~]# mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@linux-node2-202 ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul 23 19:25:54 2019
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Jul 23 19:27:52 2019
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : linux-node2-202:0  (local to host linux-node2-202)
              UUID : 15c85ba6:1df98782:c64f8908:3eee284d
            Events : 36

    Number   Major   Minor   RaidDevice State
       2       8       19        0      active sync   /dev/sdb3
       1       8       18        1      active sync   /dev/sdb2

       0       8       17        -      faulty   /dev/sdb1

Remove sdb1 from the array
[root@linux-node2-202 ~]# mdadm /dev/md0 -r /dev/sdb1 
mdadm: hot removed /dev/sdb1 from /dev/md0
[root@linux-node2-202 ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul 23 19:25:54 2019
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Jul 23 19:28:30 2019
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : linux-node2-202:0  (local to host linux-node2-202)
              UUID : 15c85ba6:1df98782:c64f8908:3eee284d
            Events : 37

    Number   Major   Minor   RaidDevice State
       2       8       19        0      active sync   /dev/sdb3
       1       8       18        1      active sync   /dev/sdb2

Add sdb1 back to the array; it becomes the new spare
[root@linux-node2-202 ~]# mdadm /dev/md0 -a /dev/sdb1
mdadm: added /dev/sdb1
[root@linux-node2-202 ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jul 23 19:25:54 2019
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Tue Jul 23 19:28:41 2019
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

              Name : linux-node2-202:0  (local to host linux-node2-202)
              UUID : 15c85ba6:1df98782:c64f8908:3eee284d
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       19        0      active sync   /dev/sdb3
       1       8       18        1      active sync   /dev/sdb2

       3       8       17        -      spare   /dev/sdb1
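The failover test covers the spare disk, but the task also requires an ext4 filesystem auto-mounted at /backup, which the transcript does not show. A sketch of the remaining steps (device names as above; paths are assumptions):

```shell
mkfs.ext4 /dev/md0                         # format the array as ext4, as required
mkdir -p /backup
mdadm --detail --scan >> /etc/mdadm.conf   # keep the md0 name stable across reboots
# Mount by UUID so the fstab entry survives md device renumbering:
echo "UUID=$(blkid -s UUID -o value /dev/md0) /backup ext4 defaults 0 0" >> /etc/fstab
mount -a                                   # verify the fstab entry works now
```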

3. Briefly describe how a TCP connection is established and torn down

References:
https://www.cnblogs.com/bj-mr-li/p/11106397.html
https://www.cnblogs.com/bj-mr-li/p/11106390.html

TCP establishes a connection with a three-way handshake:
1st: the client sends SYN=1, seq=x to the server and enters SYN-SENT.
2nd: the server receives it and replies SYN=1, ACK=1, seq=y, ack=x+1, entering SYN-RECEIVED.
3rd: the client receives the reply and responds ACK=1, seq=x+1, ack=y+1, entering ESTABLISHED.
Once the server receives this ACK, the connection is established and data can flow.

TCP tears down a connection with a four-way handshake, one exchange more than setup because each direction closes independently:
1st: the client sends FIN=1, seq=x to the server, moving from ESTABLISHED to FIN-WAIT-1.
2nd: the server replies ACK=1, seq=y, ack=x+1 and enters CLOSE-WAIT, notifying its process to close; on receiving this, the client enters FIN-WAIT-2.
3rd: after sending any remaining data, the server sends FIN=1, ACK=1, seq=z, ack=x+1 and enters LAST-ACK.
4th: the client replies ACK=1, seq=x+1, ack=z+1 and enters TIME-WAIT, waiting 2MSL (twice the Maximum Segment Lifetime) before closing; the server closes as soon as it receives the ACK.
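These states can be watched on a live system; a sketch, assuming iproute2's ss is available:

```shell
# List sockets by TCP state (ss accepts the state names described above):
ss -tan state established   # connections past the three-way handshake
ss -tan state time-wait     # active closers waiting out 2MSL
ss -tan state close-wait    # peers that received a FIN but have not yet closed
```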

4. Briefly describe the differences between TCP and UDP

1. TCP is connection-oriented; UDP is connectionless.
2. TCP demands more system resources; UDP demands fewer.
3. TCP is a byte stream; UDP is message-oriented (datagrams).
4. TCP is complex; UDP is simple.
5. TCP acknowledges and retransmits, giving a reliable connection; UDP does neither and is unreliable.

 

posted @ 2019-07-23 11:38 李卓航