Configuring RAID online with storcli/percli
Common storcli/percli scenarios
View help information
storcli64 help
View the number of controllers
storcli64 show ctrlcount
[root@SZVPN-2 ~]# storcli64 show ctrlcount
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Status Code = 0
Status = Success
Description = None
Controller Count = 1
Note:
There is only one controller, which corresponds to /c0.
A server typically has one or two controllers.
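When scripting against storcli, the controller count can be parsed out of this output. A minimal sketch, with the sample output above hard-coded for illustration (in practice you would capture `output="$(storcli64 show ctrlcount)"`):

```shell
# Sample output of `storcli64 show ctrlcount` (hard-coded here;
# capture it from the live system in real use)
output='CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Status Code = 0
Status = Success
Description = None
Controller Count = 1'

# Extract the number after "Controller Count = "
count=$(printf '%s\n' "$output" | awk -F'= ' '/Controller Count/ {print $2}')
echo "controllers: $count"
```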
Creating a RAID volume group online
storcli64 /c0/eall/sall show
[root@SZVPN-2 ~]# storcli64 /c0/eall/sall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.
Drive Information :
=================
------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
------------------------------------------------------------------------------
252:0 14 Onln 0 278.464 GB SAS HDD N N 512B ST9300603SS U -
252:1 21 Onln 0 278.464 GB SAS HDD N N 512B MK3001GRRB U -
252:2 20 Onln 1 557.861 GB SAS HDD N N 512B MBF2600RC U -
252:3 17 Onln 1 557.861 GB SAS HDD N N 512B MBF2600RC U -
252:4 18 Onln 1 557.861 GB SAS HDD N N 512B MBF2600RC U -
252:5 22 Onln 1 557.861 GB SAS HDD N N 512B MBF2600RC U -
252:6 23 Onln 1 557.861 GB SAS HDD N N 512B MBF2600RC U -
252:7 24 UGood - 557.861 GB SAS HDD N N 512B MBF2600RC U -
------------------------------------------------------------------------------
EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down/PowerSave|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded
Note:
Drive 252:7, i.e. the drive in slot 7, is a newly inserted drive with no RAID configuration; we will now create a RAID 0 on it.
Create a RAID 0 VD for 252:7:
storcli64 /c0 add vd r0 size=all drives=252:7 wb direct strip=128
Note:
r0 means RAID 0; by default in Ceph we build a single-disk RAID 0 per drive. Other RAID levels such as r1 and r5 are also available.
size=all: use all available space for this VD.
drives=252:7: the EID:Slt of the new drive. To build one VD from multiple drives, list the slots as 252:7,8,9 (or 252:7-9).
wb selects write-back mode; wt selects write-through mode.
direct selects DirectIO, which does not cache reads in the RAID card cache; the alternative, CacheIO, caches hot read data in the card cache.
strip=128 sets a 128 KB stripe size. It makes no difference for a single disk, but matters for multi-disk RAID 0; here, just keep it consistent with the other disks.
storcli64 /c0/vall show all
shows the strip size of the other VDs.
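For the Ceph-style one-RAID0-per-disk layout, the add command can be generated per slot. A dry-run sketch that only prints the commands (ENC and SLOTS are illustrative values; drop the echo, or pipe the output to sh, to actually run them):

```shell
# Dry-run: generate one single-disk RAID0 command per slot
ENC=252            # enclosure ID, from the /c0/eall/sall output
SLOTS="7 8 9"      # hypothetical slots holding new disks

cmds=$(for slot in $SLOTS; do
    echo "storcli64 /c0 add vd r0 size=all drives=${ENC}:${slot} wb direct strip=128"
done)
printf '%s\n' "$cmds"
```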
Once the VD has been added, it shows up in storcli64 /c0/vall show
[root@SZVPN-2 ~]# storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives :
==============
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
0/0 RAID1 Optl RW No RWBD - ON 278.464 GB
1/1 RAID5 Optl RW Yes RWBD - ON 2.178 TB
2/2 RAID0 Optl RW Yes RWBD - ON 557.861 GB
---------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
Note:
DG/VD 2/2 is the newly added VD.
It is now also visible to the OS via lsblk:
[root@SZVPN-2 ~]# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 278.5G  0 disk
├─sda1        8:1    0     1G  0 part /boot
└─sda2        8:2    0 277.5G  0 part
  ├─cl-root 253:0    0    50G  0 lvm  /
  ├─cl-swap 253:1    0     4G  0 lvm  [SWAP]
  └─cl-home 253:2    0 223.5G  0 lvm  /home
sdb           8:16   0   2.2T  0 disk
└─sdb1        8:17   0   2.2T  0 part /data
sdc           8:32   0 557.9G  0 disk
sdc is the new disk.
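When many disks are being added, the new (unpartitioned) disk can be found by scanning the lsblk device list. A sketch, with a simplified sample of `lsblk -n -o NAME,TYPE` hard-coded for illustration:

```shell
# Sample `lsblk -n -o NAME,TYPE` output (tree glyphs stripped);
# capture it from the live system in real use
lsblk_out='sda  disk
sda1 part
sda2 part
sdb  disk
sdb1 part
sdc  disk'

# A disk counts as "new" here if it has no partitions
new_disks=$(printf '%s\n' "$lsblk_out" | awk '
    $2 == "disk" { disks[$1] = 1 }
    $2 == "part" { sub(/[0-9]+$/, "", $1); delete disks[$1] }
    END { for (d in disks) print d }')
echo "$new_disks"
```

Note this is only a heuristic: a disk that is in use but unpartitioned (e.g. used whole-disk by LVM) would also match.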
Deleting a volume group
Use the command storcli64 /c0/vall show
to get information on all VDs.
[root@SZVPN-2 ~]# storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives :
==============
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
0/0 RAID1 Optl RW No RWBD - ON 278.464 GB
1/1 RAID5 Optl RW Yes RWBD - ON 2.178 TB
2/2 RAID0 Optl RW Yes RWBD - ON 557.861 GB
---------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
To delete VD2, first identify its OS device (here sdc) and confirm that neither it nor any of its partitions (sdc1, ...) is mounted or otherwise in use.
Delete the VD with: storcli64 /c0/v2 del force
After deletion the VD is gone:
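The "is it mounted?" check can be automated before issuing the destructive delete. A sketch, with sample /proc/mounts content hard-coded (read the real file in practice):

```shell
# Sample /proc/mounts content (hard-coded for illustration)
mounts='/dev/sda2 / xfs rw 0 0
/dev/sdb1 /data ext4 rw 0 0'

dev=sdc   # device backing the VD we intend to delete

# Refuse if the disk or any of its partitions appears in the mount list
if printf '%s\n' "$mounts" | grep -q "^/dev/${dev}"; then
    safe=no
else
    safe=yes
fi
echo "safe to delete: $safe"
```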
[root@SZVPN-2 ~]# storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives :
==============
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
0/0 RAID1 Optl RW No RWBD - ON 278.464 GB
1/1 RAID5 Optl RW Yes RWBD - ON 2.178 TB
---------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
Modifying VD properties
View the VD properties:
[root@SZVPN-2 ~]# storcli64 /c0/vall show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None
Virtual Drives :
==============
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
0/0 RAID1 Optl RW No RWBD - ON 278.464 GB
1/1 RAID5 Optl RW Yes RWBD - ON 2.178 TB
2/2 RAID0 Optl RW Yes RWBD - ON 557.861 GB
---------------------------------------------------------------
Cac=CacheCade|Rec=Recovery|OfLn=OffLine|Pdgd=Partially Degraded|Dgrd=Degraded
Optl=Optimal|RO=Read Only|RW=Read Write|HD=Hidden|TRANS=TransportReady|B=Blocked|
Consist=Consistent|R=Read Ahead Always|NR=No Read Ahead|WB=WriteBack|
AWB=Always WriteBack|WT=WriteThrough|C=Cached IO|D=Direct IO|sCC=Scheduled
Check Consistency
VD2's Cache attribute is RWBD, which, per the legend below the output, means Read Ahead, WriteBack, Direct IO.
Change VD2's cache policy to WriteThrough mode:
storcli64 /c0/v2 set wrcache=wt
Checking v2 again, the Cache column now reads RWTD.
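The Cache column is simply the three policy codes from the legend concatenated: read ahead (R|NR), write cache (WB|AWB|WT), and IO policy (C|D). A small sketch that decodes such a string:

```shell
# Decode a Cache column value (e.g. RWBD, NRWTD) into its three parts
decode_cache() {
    s=$1
    case $s in NR*) ra="No Read Ahead"; s=${s#NR} ;; R*) ra="Read Ahead"; s=${s#R} ;; esac
    case $s in AWB*) wc="Always WriteBack"; s=${s#AWB} ;; WB*) wc="WriteBack"; s=${s#WB} ;; WT*) wc="WriteThrough"; s=${s#WT} ;; esac
    case $s in C) io="Cached IO" ;; D) io="Direct IO" ;; esac
    echo "$ra, $wc, $io"
}

decode_cache RWBD    # Read Ahead, WriteBack, Direct IO
decode_cache NRWTD   # No Read Ahead, WriteThrough, Direct IO
```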
Note:
Help for set:
storcli64 /c0/v2 set help
[root@SZVPN-2 ~]# storcli64 /c0/v2 set help
Storage Command Line Tool Ver 007.0415.0000.0000 Feb 13, 2018
(c)Copyright 2018, AVAGO Technologies, All Rights Reserved.

storcli /cx/vx set ssdcaching=on|off
storcli /cx/vx set hidden=on|off
storcli /cx/vx set fshinting=<value>
storcli /cx/vx set emulationType=0|1|2
storcli /cx/vx set cbsize=0|1|2 cbmode=0|1|2|3|4|7
storcli /cx/vx set wrcache=WT|WB|AWB
storcli /cx/vx set rdcache=RA|NoRA
storcli /cx/vx set iopolicy=Cached|Direct
storcli /cx/vx set accesspolicy=RW|RO|Blocked|RmvBlkd
storcli /cx/vx set pdcache=On|Off|Default
storcli /cx/vx set name=<NameString>
storcli /cx/vx set HostAccess=ExclusiveAccess|SharedAccess
storcli /cx/vx set ds=Default|Auto|None|Max|MaxNoCache
storcli /cx/vx set autobgi=On|Off
storcli /cx/vx set pi=Off
storcli /cx/vx set bootdrive=<on|off>
All of these options can be configured this way.
Change VD2's read-ahead policy to NR mode:
storcli64 /c0/v2 set rdcache=NoRA
Checking v2 again, the Cache column now reads NRWTD.
Setting a drive to pass-through (JBOD) mode online
1. Confirm the RAID card supports JBOD mode, then enable it:
storcli64 /c0 show all |grep -i jbod
[root@SZVPN-2 ~]# storcli64 /c0 show all|grep -i jbod
Support JBOD = Yes
Support SecurityonJBOD = No
Support JBOD Write cache = No
Enable JBOD = No
Note:
Support JBOD = Yes means the RAID card supports JBOD mode,
but Enable JBOD = No means JBOD mode is currently disabled on the card, so it must be enabled manually:
[root@SZVPN-2 ~]# storcli64 /c0 set jbod=on
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = None

Controller Properties :
=====================

----------------
Ctrl_Prop Value
----------------
JBOD      ON
----------------
[root@SZVPN-2 ~]# storcli64 /c0 show all |grep -i jbod
Support JBOD = Yes
Support SecurityonJBOD = No
Support JBOD Write cache = No
Enable JBOD = Yes
Enable JBOD = Yes: JBOD mode is now enabled.
2. Set the specified drive to JBOD mode:
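The check-then-enable step above can be scripted by parsing the grep output. A sketch, with the sample output hard-coded (capture it live with `storcli64 /c0 show all | grep -i jbod` in practice):

```shell
# Sample JBOD capability output (hard-coded for illustration)
jbod_info='Support JBOD = Yes
Support SecurityonJBOD = No
Support JBOD Write cache = No
Enable JBOD = No'

support=$(printf '%s\n' "$jbod_info" | awk -F' = ' '/^Support JBOD =/ {print $2}')
enabled=$(printf '%s\n' "$jbod_info" | awk -F' = ' '/^Enable JBOD =/ {print $2}')

# Only enable JBOD when supported but not yet enabled (dry-run:
# we record the command instead of running it)
if [ "$support" = Yes ] && [ "$enabled" = No ]; then
    action="storcli64 /c0 set jbod=on"
else
    action="none"
fi
echo "$action"
```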
storcli64 /c0/e252/s7 set jbod
[root@SZVPN-2 ~]# storcli64 /c0/e252/s7 set jbod
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Set Drive JBOD Succeeded.
[root@SZVPN-2 ~]# storcli64 /c0/e252/s7 show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.
Drive Information :
=================
------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
------------------------------------------------------------------------------
252:7 24 JBOD - 557.861 GB SAS HDD N N 512B MBF2600RC U -
------------------------------------------------------------------------------
Note:
The drive is now in JBOD mode.
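When scripting, the State column of the drive table can be checked to confirm the change took effect. A sketch parsing the sample row from the output above:

```shell
# Sample drive table row from `storcli64 /c0/e252/s7 show` above
row='252:7 24 JBOD - 557.861 GB SAS HDD N N 512B MBF2600RC U -'

# State is the third whitespace-separated column (EID:Slt, DID, State, ...)
state=$(printf '%s\n' "$row" | awk '{print $3}')
echo "slot 7 state: $state"
```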
3. Change JBOD mode back to UGood mode:
To take the drive back out of JBOD mode: storcli64 /c0/e252/s7 set good force
[root@SZVPN-2 ~]# storcli64 /c0/e252/s7 set good force
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Set Drive Good Succeeded.
[root@SZVPN-2 ~]# storcli64 /c0/e252/s7 show
CLI Version = 007.0415.0000.0000 Feb 13, 2018
Operating system = Linux 3.10.0-862.11.6.el7.x86_64
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.
Drive Information :
=================
------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
------------------------------------------------------------------------------
252:7 24 UGood - 557.861 GB SAS HDD N N 512B MBF2600RC U -
------------------------------------------------------------------------------
Note:
The drive is back in the UGood state and can be used in a new RAID volume group.