RAID was configured on all of the HP machines before the OS was installed, with a uniform layout: 2 × 480 GB SSD, 12 × 4 TB SATA, and 2 × 1.6 TB SSD. The two 480 GB SSDs serve as the system disks and are set up as RAID 1.
The disk controller is known to be:
[root@192e168e100e27 yum.repos.d]# hpssacli ctrl slot=3 ld all show

Smart HBA H240 in Slot 3

   array A

      logicaldrive 1 (447.1 GB, RAID 1, OK)

[root@192e168e100e27 yum.repos.d]# hpssacli ctrl slot=3 pd all show status

   physicaldrive 1I:4:1 (port 1I:box 4:bay 1, 480.1 GB): OK
   physicaldrive 1I:4:2 (port 1I:box 4:bay 2, 480.1 GB): OK
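The controller name and physical drive IDs in the output above are exactly the fields needed for target_raid_config. A small sketch (assuming hpssacli output shaped like the above, captured into a variable here for illustration) of pulling them out:

```shell
# Sample output, taken from the hpssacli commands above.
out='Smart HBA H240 in Slot 3
array A
logicaldrive 1 (447.1 GB, RAID 1, OK)
physicaldrive 1I:4:1 (port 1I:box 4:bay 1, 480.1 GB): OK
physicaldrive 1I:4:2 (port 1I:box 4:bay 2, 480.1 GB): OK'

# The first line is the controller name used in target_raid_config.
controller=$(printf '%s\n' "$out" | head -n1)
# The second field of each "physicaldrive" line is a drive ID for physical_disks.
disks=$(printf '%s\n' "$out" | awk '/^physicaldrive/ {print $2}')

echo "controller: $controller"
echo "disks: $disks"
```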
The target_raid_config can then be set as:
{
  "logical_disks": [
    {
      "size_gb": "MAX",
      "raid_level": "1",
      "controller": "Smart HBA H240 in Slot 3",
      "physical_disks": ["1I:4:1", "1I:4:2"],
      "is_root_volume": true
    }
  ]
}
Save the JSON to raid.json and apply it to the node:

openstack baremetal node set <node-uuid-or-name> --target-raid-config raid.json
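A minimal sketch of preparing raid.json (the filename matches the command above; python3 is assumed to be available for a JSON syntax check before handing the file to ironic):

```shell
# Write the target RAID configuration from above to raid.json.
cat > raid.json <<'EOF'
{
  "logical_disks": [
    {
      "size_gb": "MAX",
      "raid_level": "1",
      "controller": "Smart HBA H240 in Slot 3",
      "physical_disks": ["1I:4:1", "1I:4:2"],
      "is_root_volume": true
    }
  ]
}
EOF

# Fail fast on malformed JSON before it ever reaches ironic.
python3 -m json.tool raid.json > /dev/null && echo "raid.json OK"
```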
Create a JSON file containing the RAID clean steps to be executed during manual cleaning; add other clean steps as needed:
[
  {"interface": "raid", "step": "delete_configuration"},
  {"interface": "raid", "step": "create_configuration"}
]
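The clean steps above can be saved as clean.json (the filename the clean command below expects) and syntax-checked the same way, a sketch assuming python3 is available:

```shell
# Write the RAID clean steps to clean.json: delete the old configuration,
# then create the new one from the node's target_raid_config.
cat > clean.json <<'EOF'
[
  {"interface": "raid", "step": "delete_configuration"},
  {"interface": "raid", "step": "create_configuration"}
]
EOF

# Validate the JSON before passing it to the clean command.
python3 -m json.tool clean.json > /dev/null && echo "clean.json OK"
```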
Move the node to the manageable state, then run the cleaning:
openstack baremetal node clean <node-uuid-or-name> --clean-steps clean.json
After manual cleaning completes, the current RAID configuration can be viewed with:
openstack baremetal node show <node-uuid-or-name>
RAID can also be configured in the nova flavor for scheduling: set the raid_level capability on the flavor, and it will be used as a selection criterion at scheduling time:
nova flavor-key my-baremetal-flavor set capabilities:raid_level="1"
nova boot --flavor ironic-test --image test-image instance-1
But that is easier said than done.