
Part 1: System installation and setup

1.1 System installation

1. Build a template host named template; every host used later is cloned from this template.

System installation requirements

Install in VMware Workstation and enable the corresponding virtualization option for the VM's CPU (screenshots omitted).

  • Remove unused hardware devices such as the printer and sound card. The vxlan and Ceph cluster NICs can be added after the OS has been installed and initialized.

  • OS image: CentOS-7-x86_64-DVD-2009.iso

  • Time zone: Asia/Shanghai

  • Language support: Chinese

  • Software selection: Development and Creative Workstation

  • Additional Development

  • Compatibility Libraries

  • Development Tools

  • Hardware Monitoring Utilities

  • Large Systems Performance

  • Platform Development

  • System Administration Tools

Partition the disk as planned (screenshot omitted).

1.2 System initialization settings

1. Set the hostname and /etc/hosts

hostnamectl set-hostname template
cat >>/etc/hosts <<'EOF'
# openstack
192.168.10.20 controller
192.168.10.31 compute01
192.168.10.32 compute02
# ceph-public
192.168.10.11 ceph01
192.168.10.12 ceph02
192.168.10.13 ceph03
# ceph-cluster
172.16.10.11 ceph01-cluster
172.16.10.12 ceph02-cluster
172.16.10.13 ceph03-cluster
EOF

2. Configure the IP address and disable NetworkManager
Per the plan, the first NIC of each host carries the Provider IP (NAT network, with Internet access)

cat  > /etc/sysconfig/network-scripts/ifcfg-ens32 <<'EOF'
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV6INIT=no
NAME=ens32
DEVICE=ens32
ONBOOT=yes
# IP address; change it per host
IPADDR=192.168.10.200
NETMASK=255.255.255.0
GATEWAY=192.168.10.254
DNS1=223.5.5.5
EOF

3. Switch the yum repositories to the Aliyun mirror

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache

4. Disable the firewall and SELinux

systemctl restart network
systemctl stop NetworkManager && systemctl disable NetworkManager
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

5. Tune kernel parameters and resource limits (file handles, processes)

cat  >> /etc/sysctl.conf <<EOF
## default 1
net.ipv4.tcp_syncookies = 1
## default 0
net.ipv4.tcp_tw_reuse = 1
## default 0
net.ipv4.tcp_tw_recycle = 1
## default 60
net.ipv4.tcp_fin_timeout = 30
## default 256
net.ipv4.tcp_max_syn_backlog = 4096
## default 32768 60999
net.ipv4.ip_local_port_range = 1024 65535
## default 128
net.core.somaxconn = 32768
EOF
cat  >> /etc/security/limits.conf <<EOF
* hard nofile 655360
* soft nofile 655360
* hard nproc 655360
* soft nproc 655360
* soft core 655360
* hard core 655360
EOF
sed -i 's/4096/unlimited/g' /etc/security/limits.d/20-nproc.conf
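
The parameters above are only picked up after a reload (or reboot), and the nofile/nproc limits apply to new login sessions. A minimal check, assuming the files edited above:

sysctl -p
sysctl net.ipv4.tcp_fin_timeout net.core.somaxconn
# log in again, then confirm the new limits
ulimit -n
ulimit -u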


6. Enable nested virtualization so that VMs can themselves run virtualization

cat /sys/module/kvm_amd/parameters/nested     # AMD CPUs
cat /sys/module/kvm_intel/parameters/nested   # Intel CPUs
cat  > /etc/modprobe.d/kvm-nested.conf <<EOF
options kvm_amd nested=1 ept=0 unrestricted_guest=0
EOF
rmmod kvm_amd
modprobe kvm_amd ept=0 unrestricted_guest=0
cat /sys/module/kvm_amd/parameters/nested
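
The block above covers an AMD host (kvm_amd). On an Intel host the analogous steps would be (a sketch; kvm_intel only needs the nested option):

cat > /etc/modprobe.d/kvm-nested.conf <<EOF
options kvm_intel nested=1
EOF
rmmod kvm_intel
modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested   # should now report 1 or Y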

7. Install the OpenStack client

yum -y install python-openstackclient

1.3 Clone the first VM

The cloning steps are as follows (screenshot omitted):
Right-click the template host and choose Manage --> Clone.

1.4 After cloning, start the cloned VM and change its hostname and IP

Using the controller host as an example:

hostnamectl set-hostname controller
sed -i 's/10.200/10.20/g' /etc/sysconfig/network-scripts/ifcfg-ens32
systemctl restart network
ping 192.168.10.254

1.5 Add NICs and disks

Per the plan, add a disk to compute01 and compute02, and add two NICs each to controller, compute01 and compute02. The controller is used as the example here.

Edit the controller VM and add a network adapter (screenshot omitted).

Add a LAN segment, i.e. a custom network segment (similar to a hub or bridge). Alternatively, add a virtual network; VMware supports up to 20 virtual networks globally (VMnet0-VMnet19). (Screenshots omitted.)

Repeat the steps above to add the two NICs to compute01 and compute02.
Add one 200 GB disk each to compute01 and compute02 (used for Cinder storage); the compute01 screenshot is omitted here.

After verifying the steps above, start controller, compute01 and compute02 and proceed to the OpenStack deployment.

Part 2: OpenStack base environment installation

1. NIC IP configuration

Configure the second and third NIC IPs on controller, compute01 and compute02.

Run dhclient on every node, then use ip a to check the NIC names (screenshot omitted).

Configure the ens33 and ens34 NIC IPs, using controller as the example:

cat  > /etc/sysconfig/network-scripts/ifcfg-ens33 <<EOF
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV6INIT=no
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=10.10.10.20
NETMASK=255.255.255.0
EOF
cat  > /etc/sysconfig/network-scripts/ifcfg-ens34 <<EOF
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV6INIT=no
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=172.16.10.20
NETMASK=255.255.255.0
EOF

Restart the network service

systemctl restart network

2. NTP installation and configuration

1. Unify the time zone; run on all nodes (controller, compute01, compute02)

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Note: skip this if the correct time zone was already chosen during installation.

2. Install chrony on the control node (controller) and use the controller as the NTP server; it is usually installed by default.

yum -y install chrony

Edit /etc/chrony.conf
Point it at an upstream time source; the Aliyun NTP server is used here. With no Internet access, point it at the local hostname instead.

sed -i 's/0.centos.pool.ntp.org/ntp1.aliyun.com/g' /etc/chrony.conf
sed -i '4,6d' /etc/chrony.conf
sed -i 's\#allow 192.168.0.0/16\allow all\' /etc/chrony.conf
sed -i 's\#local stratum 10\local stratum 10\' /etc/chrony.conf
# start the service
systemctl restart chronyd && systemctl enable chronyd.service
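
An optional check that the controller is syncing from the upstream source (ntp1.aliyun.com here):

chronyc sources -v
chronyc tracking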

3. Install chrony on the compute nodes (e.g. compute01) and use the controller as their NTP server; it is usually installed by default.

yum -y install chrony

On compute and any other nodes, remove the unused default server entries and point the server at the controller node.

sed -i '3,6d' /etc/chrony.conf
sed -i '3a\server controller iburst\' /etc/chrony.conf

# start the service
systemctl enable chronyd.service
systemctl restart chronyd.service

# check sync status; seeing controller in the output means success
chronyc sources -v

3. Install the OpenStack repository

yum install centos-release-openstack-train  -y
yum install python-openstackclient openstack-utils openstack-selinux -y

4. MariaDB installation and configuration

1. Install the database
Perform the following on the control node (controller).

yum -y install mariadb mariadb-server python2-PyMySQL

2. Configure the database

cat > /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 8192
collation-server = utf8_general_ci
character-set-server = utf8
EOF

3. Edit /usr/lib/systemd/system/mariadb.service and add the following under the [Service] section:

sed -i '33a\LimitNOFILE=65535\' /usr/lib/systemd/system/mariadb.service
sed -i '34a\LimitNPROC=65535\' /usr/lib/systemd/system/mariadb.service

4. Enable the service at boot

systemctl daemon-reload
systemctl enable mariadb.service
systemctl start mariadb.service

5. Initialize the database (optional).

# secure the installation and set the root password
mysql_secure_installation			
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
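
An optional check that the root password works and the databases are reachable:

mysql -uroot -p -e "SHOW DATABASES;"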

5. RabbitMQ installation and configuration

1. Install RabbitMQ
Perform the following on the control node (controller)

yum -y install rabbitmq-server

2. Configure RabbitMQ: raise the default open-file limit

sed -i '12a\LimitNOFILE=32768\' /usr/lib/systemd/system/rabbitmq-server.service

3. Enable the service at boot

systemctl daemon-reload
systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service

4. Add the openstack user in RabbitMQ and grant it administrator privileges.

rabbitmqctl add_user openstack openstack
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
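
A quick confirmation that the user and its permissions were created:

rabbitmqctl list_users
rabbitmqctl list_permissions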

The user can also be added from the web console.
Note: the password chosen for the openstack user must not contain the "#" character.

5. Enable the rabbitmq_management plugin to open the web console

rabbitmq-plugins enable rabbitmq_management

6. Verify by logging in

Console login: http://controller:15672
guest/guest (default account)
openstack/openstack

6. Memcached installation and configuration

1. Install memcached
Perform the following on the control node (controller)

yum -y install memcached python-memcached

2. Modify the configuration

sed -i 's\OPTIONS="-l 127.0.0.1,::1"\OPTIONS="-l 127.0.0.1,::1,controller"\' /etc/sysconfig/memcached
sed -i 's\CACHESIZE="64"\CACHESIZE="256"\' /etc/sysconfig/memcached
sed -i 's\MAXCONN="1024"\MAXCONN="4096"\' /etc/sysconfig/memcached

Mind the listen address; it can also be set to 192.168.10.20.

3. Enable the service at boot

systemctl enable memcached.service
systemctl restart memcached.service

4. Verify

~]# systemctl status memcached
● memcached.service - memcached daemon
Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-12-28 15:59:20 CST; 26s ago
Main PID: 7451 (memcached)
Tasks: 10
CGroup: /system.slice/memcached.service
└─7451 /usr/bin/memcached -p 11211 -u memcached -m 64 -c 4096 -l 0.0.0.0,::1
Dec 28 15:59:20 controller systemd[1]: Started memcached daemon.
~]# netstat -atnp |grep 11211
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN 7451/memcached
tcp6 0 0 ::1:11211 :::* LISTEN 7451/memcached

7. etcd installation and configuration (optional)

1. Install etcd
Perform the following on the control node (controller)

yum -y install etcd

2. Configure etcd

sed -i 's\#ETCD_LISTEN_PEER_URLS="http://localhost:2380"\ETCD_LISTEN_PEER_URLS="http://192.168.10.20:2380"\' /etc/etcd/etcd.conf
sed -i 's\ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"\ETCD_LISTEN_CLIENT_URLS="http://192.168.10.20:2379"\' /etc/etcd/etcd.conf
sed -i 's\ETCD_NAME="default"\ETCD_NAME="controller"\' /etc/etcd/etcd.conf
sed -i 's\#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"\ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.20:2380"\' /etc/etcd/etcd.conf
sed -i 's\ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"\ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.20:2379"\' /etc/etcd/etcd.conf
sed -i 's\#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"\ETCD_INITIAL_CLUSTER="controller=http://192.168.10.20:2380"\' /etc/etcd/etcd.conf
sed -i 's\#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"\ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"\' /etc/etcd/etcd.conf
sed -i 's\#ETCD_INITIAL_CLUSTER_STATE="new"\ETCD_INITIAL_CLUSTER_STATE="new"\' /etc/etcd/etcd.conf

3. Enable the service at boot

systemctl enable etcd
systemctl start etcd

8. Install the OpenStack client

1. Make sure the OpenStack client is installed on every node

yum -y install python-openstackclient

It was already installed on the template host, so this step can be skipped here.

Part 3: OpenStack service components installation

1. Install Keystone (control node)

Keystone is OpenStack's Identity service. Authentication, authorization and role management in OpenStack are all handled by Keystone, which also provides service catalog registration. It is a critical service and the first one installed when deploying OpenStack.

When configuring Keystone, you create the appropriate roles, services, tenants, user accounts and service API endpoints.

Keystone has a few concepts worth understanding: tenants, roles and users.

  • A tenant corresponds to a project. It owns resources such as users, images, instances, and networks visible only to that project (unless "shared" was checked when the network was created).
  • A user can belong to one or more projects (tenants) and can switch between them. An OpenStack deployment needs at least the admin and service projects.
  • The service project is special: OpenStack's own services are all configured under it, which improves security.
  • A user can be assigned several roles, i.e. a user may belong to multiple projects with a different role in each. For example, user1 is in both project1 and project2, as admin in project1 and as user in project2 (see the sketch after this list).
  • OpenStack ships with only 4 default roles: admin (manages the cloud), member (regular cloud user), reader and user.
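
As a concrete sketch of the multi-role point above (the project and user names here are only illustrative, not part of this deployment):

# user1 is admin in project1 but an ordinary user in project2
openstack role add --project project1 --user user1 admin
openstack role add --project project2 --user user1 user
# list user1's assignments to confirm
openstack role assignment list --user user1 --names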

The Keystone service runs under httpd, so httpd must be restarted after configuration changes.

1. Log in to the database and create the keystone database

~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE keystone default character set utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstack';

2. Install the Keystone packages

yum -y install openstack-keystone httpd mod_wsgi

3. Modify the configuration file

sed -i 's\#connection = <None>\connection = mysql+pymysql://keystone:openstack@controller/keystone\' /etc/keystone/keystone.conf
sed -i '2475a\provider = fernet\' /etc/keystone/keystone.conf
sed -i '2476a\expiration = 86400\' /etc/keystone/keystone.conf

4. Populate the Identity service's initial data into the keystone database

su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the Identity service (this creates the endpoints)

keystone-manage bootstrap --bootstrap-password openstack \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

2. Configure the Apache service (control node controller)

1. Edit the Apache configuration file

sed -i '/#ServerName www.example.com:80/a\ServerName controller:80\' /etc/httpd/conf/httpd.conf

2. Create the symlink

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3. Start the Apache HTTP service and enable it at boot

systemctl enable httpd.service
systemctl restart httpd.service

4. Configure the admin account and create projects, domains, users and roles

cat > /etc/keystone/admin-openrc.sh <<EOF
#!/bin/bash
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source /etc/keystone/admin-openrc.sh

5. Create the service project, the user role and domains; the default domain already exists

~]# openstack domain list
+---------+---------+---------+--------------------+
| ID | Name | Enabled | Description |
+---------+---------+---------+--------------------+
| default | Default | True | The default domain |
+---------+---------+---------+--------------------+
openstack domain create --description "An Example Domain" example
openstack domain list
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
openstack role list

In OpenStack a tenant is a project. A project (tenant) must exist before users are created, together with a role that can be assigned to them; only then does creating a user make sense.
Create the service project (tenant). It acts as OpenStack's system project, and all system services are added to it.

~]# openstack project create --domain default --description "Service Project" service
~]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 856a6d41ac044e889118cd96706b8847 | admin |
| c60e5165c7ed4f17b171a6d3f2187505 | service |
+----------------------------------+---------+

Create the user role

~]# openstack role create user
~]# openstack role list
+----------------------------------+--------+
| ID | Name |
+----------------------------------+--------+
| 569e6552f6044d21bd333af3f041da54 | reader |
| a75928a0f41849819c3f6c58da497c3c | member |
| baa06ee9bb124c50b692f7169be2594f | admin |
| d1d6bed4c52146468736255155a50b6a | user |
+----------------------------------+--------+

6. Verify Keystone

source /etc/keystone/admin-openrc.sh
openstack token issue

Unset the environment variables and verify again

unset OS_AUTH_URL OS_PASSWORD

Request an authentication token as the admin user.

openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default \
--os-user-domain-name Default \
--os-project-name admin \
--os-username admin token issue

Enter the admin password: openstack

3. Install Placement (control node)

Placement tracks and monitors the usage of the various resources in OpenStack: compute, storage, network and so on, keeping an up-to-date record of how much of each resource is in use.
The Placement service was introduced inside the nova tree in the 14.0.0 Newton release and split out into its own placement project in the 19.0.0 Stein release, i.e. it became a standalone component in Stein.
Placement provides a REST API stack and data model for tracking the inventory and usage of resource providers, which can be compute nodes, shared storage pools, IP pools and so on. For example, creating an instance consumes CPU and memory on a compute node, storage on a storage node, and an IP on a network node; each consumed resource is tracked by its resource class. Placement ships a set of standard resource classes (such as DISK_GB, MEMORY_MB and VCPU) and lets you define custom resource classes as needed.
The Placement service runs under httpd, so httpd must be restarted after configuration changes.

1. Log in to the database and create the placement database

~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE placement default character set utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'openstack';

2. Create the user and API endpoints

source /etc/keystone/admin-openrc.sh
openstack user create --domain default --password openstack placement

Add the placement user to the service project with the admin role

openstack role add --project service --user placement admin

Create the service entity

openstack service create --name placement --description "Placement API" placement

Create the Placement API endpoints

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

3. Install and configure Placement

yum -y install openstack-placement-api

https://docs.openstack.org/api-ref/placement/?expanded=

Modify /etc/placement/placement.conf

openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:openstack@controller/placement
openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url  http://controller:5000/v3
openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type  password
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name  default
openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name  default
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name  service
openstack-config --set /etc/placement/placement.conf keystone_authtoken username  placement
openstack-config --set /etc/placement/placement.conf keystone_authtoken password  openstack

Modify /etc/httpd/conf.d/00-placement-api.conf

cat >>/etc/httpd/conf.d/00-placement-api.conf <<EOF
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
EOF

4. Sync the placement database

su -s /bin/sh -c "placement-manage db sync" placement

Note: warnings printed during the sync can be ignored.

5. Restart httpd and verify Placement

systemctl restart httpd

Run placement-status to check the upgrade status

~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+

Install pip (skip this and use the offline method if there is no Internet access)

yum install -y epel-release
yum install -y python-pip
rm -rf /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo

Configure a domestic pip mirror

cat > /etc/pip.conf <<EOF
[global]
index-url = https://pypi.douban.com/simple/
[install]
trusted-host = https://pypi.douban.com
EOF

Install the osc-placement plugin

pip install osc-placement==2.2.0

Offline installation:
On the yum01 host, download the package with pip download osc-placement==2.2.0 and place it under /var/www/html/yumrepos/pip.

yum install -y python-pip
wget http://yum01/yumrepos/pip/osc-placement-2.2.0.tar.gz
pip install osc-placement-2.2.0.tar.gz

List the available resource classes and traits.

openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
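
With osc-placement installed you can also inspect resource providers and their inventories (these only appear once compute nodes have registered with Placement); a sketch, with the UUID left as a placeholder:

openstack resource provider list
openstack resource provider inventory list <provider-uuid>
openstack resource provider usage show <provider-uuid>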

4. Install, configure and verify Glance (control node)

The OpenStack Image service, Glance, lets users upload, export, modify and delete virtual machine images. It can store image files on various back ends such as the local filesystem, the Swift object store, or distributed storage such as Ceph.
The Image service consists of two sub-services, glance-api and glance-registry. glance-api exposes the external API and talks to the other services; glance-registry manages the objects stored on disk or in the glance database.
After configuration changes, restart openstack-glance-api.service and/or openstack-glance-registry.service.

1. Log in to the database and create the glance database

~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE glance default character set utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'openstack';

2. Create the user and API endpoints

source /etc/keystone/admin-openrc.sh
openstack user create --domain default --password openstack glance

Add the glance user to the service project with the admin role

openstack role add --project service --user glance admin

Create the glance service entity

openstack service create --name glance --description "OpenStack Image" image

Create the Glance API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

3. Install and configure Glance

yum -y install openstack-glance

Modify /etc/glance/glance-api.conf

openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
openstack-config --set /etc/glance/glance-api.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:openstack@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password openstack
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores  file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

Modify /etc/glance/glance-registry.conf

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:openstack@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password openstack
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

4. Sync the glance database

su -s /bin/sh -c "glance-manage db_sync" glance

5. Start the Glance services and enable them at boot

systemctl enable openstack-glance-api.service openstack-glance-registry.service && systemctl start openstack-glance-api.service openstack-glance-registry.service

6. Verify the Glance service

source /etc/keystone/admin-openrc.sh

Download the cirros test image to upload to Glance

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

Upload the image to Glance

openstack image create "cirros-0.4.0-x86_64" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public

Images can be private or public. A private image can only be used by its uploader; to make an image public, just add the --public flag.

Confirm the uploaded image and its properties.

openstack image list
glance image-list
openstack image show ${image_id}
glance image-show ${image_id}

Set an image as public

openstack image set cirros-0.4.0-x86_64 --public

Delete an image

openstack image delete ${image_name}

5. Install and verify Nova (compute nodes + control node)

Nova is OpenStack's Compute service. The whole life cycle of virtual machine instances, including creation, scheduling and deletion, is managed by Nova.
Nova is made up of several components: nova-api, nova-conductor, nova-scheduler, nova-novncproxy and nova-compute.
nova-scheduler: maps nova-api requests to the host that will run the instance, making scheduling decisions based on CPU architecture, availability zone, memory, load and so on.
nova-api: exposes the external API for managing the infrastructure, e.g. starting and stopping instances.
nova-conductor: sits between nova-compute and the database; it exists mainly for security, so that nova-compute never talks to the database directly.
nova-novncproxy: provides console access, letting end users reach instance consoles over VNC; it must be stopped if spice-server is used later.
nova-compute: manages instance life cycles, receiving requests from the message queue and carrying out the work.
In short, Nova is a core component with many sub-modules, and its configuration is correspondingly involved.

Install the Nova services on the control node (controller)

1. Log in to the database and create the nova, nova_api and nova_cell0 databases

~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE nova_api default character set utf8;
MariaDB [(none)]> CREATE DATABASE nova default character set utf8;
MariaDB [(none)]> CREATE DATABASE nova_cell0 default character set utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'openstack';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstack';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'openstack';

Note: the OpenStack Rocky release required creating a placement database here; since Stein it belongs to the standalone Placement component, so skip the following:
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'openstack';

2. Create the user and API endpoints

source /etc/keystone/admin-openrc.sh
openstack user create --domain default --password openstack nova

Add the nova user to the service project with the admin role

openstack role add --project service --user nova admin

Create the nova service entity

openstack service create --name nova --description "OpenStack Compute" compute

Create the nova API endpoints.

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Check with:

openstack endpoint list

Note: the OpenStack Rocky release required creating the placement user here; since Stein this is done with the standalone Placement component, so skip the following (check with openstack endpoint list).
Create the placement user and set its password:
openstack user create --domain default --password-prompt placement
Add the role:
openstack role add --project service --user placement admin
Create the Placement API service and endpoints:
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

3. Install and configure Nova

Install the packages

yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

Note:
the OpenStack Rocky release also installed Placement here; since Stein it is installed as a separate component, so skip:
yum -y install openstack-nova-placement-api

Modify /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.20
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host true
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:openstack@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:openstack@controller/nova
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf api token_cache_time 3600
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password openstack
openstack-config --set /etc/nova/nova.conf keystone_authtoken token_cache_time 3600
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen \$my_ip
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address \$my_ip
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc novncproxy_port 6080
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password openstack
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 180

4. Sync the nova databases and verify

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

Note: warnings can be ignored.
Verify that cell0 and cell1 are registered correctly.

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

5. Start the Nova services and enable them at boot

systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Install and configure Nova on the compute nodes

1. Install the packages

yum -y install openstack-nova-compute

2. Modify /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.31
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password openstack
openstack-config --set /etc/nova/nova.conf keystone_authtoken token_cache_time 3600
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address \$my_ip
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address \$my_ip
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password openstack
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 180
openstack-config --set /etc/nova/nova.conf libvirt hw_machine_type x86_64=pc-i440fx-rhel7.2.0
openstack-config --set /etc/nova/nova.conf libvirt cpu_mode host-passthrough

Start the compute service (and its dependencies) and enable it at boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

3. Verify Nova
Run the following verification steps on the control node.

Check the compute service components

openstack compute service list

List the registered compute nodes

openstack compute service list --service nova-compute

Host discovery. Run it every time a new compute host is added:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

List the API endpoints in the Identity service to verify connectivity.

 ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:5000/v3/     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

6. Install and configure Neutron (control node)

OpenStack networking is handled by an SDN (Software Defined Networking) component, Neutron. SDN is a pluggable architecture into which switches, firewalls, load balancers and so on can be plugged, all defined in software, giving fine-grained control over the whole cloud infrastructure.
Per the earlier plan, the ens32 interface serves as the external network (in OpenStack terms usually called the Provider network) and also as the management network, which is convenient for test access; in production the two should be separated. ens33 carries the tenant (vxlan) network and ens34 the Ceph cluster network.
OpenStack networking can be deployed with either OVS or Linux Bridge; Linux Bridge is used here, and the procedure is much the same either way.
Services to start on the control node:
neutron-server.service
neutron-linuxbridge-agent.service
neutron-dhcp-agent.service
neutron-metadata-agent.service
neutron-l3-agent.service

Install the Neutron services on the control node (controller) and configure vxlan support

1. Log in to the database and create the neutron database

~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE neutron default character set utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';

2. Create the user and API endpoints

source /etc/keystone/admin-openrc.sh

Create the neutron user with the password openstack

openstack user create --domain default --password openstack neutron

Note: --password openstack is the non-interactive form, whereas --password-prompt is interactive and asks you to type the password twice.

Add the neutron user to the service project with the admin role

openstack role add --project service --user neutron admin

Create the Neutron service entity.

openstack service create --name neutron --description "OpenStack Networking" network

Create the Neutron API endpoints.

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

3. Install and configure Neutron
Using the Provider + Linux Bridge mode

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Modify /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:openstack@controller/neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password openstack
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken token_cache_time 3600
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password openstack
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Modify the ML2 plugin file /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:3000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

Modify /etc/neutron/plugins/ml2/linuxbridge_agent.ini to configure the Linux bridge agent

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens32
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.10.10.20
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Note: local_ip must be the IP the control/compute node actually uses for vxlan.
The provider network here uses the ens32 interface; adjust to your environment. The provider network can be thought of as the network that reaches the outside world. When a Flat network is created later, --provider-physical-network must be set to provider.

Modify /etc/sysctl.conf so the kernel filters bridged traffic

cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Run the following to load the bridge filter module and make it load at boot

modprobe br_netfilter
cat > /etc/rc.sysinit <<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
cat > /etc/sysconfig/modules/br_netfilter.modules <<EOF
modprobe br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
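
With the module loaded, apply and verify the bridge sysctls added above:

sysctl -p
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables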

Modify /etc/neutron/dhcp_agent.ini to configure the DHCP agent

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

Modify /etc/neutron/metadata_agent.ini to configure the metadata agent

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret openstack

Modify /etc/neutron/l3_agent.ini to configure the layer-3 agent

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge

Configure the Compute service to use Neutron networking

Prerequisite: the nova services are already installed (yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler)

Modify /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password openstack
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret openstack

4. Create the ML2 plugin symlink (expected by the Networking service initialization)

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

5. Sync the neutron database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

6. Start the Neutron services and enable them at boot

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Install the Neutron services on the compute nodes and configure vxlan support

Note: if the controller node should also act as a compute node, perform these steps on the controller as well.

1. Install the Neutron components

yum install openstack-neutron-linuxbridge ebtables ipset -y

2. Modify /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password openstack
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken token_cache_time 3600
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

3. Modify /etc/neutron/plugins/ml2/linuxbridge_agent.ini
The Linux bridge agent builds the layer-2 virtual network for instances, handles security-group rules, and maps the Flat network to the external physical interface.

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens32
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.10.10.31
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True

Note: local_ip must be the IP the compute node actually uses for vxlan.
The provider network here uses the ens32 interface; adjust to your environment. The provider network can be thought of as the network that reaches the outside world; --provider-physical-network must be set to provider when Flat networks are created later.

4. Modify /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password openstack

Modify /etc/sysctl.conf so the kernel filters bridged traffic

cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Run the following to load the bridge filter module and make it load at boot

modprobe br_netfilter
cat > /etc/rc.sysinit <<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
cat > /etc/sysconfig/modules/br_netfilter.modules <<EOF
modprobe br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/br_netfilter.modules

5. Start neutron-linuxbridge-agent and enable it at boot

systemctl enable neutron-linuxbridge-agent.service 
systemctl restart neutron-linuxbridge-agent.service

Verify Neutron
Run the following verification steps on the control node.

source /etc/keystone/admin-openrc.sh

1. List the Neutron agents that started successfully

 ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 78cf5fd3-3330-47fe-876d-25cdd8a18c29 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 79563b4f-99b6-4ca9-97f1-a972d970e3d6 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 9ca07dd3-b66a-4a17-abff-75df0c173bf6 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ea4500c8-4fbb-4f4d-bcd5-608e410f1d38 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| f5fe54c0-1662-4877-8ee5-873ea299b1a4 | Linux bridge agent | compute01  | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

2. Create a Flat network

openstack network create --share --external --provider-network-type flat public --provider-physical-network provider


Note: linuxbridge_agent.ini sets physical_interface_mappings = provider:ens32, so the physical network of this Flat network, --provider-physical-network, must be specified as provider.

Create a subnet
Create a subnet under the Flat network named public

openstack subnet create --network public \
--allocation-pool start=192.168.10.100,end=192.168.10.240 \
--dns-nameserver 223.5.5.5 \
--gateway 192.168.10.254 \
--subnet-range 192.168.10.0/24 flat_subnet


Create a port (a fixed IP on the flat_subnet subnet; the port name port01 below is just an illustrative choice)

openstack port create --network public --fixed-ip subnet=flat_subnet,ip-address=192.168.10.110 port01


List networks

openstack network list

List subnets

openstack subnet list

List ports

openstack port list

Delete the network

openstack port delete port01
openstack subnet delete flat_subnet
openstack network delete public

Note: deletion must follow the order ports under the subnet --> subnet --> network.

7. Install and configure Horizon

Horizon is the OpenStack Dashboard. It provides a web-based console and runs as a web service under httpd.

Install the Horizon service on the control node (controller)

1. Install the package

yum -y install openstack-dashboard

2. Modify /etc/openstack-dashboard/local_settings

a. Configure the WEBROOT path and session timeout

sed -i "\$a\WEBROOT = '/dashboard/'" /etc/openstack-dashboard/local_settings
sed -i '$a\SESSION_TIMEOUT = 86400\' /etc/openstack-dashboard/local_settings

b. Change OPENSTACK_HOST to "controller"; if it stays 127.0.0.1 the dashboard is only reachable from the local host

sed -i 's\OPENSTACK_HOST = "127.0.0.1"\OPENSTACK_HOST = "controller"\' /etc/openstack-dashboard/local_settings

c. Allow access from all hosts; mind the format (a space after each comma in the list)

ALLOWED_HOSTS = ['*']

d. Configure memcached session storage; comment out any other session storage configuration and mind the format.

# configure memcached session storage; add the following to the file:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
# enable Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# enable domain support:
sed -i '$a\OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True\' /etc/openstack-dashboard/local_settings

# configure the API versions; add the following at the end of the file:
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

# set Default as the default domain for users created via the dashboard:
sed -i '$a\OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"\' /etc/openstack-dashboard/local_settings

# set user as the default role for users created via the dashboard:
sed -i '$a\OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"\' /etc/openstack-dashboard/local_settings

# (optional) configure the time zone:
sed -i 's\TIME_ZONE = "UTC"\TIME_ZONE = "Asia/Shanghai"\' /etc/openstack-dashboard/local_settings

j. Enable support for layer-3 networking services.

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_ipv6': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': True,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}

k. Enable the volume backup feature; this requires a back end such as swift or ceph that supports distributed object storage

OPENSTACK_CINDER_FEATURES = {
    'enable_backup': True,
}

3. Modify /etc/httpd/conf.d/openstack-dashboard.conf

sed -i '$a\WSGIApplicationGroup %{GLOBAL}\' /etc/httpd/conf.d/openstack-dashboard.conf

4. Change the owner and group of /usr/share/openstack-dashboard/ to apache

chown -R apache:apache /usr/share/openstack-dashboard/

5. Restart the httpd and memcached services

systemctl restart httpd.service memcached.service
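
Horizon should now be reachable at http://controller/dashboard/ (the WEBROOT configured above); a quick reachability check from any host that resolves controller:

curl -I http://controller/dashboard/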

8. Install and verify Cinder

Cinder, the Block Storage service, provides virtual disks (volumes) to instances, similar to AWS EBS (Elastic Block Storage); the difference lies in how volumes are exposed to instances. In this deployment, LVM volume groups (VGs) are managed and exposed over iSCSI, so every storage node used by Cinder has a VG.
On the control node, install openstack-cinder; the services to run are cinder-api and cinder-scheduler. After configuration, start openstack-cinder-api.service and openstack-cinder-scheduler.service.
On the storage nodes, install openstack-cinder, lvm2, device-mapper-persistent-data, targetcli and python-keystone.
On the storage nodes the running service is cinder-volume, plus the LVM and target services it depends on. After configuration, start lvm2-lvmetad.service, openstack-cinder-volume.service and target.service.

Note: cinder-api receives API requests and puts the processed messages on the message queue; cinder-scheduler consumes them, picks a storage node with its scheduling algorithm and puts a message back on the queue; cinder-volume picks up the scheduler's message and performs the operation on the volume. cinder-volume manages the volume life cycle and runs on the storage nodes.
Without the Cinder service installed, the OpenStack Dashboard cannot offer volume operations.
The control node and storage nodes can share the same cinder.conf, provided that openstack-cinder-volume.service is not started on a node that lacks the corresponding VG; otherwise it fails with a missing-VG error.
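
A simple check for whether a node actually has the volume group before enabling openstack-cinder-volume (cinder-volumes is the VG name configured for the lvm back end later):

vgs cinder-volumes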

Install and configure the Cinder block storage service on the control node (controller)

1. Log in to the database and create the cinder database

~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE cinder default character set utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';

2. Create the user and API endpoints

source /etc/keystone/admin-openrc.sh

Create the cinder user with the password openstack

openstack user create --domain default --password openstack cinder

Add the cinder user to the service project with the admin role

openstack role add --project service --user cinder admin

Create the cinderv2 and cinderv3 service entities

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the Block Storage API endpoints

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

3. Install the Cinder package and modify the configuration

yum -y install openstack-cinder

Modify /etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.10.20
openstack-config --set /etc/cinder/cinder.conf DEFAULT default_volume_type hdd
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:openstack@controller/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password openstack
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

Modify /etc/nova/nova.conf so the Compute service can use block storage

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

4. Sync the cinder database

su -s /bin/sh -c "cinder-manage db sync" cinder

Note: ignore output containing "Deprecated: Option...".
5. Restart the services and enable them at boot
Restart the nova-api service

systemctl restart openstack-nova-api.service

Start cinder-api and cinder-scheduler and enable them at boot.

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service 
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

6. Verify cinder-scheduler on the control node; a State of up is healthy

 ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2022-03-20T07:43:46.000000 |
+------------------+------------+------+---------+-------+----------------------------+

Install and configure the Cinder service on the storage nodes (the compute nodes here)

On the storage nodes the running service is cinder-volume, together with the LVM and target services it depends on.
After configuration, the services to start are lvm2-lvmetad.service, openstack-cinder-volume.service and target.service.
1. Install and configure LVM on the storage nodes and enable it at boot

yum -y install lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

2. Each compute node was given one 200 GB disk for volume storage, which appears as /dev/sdb

Create a partition (interactive parted session; screenshot omitted)

 ~]# parted /dev/sdb


Create the PV

pvcreate /dev/sdb1

Create the VG, named cinder-volumes

vgcreate cinder-volumes /dev/sdb1

3. Modify /etc/lvm/lvm.conf

vi /etc/lvm/lvm.conf

In the devices section, add a filter that accepts the /dev/sdb device (plus the OS disk /dev/sda, which also uses LVM here) and rejects all other devices:

devices {
        filter = [ "a/sda/", "a/sdb/", "r/.*/"]
}

Note: each item in the filter array starts with "a" (accept) or "r" (reject). If the storage node uses LVM on its operating system disk, the associated system device must also be added to the filter; likewise, if a compute node uses LVM on its OS disk, its /etc/lvm/lvm.conf filter must include the OS disk. For example, if /dev/sda holds the operating system, "sda" must be added to the filter.
Reference: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder-storage-install.html

4. Install and configure the storage node
Note: the storage node's cinder.conf is a superset of the control node's; if the control node is also a storage node, the storage node's cinder.conf can be used directly. Install the packages:

yum -y install openstack-cinder targetcli python-keystone

Modify /etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.10.31
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:openstack@controller/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password openstack
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm target_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm target_helper lioadm
openstack-config --set /etc/cinder/cinder.conf lvm volume_backend_name cinder-volumes
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

5. Start the cinder-volume and target services and enable them at boot

systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service

6. Verify from the control node (controller)
List the storage service components; a State of up for each is healthy

~]# openstack volume service list
+------------------+---------------+------+---------+-------+----------------------------+
| Binary           | Host          | Zone | Status  | State | Updated At                 |
+------------------+---------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller    | nova | enabled | up    | 2022-03-20T08:19:27.000000 |
| cinder-volume    | compute01@lvm | nova | enabled | up    | 2022-03-20T08:19:29.000000 |
+------------------+---------------+------+---------+-------+----------------------------+

Common Cinder commands
a. Create a volume

openstack volume create --size 1 volume01

This creates a 1 GB volume named volume01.
b. List volumes

openstack volume list

c. Create a snapshot of a volume

openstack volume snapshot create --volume volume01 snap-volume01

This creates a snapshot named snap-volume01 of volume volume01.
d. List snapshots

openstack volume snapshot list

e. Attach a volume to an instance

nova volume-attach vm01 ead55f47-a0f3-4eb9-854e-9dff638ff534 /dev/vdb

When attaching, specify the volume ID (here the ID of volume01); the volume is attached to instance vm01 as /dev/vdb.
Detach a volume from an instance

nova volume-detach vm01 ead55f47-a0f3-4eb9-854e-9dff638ff534

Before detaching, stop any application inside the guest OS that is using the volume and umount it, then run volume-detach.
Delete a volume

openstack volume snapshot delete snap-volume01
openstack volume delete volume01

Two conditions must hold before a volume can be deleted: its status is available (not in-use), and it has no snapshots; delete any snapshots first.

openstack volume show volume01

Disable a storage node

openstack volume service set --disable compute02@lvm cinder-volume 

Enable a storage node

openstack volume service set --enable compute02@lvm cinder-volume