OpenStack Train Installation and Configuration

Disclaimer

This guide works up to the point of creating an instance; instance creation then fails with the following error:
Expecting to find domain in user. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error
If you happen to need the Train release, this guide can serve as a reference, but you may still need to chase down this last issue yourself (note: this error usually means a user domain is missing from an auth request, e.g. in nova.conf's [neutron] section; see the note there below).


Hardware configuration

192.168.50.133 (external, bridged) / 192.168.1.133 (internal, custom network) / 4 CPU / 16 GB RAM / 100 GB disk / controller
192.168.50.134 (external, bridged) / 192.168.1.134 (internal, custom network) / 4 CPU / 16 GB RAM / 100 GB disk / compute
192.168.1.135 (internal, custom network) / 4 CPU / 16 GB RAM / 100 GB disk / block

Initial setup (run on all nodes)

# Set the hostname (run the matching command on its node)
hostnamectl set-hostname controller && reboot
hostnamectl set-hostname compute && reboot
hostnamectl set-hostname block && reboot
# Configure hosts
vim /etc/hosts
192.168.50.133 controller
192.168.50.134 compute
192.168.50.135 block
# Disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
setenforce 0 # apply immediately; the sed above only takes effect after a reboot
# Install the time service
On the controller node:
yum -y install chrony
vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com iburst
systemctl restart chronyd && systemctl enable chronyd
# On the other nodes
yum -y install chrony
vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.50.133 iburst
systemctl restart chronyd && systemctl enable chronyd
Verify on each node
chronyc sources
# Configure the OpenStack repository on each node
vim /etc/yum.repos.d/openstack.repo
[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.aliyun.com/centos/$releasever/os/$basearch/
gpgcheck=0
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
gpgcheck=0
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
gpgcheck=0
# Copy the repo file to the other hosts
scp /etc/yum.repos.d/openstack.repo root@192.168.50.xxx:/etc/yum.repos.d/
yum clean all && yum makecache
# Start installing on each node
yum -y install centos-release-openstack-train
yum -y upgrade
yum -y install python-openstackclient
yum -y install openstack-selinux

Run on the controller node only

Install the SQL database

yum -y install mariadb mariadb-server python2-PyMySQL
cp /etc/my.cnf.d/openstack.cnf{,.bak}
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.50.133 # use whichever address /etc/hosts resolves controller to
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
systemctl start mariadb && systemctl enable mariadb
mysql_secure_installation
Set the password to: Aa1122**

Install the message queue

yum -y install rabbitmq-server
systemctl start rabbitmq-server && systemctl enable rabbitmq-server
# Add the openstack user
rabbitmqctl add_user openstack RABBIT_PASS
# Grant the openstack user configure, write, and read permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
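As an optional sanity check, the standard rabbitmqctl queries can confirm that the user and its permissions were created:
rabbitmqctl list_users
rabbitmqctl list_permissions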

Install memcached

yum -y install memcached python-memcached
cp /etc/sysconfig/memcached{,.bak}
sed -i 's/OPTIONS="-l 127.0.0.1,::1"/OPTIONS="-l 127.0.0.1,::1,controller"/' /etc/sysconfig/memcached
systemctl start memcached.service && systemctl enable memcached.service
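As an optional check, the memcached-tool script that ships with the memcached package can query the daemon's stats:
memcached-tool controller:11211 stats | head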

Install etcd

yum -y install etcd
vim /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.133:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.133:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.133:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.1.133:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl start etcd && systemctl enable etcd
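As an optional check that etcd is answering on the client port (a sketch; the etcdctl shipped with CentOS 7 defaults to the v2 API):
etcdctl --endpoints http://192.168.1.133:2379 member list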

Install the Identity service (codename keystone) on the controller node

mysql -uroot -p
create database keystone;
grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONE_DBPASS';
grant all privileges on keystone.* to 'keystone'@'%' identified by 'KEYSTONE_DBPASS';
# Install and configure the components
yum -y install openstack-keystone httpd mod_wsgi
vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = fernet
# Populate the Identity service database
su -s /bin/sh -c "keystone-manage db_sync" keystone
# Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# Bootstrap the Identity service
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
# Configure the Apache service
vim /etc/httpd/conf/httpd.conf
ServerName controller # add this line if it is not present
....
....
<Directory />
AllowOverride none
Require all denied # change denied to granted
</Directory>
Note: if this is not changed to granted, any instance that fails during creation cannot be deleted (neither from the dashboard nor the CLI). This fix comes from: https://blog.csdn.net/qq_19007335/article/details/107568713
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl restart httpd && systemctl enable httpd
# Configure the administrative account by setting the appropriate environment variables
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
# Create a domain, projects, users, and roles
Note: the default domain exists out of the box; run the command below only if you need another domain
openstack domain create --description "An Example Domain" example
# List domains
openstack domain list
# Create the service project
openstack project create --domain default --description "Service Project" service
# List all registered services
openstack service list
# Verify
## Unset the OS_AUTH_URL and OS_PASSWORD environment variables
unset OS_AUTH_URL OS_PASSWORD
## As the admin user, request an authentication token
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Note: the password is the ADMIN_PASS defined above
# Create the OpenStack client environment script
echo "export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2" > admin-openrc
# Load the script
source admin-openrc
# Request an authentication token
openstack token issue

Install the Image service (codename glance) on the controller node

mysql -u root -p
create database glance;
grant all privileges on glance.* to 'glance'@'localhost' identified by 'GLANCE_DBPASS';
grant all privileges on glance.* to 'glance'@'%' identified by 'GLANCE_DBPASS';
# Load the script
source admin-openrc
# Create the service credentials
## Create the glance user
openstack user create --domain default --password GLANCE_PASS glance
# Add the admin role to the glance user in the service project
openstack role add --project service --user glance admin
# Create the glance service entity
openstack service create --name glance --description "OpenStack Image" image
# Create the Image service API endpoints
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
# Install and configure the components
yum -y install openstack-glance
vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
# Populate the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl start openstack-glance-api && systemctl enable openstack-glance-api
# Verify
source admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
# Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it
glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare --visibility public
# Download the Ubuntu 21.10 image from the public internet
wget http://ftp.sjtu.edu.cn/ubuntu-cd/21.10/ubuntu-21.10-desktop-amd64.iso
# Upload the Ubuntu 21.10 image to OpenStack
glance image-create --name "ubuntu-21.10-desktop-amd64" \
--file ./ubuntu-21.10-desktop-amd64.iso \
--disk-format iso --container-format bare --visibility public
# List images
openstack image list

Install the Placement service on the controller node

# Load the script
source admin-openrc
# Log in to the database
mysql -u root -p
create database placement;
grant all privileges on placement.* to 'placement'@'localhost' identified by 'PLACEMENT_DBPASS';
grant all privileges on placement.* to 'placement'@'%' identified by 'PLACEMENT_DBPASS';
openstack user create --domain default --password PLACEMENT_PASS placement
# Add the placement user to the service project with the admin role
openstack role add --project service --user placement admin
# Create the Placement API entry in the service catalog
openstack service create --name placement --description "Placement API" placement
# Create the Placement API service endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
# Install and configure the components
yum -y install openstack-placement-api
vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
# Populate the placement database
su -s /bin/sh -c "placement-manage db sync" placement
Note: ignore any messages this prints
# Verify
source admin-openrc
# Run the status checks to make sure everything is in order
placement-status upgrade check
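Optionally, if the osc-placement CLI plug-in is installed (it is not part of the base install; e.g. pip install osc-placement), resource classes and traits can be listed as a further check:
openstack --os-placement-api-version 1.2 resource class list
openstack --os-placement-api-version 1.6 trait list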

Compute service (codename nova)

Install the Compute service on the controller node

mysql -u root -p
create database nova_api;
create database nova;
create database nova_cell0;
grant all privileges on nova_api.* TO 'nova'@'localhost' identified by 'NOVA_DBPASS';
grant all privileges on nova_api.* TO 'nova'@'%' identified by 'NOVA_DBPASS';
grant all privileges on nova.* TO 'nova'@'localhost' identified by 'NOVA_DBPASS';
grant all privileges on nova.* TO 'nova'@'%' identified by 'NOVA_DBPASS';
grant all privileges on nova_cell0.* TO 'nova'@'localhost' identified by 'NOVA_DBPASS';
grant all privileges on nova_cell0.* TO 'nova'@'%' identified by 'NOVA_DBPASS';
exit
source admin-openrc
# Create the Compute service credentials
## Create the nova user
openstack user create --domain default --password NOVA_PASS nova
# Add the admin role to the nova user
openstack role add --project service --user nova admin
# Create the nova service entity
openstack service create --name nova --description "OpenStack Compute" compute
# Create the Compute API service endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# Install and configure the components
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller # connection to RabbitMQ
my_ip = 192.168.1.133 # internal IP of the controller node
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
auth_type = password
auth_url = http://controller:5000/v3
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
region_name = RegionOne
# How often (in seconds) to run nova-manage cell_v2 discover_hosts to register newly discovered compute nodes
[scheduler]
discover_hosts_in_cells_interval = 300
# Populate the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
# Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# Populate the nova database
su -s /bin/sh -c "nova-manage db sync" nova
Note: this may print some messages; afterwards it is worth checking that the nova database actually contains tables
# Verify that nova cell0 and cell1 are registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
# Start the Compute services and enable them at boot
systemctl start \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl enable \
openstack-nova-api.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

Install the Compute service on the compute node

yum -y install openstack-nova-compute
vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 192.168.1.134 # internal IP of the compute node
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
auth_type = password
auth_url = http://controller:5000/v3
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
region_name = RegionOne
# Determine whether the compute node supports hardware acceleration for virtual machines
egrep -c '(vmx|svm)' /proc/cpuinfo
Note:
If the command returns 1 or more, the node supports hardware acceleration and no extra configuration is needed.
If it returns zero, the node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.
[root@compute ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu # use qemu when running inside a VM; use kvm on physical hardware!
# Start the services
systemctl start libvirtd openstack-nova-compute
systemctl enable libvirtd openstack-nova-compute
systemctl status libvirtd openstack-nova-compute
# Add the compute node to the cell database (back on the controller node)
source admin-openrc
openstack compute service list --service nova-compute
# Discover compute hosts (run on the controller node)
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
The output looks like this:
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': bc775592-3458-43be-b664-6e0775a07a1e
Checking host mapping for compute host 'compute': 795d4810-e3a0-4f9e-af5b-94443a681f03
Creating host mapping for compute host 'compute': 795d4810-e3a0-4f9e-af5b-94443a681f03
Found 1 unmapped computes in cell: bc775592-3458-43be-b664-6e0775a07a1e

Install the Networking service (codename neutron) on the controller node

mysql -u root -p
create database neutron;
grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'NEUTRON_DBPASS';
grant all privileges on neutron.* to 'neutron'@'%' identified by 'NEUTRON_DBPASS';
source admin-openrc
# Create the service credentials
## Create the neutron user
openstack user create --domain default --password NEUTRON_PASS neutron
# Add the admin role to the neutron user
openstack role add --project service --user neutron admin
# Create the neutron service entity
openstack service create --name neutron --description "OpenStack Networking" network
# Create the Networking service API endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
## Networking Option 2: Self-service networks
# Install the components
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
# Configure the server component
vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
# In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses (these entries are newly added)
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# Add a new [nova] section; all of its entries are also new
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
# Configure the Modular Layer 2 (ML2) plug-in
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
# Add an [ml2] section
[ml2]
# In the [ml2] section, enable flat, VLAN, and VXLAN networks
type_drivers = flat,vlan,vxlan
# Enable VXLAN self-service networks
tenant_network_types = vxlan
# Enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
# Enable the port security extension driver
extension_drivers = port_security
# VXLAN network identifier (VNI) range; be sure to set it, and it must go in the [ml2_type_vxlan] section
[ml2_type_vxlan]
vni_ranges = 1:1000
# Configure the Linux bridge agent
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens192 # name of the external NIC
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.133 # internal NIC IP of the controller node
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
[root@controller ~]# modprobe br_netfilter
[root@controller ~]# echo "modprobe br_netfilter" >> /etc/rc.d/rc.local
[root@controller ~]# sysctl -p
# Configure the layer-3 agent
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
# Configure the DHCP agent
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
# Configure the metadata agent
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 407997e2017d7b2c8670 # generate your own with: openssl rand -hex 10
# Configure the Compute service to use the Networking service
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = 407997e2017d7b2c8670
auth_type = password
auth_url = http://controller:5000
project_name = service
project_domain_name = default
username = neutron
user_domain_name = default
password = NEUTRON_PASS
region_name = RegionOne
# The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# Populate the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# Restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
# Start the Networking services and enable them at boot
[root@controller ~]# systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
[root@controller ~]# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
[root@controller ~]# systemctl status neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# If you chose networking option 2, also start the layer-3 service
[root@controller ~]# systemctl start neutron-l3-agent.service
[root@controller ~]# systemctl enable neutron-l3-agent.service
[root@controller ~]# systemctl status neutron-l3-agent.service -l
Problem 1:
cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
Fix:
[root@controller ~]# yum -y install libibverbs
[root@controller ~]# systemctl restart neutron-l3-agent.service
[root@controller ~]# systemctl status neutron-l3-agent.service -l

Install and configure the compute node

[root@compute ~]# yum -y install openstack-neutron-linuxbridge ebtables ipset
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone # newly added
transport_url = rabbit://openstack:RABBIT_PASS@controller
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# Configure networking option 2: self-service networks
## Configure the Linux bridge agent
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens192 # name of the external NIC on the compute node
## Add a [vxlan] section: enable VXLAN overlay networks, set the IP address of the physical interface that handles overlay traffic, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.134 # internal IP of the compute node, on the same overlay network as the controller's local_ip
l2_population = true
# Add a [securitygroup] section: enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# Make sure the Linux kernel supports network bridge filters
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
[root@compute ~]# modprobe br_netfilter
[root@compute ~]# echo "modprobe br_netfilter" >> /etc/rc.d/rc.local
[root@compute ~]# sysctl -p
# Configure the Compute service to use the Networking service
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
auth_type = password
auth_url = http://controller:5000
project_name = service
project_domain_name = default
username = neutron
user_domain_name = default # required; a missing user_domain_name here is a likely cause of the 'Expecting to find domain in user' error quoted in the disclaimer
password = NEUTRON_PASS
region_name = RegionOne
# Restart the Compute service
[root@compute ~]#
systemctl restart openstack-nova-compute.service
systemctl start neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service
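Back on the controller node, the agents can now be verified; the listing should include the DHCP, metadata, and L3 agents on the controller plus a Linux bridge agent on both the controller and the compute node:
source admin-openrc
openstack network agent list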

Dashboard service (codename horizon)

Run on the controller node

yum -y install openstack-dashboard
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "127.0.0.1"
change to
OPENSTACK_HOST = "controller"
# Enable version 3 of the Identity API
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
ALLOWED_HOSTS = ['horizon.example.com', 'localhost']
change to
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
change to
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
},
}
change to
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
},
}
# Enable support for domains (newly added)
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure the API versions (newly added)
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
# Configure Default as the default domain for users created via the dashboard (newly added)
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Configure user as the default role for users created via the dashboard (newly added)
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# Configure the time zone
TIME_ZONE = "Asia/Shanghai"
# Check whether the config file contains the following line; if not, add it manually
[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
# Restart the web server and session storage
[root@controller ~]# systemctl restart httpd memcached
[root@controller ~]# systemctl status httpd memcached
# Verify in a browser (it will not open yet; the fix is below)
http://192.168.50.133/dashboard
[root@controller ~]# cat admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default # login domain
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin # login username
export OS_PASSWORD=ADMIN_PASS # login password
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Note: at this point you still cannot log in, and the official docs do not offer a fix. Here is the workaround:
[root@controller ~]# cd /usr/share/openstack-dashboard/
# Rebuild the dashboard configuration
[root@controller openstack-dashboard]# cp -R /etc/httpd/conf.d/openstack-dashboard.conf{,.bak}
[root@controller openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
# Logging in to the dashboard now hits permission errors and a broken layout; create a symlink for the policy files
[root@controller openstack-dashboard]# ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
# Append the web root to the end of local_settings
[root@controller openstack-dashboard]# vim /etc/openstack-dashboard/local_settings
WEBROOT = '/dashboard/' # newly added
[root@controller openstack-dashboard]# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
change to
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static
# Restart to apply
[root@controller openstack-dashboard]# systemctl restart httpd memcached
[root@controller openstack-dashboard]# systemctl status httpd memcached
# Verify in the browser again
http://192.168.50.133/dashboard


-------------------------------------------------

The host-creation steps below are unchanged; only the IPs differ.

IPs of the environment above:
192.168.50.133 (external, bridged) / 192.168.1.133 (internal, custom network) / 4 CPU / 16 GB RAM / 100 GB disk / controller
192.168.50.134 (external, bridged) / 192.168.1.134 (internal, custom network) / 4 CPU / 16 GB RAM / 100 GB disk / compute
192.168.1.135 (internal, NAT network) / 4 CPU / 16 GB RAM / 100 GB disk / block

How the environment in the screenshots below maps to the one above:
controller node:
192.168.50.133 (external, bridged) / 192.168.1.133 (internal, custom network)
maps to
192.168.1.111 (external, bridged) / 172.16.186.131/24 (internal, NAT network)

compute node:
192.168.50.134 (external, bridged) / 192.168.1.134 (internal, NAT network)
maps to
192.168.1.112/24 (external, bridged) / 172.16.186.132/24 (internal, NAT network)

block node:
192.168.1.135/24
maps to
172.16.186.133/24

Steps before creating a VM

The VM below is created with its own separate domain, users, and everything else.

# Create the domain
[root@controller ~]# openstack domain create 210Demo
# Create the projects
[root@controller ~]#
openstack project create --domain 210Demo Engineering
openstack project create --domain 210Demo Production
# Other project commands for reference
openstack project list
openstack project delete <project-id | project-name>
# Create the users
[root@controller ~]#
openstack user create --domain 210Demo --project Engineering --password redhat --email zhangsan@lab.example.com zhangsan
openstack user create --domain 210Demo --project Production --password redhat --email lisi@lab.example.com lisi
# Assign roles to the zhangsan user
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID | Name |
+----------------------------------+--------+
| b644d263f4264ae9b569f89f5ea07522 | reader |
| c528c456159e4d9a91f9c0ae18438905 | member |
| f844d50f331d41ac8f7d61479340ede2 | admin |
+----------------------------------+--------+
# Give the zhangsan user the regular member role in the Engineering project
[root@controller ~]#
openstack role add --user zhangsan --user-domain 210Demo --project Engineering --project-domain 210Demo member
# Give the zhangsan user the admin role in the Engineering project
[root@controller ~]#
openstack role add --user zhangsan --user-domain 210Demo --project Engineering --project-domain 210Demo admin
# Give the lisi user the member and admin roles in the Production project
[root@controller ~]#
openstack role add --user lisi --user-domain 210Demo --project Production --project-domain 210Demo member
openstack role add --user lisi --user-domain 210Demo --project Production --project-domain 210Demo admin
# Create the Devops group and add all the users to it
[root@controller ~]#
openstack group create --domain 210Demo Devops
openstack group add user --group-domain 210Demo --user-domain 210Demo Devops zhangsan lisi
[root@controller ~]# openstack role assignment list --user-domain 210Demo --project Engineering --project-domain 210Demo --names
+--------+------------------+-------+---------------------+--------+--------+-----------+
| Role | User | Group | Project | Domain | System | Inherited |
+--------+------------------+-------+---------------------+--------+--------+-----------+
| member | zhangsan@210Demo | | Engineering@210Demo | | | False |
| admin | zhangsan@210Demo | | Engineering@210Demo | | | False |
+--------+------------------+-------+---------------------+--------+--------+-----------+
[root@controller ~]# openstack role assignment list --user-domain 210Demo --project Production --project-domain 210Demo --names
+--------+--------------+-------+--------------------+--------+--------+-----------+
| Role | User | Group | Project | Domain | System | Inherited |
+--------+--------------+-------+--------------------+--------+--------+-----------+
| member | lisi@210Demo | | Production@210Demo | | | False |
| admin | lisi@210Demo | | Production@210Demo | | | False |
+--------+--------------+-------+--------------------+--------+--------+-----------+
[root@controller ~]# cp admin-openrc zhangsanrc
[root@controller ~]# cat zhangsanrc
export OS_PROJECT_DOMAIN_NAME=210Demo # changed
export OS_PROJECT_NAME=Engineering # changed
export OS_USER_DOMAIN_NAME=210Demo # changed
export OS_USERNAME=zhangsan # changed
export OS_PASSWORD=redhat # changed
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# Switch to that user
[root@controller ~]# source zhangsanrc





Create a virtual machine

1. Create a public image named web
(screenshots omitted)
Note: if the image is created this way, all of the space used when installing the VM comes from that image; keep this in mind.
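For reference, a CLI equivalent of the dashboard steps (a sketch: the name web is taken from the screenshots, and the cirros file downloaded earlier is assumed as the source):
openstack image create --file ./cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --public web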

2. Create a public instance type named m1.petitle (called a flavor in Red Hat OpenStack)
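A possible CLI equivalent (the vCPU/RAM/disk sizes are assumptions; pick values that fit your images):
openstack flavor create --public --vcpus 1 --ram 512 --disk 1 m1.petitle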

3. Create a security group
(screenshots omitted)
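A CLI sketch of a typical security group (the name websg and the SSH/ICMP rules are assumptions based on common setups):
openstack security group create --description "web servers" websg
openstack security group rule create --proto tcp --dst-port 22 websg
openstack security group rule create --proto icmp websg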

4. Create a key pair for the Engineering project
(screenshot omitted)
Note: the generated key is downloaded to the machine you are working from, i.e. your local machine
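A CLI sketch (the key name mykey is an assumption); unlike the dashboard, this prints the private key to stdout, so redirect it to a file:
openstack keypair create mykey > mykey.pem
chmod 600 mykey.pem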

5. Configure the network
Create the internal network (screenshots omitted)
Create the external network (screenshots omitted)
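A CLI sketch of both networks (names, CIDRs, DNS server, and the allocation pool are assumptions; provider must match the physical_interface_mappings configured earlier, and the external network needs admin credentials):
# internal (self-service) network
openstack network create internal
openstack subnet create --network internal --subnet-range 10.0.0.0/24 \
  --gateway 10.0.0.1 --dns-nameserver 223.5.5.5 internal-subnet
# external (provider) network on the 192.168.50.0/24 segment
openstack network create --external --provider-physical-network provider \
  --provider-network-type flat external
openstack subnet create --network external --no-dhcp --subnet-range 192.168.50.0/24 \
  --gateway 192.168.50.1 --allocation-pool start=192.168.50.200,end=192.168.50.220 \
  external-subnet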

6. Create a router
(screenshots omitted)
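A CLI sketch (the router name router1 is an assumption): attach the internal subnet and set the external network as the gateway:
openstack router create router1
openstack router add subnet router1 internal-subnet
openstack router set --external-gateway external router1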
7. Create an instance
(screenshots omitted)
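A CLI sketch tying the pieces together (all names come from the sketches above and are assumptions):
openstack server create --flavor m1.petitle --image web \
  --security-group websg --key-name mykey --network internal web1
# watch the build progress
openstack server list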