Installing OpenStack Wallaby on openEuler
1. Base environment and OpenStack repository
If you are deploying on virtual machines, it is recommended to enable hardware virtualization for the guests.
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
echo "192.168.112.125 $HOSTNAME" >> /etc/hosts
yum list | grep -i openstack
yum -y install openstack-release-wallaby.noarch
yum update -y
2. Installing the base components
2.1. rabbitmq
yum -y install rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
rabbitmq-plugins enable rabbitmq_management
# create the rabbitmq user for openstack
rabbitmq_openstack_passwd=openstack_rabbitpasswd
rabbitmqctl add_user openstack $rabbitmq_openstack_passwd
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# verify
rabbitmqctl list_users
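A quick sanity check that the broker is up and the openstack user has permissions (a minimal sketch; 5672 is the AMQP port and 15672 the management UI):

ss -tlnp | grep -E '5672|15672'
rabbitmqctl list_permissions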
2.2. memcache
yum install -y memcached python3-memcached
sed -i 's/OPTIONS.*/OPTIONS="-l 127.0.0.1,controller"/' /etc/sysconfig/memcached
systemctl enable memcached && systemctl restart memcached
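Optionally confirm that memcached is listening (a sketch; memcached-tool usually ships with the memcached package, plain ss works too):

ss -tlnp | grep 11211
memcached-tool 127.0.0.1:11211 stats | head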
2.3. etcd
yum -y install etcd
cp -a /etc/etcd/etcd.conf{,.bak}
IP=$(grep $HOSTNAME /etc/hosts | awk '{print $1}')
cat > /etc/etcd/etcd.conf <<EOF_etcd
#[member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$IP:2380"
ETCD_LISTEN_CLIENT_URLS="http://$IP:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$IP:2380"
ETCD_INITIAL_CLUSTER="default=http://$IP:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://$IP:2379"
EOF_etcd
systemctl restart etcd && systemctl enable etcd
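A basic liveness check once etcd is running (a sketch; assumes an etcd 3.x client and that $IP is still set from above):

ETCDCTL_API=3 etcdctl --endpoints=http://$IP:2379 endpoint health
ETCDCTL_API=3 etcdctl --endpoints=http://$IP:2379 member list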
2.5. openstackclient
yum install -y python3-openstackclient openstack-selinux
2.6. Database
MySQL or MariaDB is the usual choice. Here I am using an existing Galera cluster (MariaDB). Building the database itself is out of scope; any database that can serve the OpenStack schemas will do, just pay attention to the configuration below.
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
echo "192.168.112.29 galera" >> /etc/hosts
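Before going further it is worth confirming that the controller can actually reach the database (a minimal check; assumes a mariadb/mysql client is installed and root_dbpasswd is the placeholder root password used throughout this guide):

mysql -h galera -u root -proot_dbpasswd -e "SELECT VERSION(); SHOW DATABASES;"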
2.7. Storage backend
Typically plain disks, Swift, or Ceph are used; here I am using Ceph.
3. Single-node OpenStack deployment
Note: example of enabling logging in each component. Setting debug = False suppresses debug output, and log_dir points at the component's log directory (nova in this example):
[DEFAULT]
debug = False
log_dir = /var/log/nova
3.1. keystone
Create the database
mysql -uroot -proot_dbpasswd -hgalera -e"
drop database if exists keystone;
create database keystone;
grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'keystone_dbpasswd';
grant all privileges on keystone.* to 'keystone'@'%' identified by 'keystone_dbpasswd';
flush privileges;
"
Install packages
yum install -y openstack-keystone httpd python3-mod_wsgi python3-PyMySQL
Configuration
In the [database] section, configure database access.
In the [token] section, configure the token provider.
cat > /etc/keystone/keystone.conf <<EOF_keystone
[database]
connection = mysql+pymysql://keystone:keystone_dbpasswd@galera/keystone
[token]
provider = fernet
EOF_keystone
Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet key repositories and bootstrap the Identity service
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password admin_passwd \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure environment variables
cat > /root/.bashrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=admin_passwd
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
Configure the Apache HTTP server
sed -i '/^ServerName/d' /etc/httpd/conf/httpd.conf
echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd && systemctl restart httpd
Create the service project; the default domain was already created by keystone-manage bootstrap.
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
Example: create a non-admin project myproject, a user myuser, and a role myrole, then assign myrole to myuser on myproject.
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole
Verify by requesting a token for the admin user
source ~/.bashrc
unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
3.2. glance
Create the database
mysql -u root -proot_dbpasswd -hgalera << EOF
drop database if exists glance;
create database glance;
grant all privileges on glance.* to 'glance'@'localhost' identified by 'glance_dbpasswd';
grant all privileges on glance.* to 'glance'@'%' identified by 'glance_dbpasswd';
flush privileges;
EOF
Create the service credentials
openstack user create --domain default --password glance_passwd glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
Create the Image service API endpoints
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install packages
yum install -y openstack-glance
Configuration
In the [database] section, configure database access.
In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access.
In the [glance_store] section, configure the local filesystem store and the image file location.
cat > /etc/glance/glance-api.conf <<EOF
[database]
connection = mysql+pymysql://glance:glance_dbpasswd@galera/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance_passwd
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOF
Sync the database
su -s /bin/sh -c "glance-manage db_sync" glance
Start the service
systemctl enable openstack-glance-api&&systemctl start openstack-glance-api
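As a quick functional test you can upload a small public image (a sketch; the CirrOS version and URL are only an example and require internet access and wget):

wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
openstack image create --disk-format qcow2 --container-format bare \
  --file cirros-0.5.2-x86_64-disk.img --public cirros
openstack image list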
3.3. placement
Create the database
mysql -uroot -proot_dbpasswd -hgalera -e"
drop database if exists placement;
create database placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement_dbpasswd';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement_dbpasswd';"
Create the placement user and the Placement API service
openstack user create --domain default --password placement_passwd placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
Create the Placement service API endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install packages
yum install -y openstack-placement-api
Configuration
In the [placement_database] section, configure database access.
In the [api] and [keystone_authtoken] sections, configure Identity service access.
cat > /etc/placement/placement.conf << EOF
[placement_database]
connection = mysql+pymysql://placement:placement_dbpasswd@galera/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement_passwd
EOF
Sync the database
su -s /bin/sh -c "placement-manage db sync" placement
Start the service
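The placement API runs as a WSGI application under Apache (see the 00-placement-api.conf below), so, as in the upstream install guide, restarting httpd is what actually brings it up:

systemctl restart httpd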
Verify: install osc-placement and list the available resource classes and traits
yum install python3-osc-placement
openstack --os-placement-api-version 1.2 resource class list --sort-column name
openstack --os-placement-api-version 1.6 trait list --sort-column name
Note: openEuler's OpenStack packaging has fixed the IfVersion bug in the placement Apache config, so there is no need to adjust /etc/httpd/conf.d/00-placement-api.conf manually.
cat /etc/httpd/conf.d/00-placement-api.conf

Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIScriptAlias / /usr/bin/placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/placement/placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
    <IfVersion < 2.4>
      Order allow,deny
      Allow from all
    </IfVersion>
  </Directory>
</VirtualHost>

Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
3.4. nova
Note: CTL marks steps for the controller node, CPT marks steps for the compute node.
Create the databases
mysql -uroot -proot_dbpasswd -hgalera << EOF
drop database if exists nova;
drop database if exists nova_api;
drop database if exists nova_cell0;
create database if not exists nova;
create database if not exists nova_api;
create database if not exists nova_cell0;
grant all privileges on nova_api.* to nova@'localhost' identified by 'nova_dbpasswd';
grant all privileges on nova_api.* to nova@'%' identified by 'nova_dbpasswd';
grant all privileges on nova.* to nova@'localhost' identified by 'nova_dbpasswd';
grant all privileges on nova.* to nova@'%' identified by 'nova_dbpasswd';
grant all privileges on nova_cell0.* to nova@'localhost' identified by 'nova_dbpasswd';
grant all privileges on nova_cell0.* to nova@'%' identified by 'nova_dbpasswd';
flush privileges;
EOF
Create the nova service credentials
openstack user create --domain default --password nova_passwd nova  # (CTL)
openstack role add --project service --user nova admin  # (CTL)
openstack service create --name nova --description "OpenStack Compute" compute  # (CTL)
Create the nova API endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1  #(CTL)
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1  #(CTL)
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1  #(CTL)
Install packages
CTL
yum install -y openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy openstack-nova-scheduler
CPT
yum install -y openstack-nova-compute
On ARM servers the compute node additionally needs edk2-aarch64:
CPT
yum install -y edk2-aarch64
Configuration
In the [DEFAULT] section, enable the compute and metadata APIs, configure RabbitMQ access, set my_ip, and enable the Networking service (neutron);
In the [api_database] and [database] sections, configure database access;
In the [api] and [keystone_authtoken] sections, configure Identity service access;
In the [vnc] section, enable and configure the remote console;
In the [glance] section, configure the Image service API address;
In the [oslo_concurrency] section, configure the lock path;
In the [placement] section, configure Placement service access.
METADATA_SECRET is the metadata proxy shared secret; replace it with your own value.
my_ip=$(grep $HOSTNAME /etc/hosts|awk '{print $1}')
cat > /etc/nova/nova.conf <<"EOF"
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack_rabbitpasswd@controller:5672/
my_ip = $my_ip
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver=libvirt.LibvirtDriver #(CPT)
instances_path = /var/lib/nova/instances/ #(CPT)
lock_path = /var/lib/nova/tmp #(CPT)
[api_database]
connection = mysql+pymysql://nova:nova_dbpasswd@galera/nova_api #(CTL)
[database]
connection = mysql+pymysql://nova:nova_dbpasswd@galera/nova #(CTL)
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova_passwd
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html #(CPT)
[libvirt]
#virt_type = qemu #(CPT)
virt_type = kvm #(CPT)
#cpu_mode = custom #(CPT)
#cpu_model = cortex-a72 #(CPT)
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp #(CTL)
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement_passwd
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_passwd
service_metadata_proxy = true #(CTL)
metadata_proxy_shared_secret = METADATA_SECRET #(CTL)
EOF
# The heredoc above is quoted ("EOF"), so $my_ip is written literally and oslo.config resolves it
# for the [vnc] options; crudini writes the real address into [DEFAULT], and the sed then strips
# the #(CTL)/#(CPT) annotations:
crudini --set /etc/nova/nova.conf DEFAULT my_ip $my_ip
sed -i 's/#.*//g' /etc/nova/nova.conf
Check whether the host supports hardware acceleration for virtual machines (x86):
egrep -c '(vmx|svm)' /proc/cpuinfo #(CPT)
If the command returns 0, the compute node does not support hardware acceleration and libvirt must be configured to use QEMU instead of KVM:
vim /etc/nova/nova.conf #(CPT)
[libvirt]
virt_type = qemu
If the compute node is arm64 and runs in QEMU mode, the following additional configuration is required:
vim /etc/libvirt/qemu.conf

nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", \
         "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]

vim /etc/qemu/firmware/edk2-aarch64.json

{
    "description": "UEFI firmware for ARM64 virtual machines",
    "interface-types": [
        "uefi"
    ],
    "mapping": {
        "device": "flash",
        "executable": {
            "filename": "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw",
            "format": "raw"
        },
        "nvram-template": {
            "filename": "/usr/share/edk2/aarch64/vars-template-pflash.raw",
            "format": "raw"
        }
    },
    "targets": [
        {
            "architecture": "aarch64",
            "machines": [
                "virt-*"
            ]
        }
    ],
    "features": [],
    "tags": []
}
#(CPT)
Sync the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova #(CTL)
Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova #(CTL)
Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova #(CTL)
Sync the nova database
su -s /bin/sh -c "nova-manage db sync" nova #(CTL)
Verify that cell0 and cell1 are registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova #(CTL)
Add the compute node to the OpenStack cluster
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova #(CPT)
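Compute nodes added later have to be discovered again; alternatively (as the upstream guide notes) a discovery interval can be set in nova.conf on the controller:

# /etc/nova/nova.conf (CTL)
[scheduler]
discover_hosts_in_cells_interval = 300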
Start the services
#(CTL)
systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl restart \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
#(CPT)
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
Verification
List the service components to verify that each process started and registered successfully:
openstack compute service list #(CTL)
List the API endpoints in the Identity service to verify connectivity with it:
openstack catalog list #(CTL)
List images in the Image service to verify connectivity with it:
openstack image list #(CTL)
Check that the cells are working and that the other prerequisites are in place:
nova-status upgrade check #(CTL)
3.5. neutron
Create the database
mysql -uroot -proot_dbpasswd -hgalera << "EOF"
drop database if exists neutron;
create database if not exists neutron;
grant all privileges on neutron.* to neutron@'localhost' identified by 'neutron_dbpasswd';
grant all privileges on neutron.* to neutron@'%' identified by 'neutron_dbpasswd';
EOF
Create the neutron service credentials
openstack user create --domain default --password neutron_passwd neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
Create the Networking service API endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install packages
#(CTL)
yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2
#(CPT)
yum install openstack-neutron-linuxbridge ebtables ipset
Configure the main service (neutron.conf)
In the [database] section, configure database access; (CTL)
In the [DEFAULT] section, enable the ML2 and router plug-ins, allow overlapping IPs, and configure RabbitMQ access (apart from transport_url and auth_strategy, these settings are specific to the CTL node);
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access;
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes; (CTL)
In the [oslo_concurrency] section, configure the lock path.
cat > /etc/neutron/neutron.conf <<EOF
[database]
connection = mysql+pymysql://neutron:neutron_dbpasswd@galera/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack_rabbitpasswd@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
api_workers = 3
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = neutron_passwd
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = nova_passwd
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF
Configure the ML2 plug-in
In the [ml2] section, enable flat, VLAN, and VXLAN networks, enable the linuxbridge and l2population mechanism drivers, and enable the port security extension driver;
In the [ml2_type_flat] section, configure the flat network as a provider virtual network;
In the [ml2_type_vxlan] section, configure the VXLAN network identifier range;
In the [securitygroup] section, enable ipset.
Note: the L2 details can be adapted to your own needs; this article uses provider networks with linuxbridge.
cat > /etc/neutron/plugins/ml2/ml2_conf.ini <<EOF
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
EOF
Configure the Linux bridge agent
In the [linux_bridge] section, map the provider virtual network to the physical network interface;
In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical interface that handles overlay traffic, and enable layer-2 population;
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver.
cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini <<EOF
[linux_bridge]
physical_interface_mappings = provider:ens192
[vxlan]
enable_vxlan = true
local_ip = 192.168.112.125
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF
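The upstream guide also requires that bridged traffic is passed to iptables on every node running the linuxbridge agent; a sketch of the usual setup:

modprobe br_netfilter
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p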
Configure the layer-3 agent
In the [DEFAULT] section, set the interface driver to linuxbridge:
cat > /etc/neutron/l3_agent.ini << EOF
[DEFAULT]
interface_driver = linuxbridge
EOF
Configure the DHCP agent
In the [DEFAULT] section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata:
cat > /etc/neutron/dhcp_agent.ini <<EOF
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
EOF
Configure the metadata agent
In the [DEFAULT] section, configure the metadata host and the shared secret.
Caution: replace METADATA_SECRET with a suitable metadata proxy secret.
cat > /etc/neutron/metadata_agent.ini << EOF
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
EOF
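If you need a value for METADATA_SECRET, one way to generate a random one (purely illustrative):

openssl rand -hex 16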
Configure nova: the [neutron] section of /etc/nova/nova.conf
In the [neutron] section, configure the access parameters, enable the metadata proxy, and configure the secret.
Note: metadata_proxy_shared_secret in the [neutron] section of /etc/nova/nova.conf on the CTL node must match metadata_proxy_shared_secret in the [DEFAULT] section of /etc/neutron/metadata_agent.ini.
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = neutron_passwd
service_metadata_proxy = true #(CTL)
metadata_proxy_shared_secret = METADATA_SECRET #(CTL)
Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service
systemctl restart openstack-nova-api.service
Start the Networking services
CTL
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service \
  neutron-l3-agent.service
systemctl restart openstack-nova-api.service neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service
CPT
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service
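To verify that all the agents registered successfully, run on the controller:

openstack network agent list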
3.6. cinder
Create the database
mysql -uroot -proot_dbpasswd -hgalera << 'EOF'
drop database if exists cinder;
create database cinder;
grant all privileges on cinder.* to cinder@localhost identified by 'cinder_dbpasswd';
grant all privileges on cinder.* to cinder@'%' identified by 'cinder_dbpasswd';
EOF
Create the cinder service credentials
openstack user create --domain default --password cinder_passwd cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
Create the Block Storage service API endpoints
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
Install packages
CTL (controller node):
yum install -y openstack-cinder-api openstack-cinder-scheduler
STG (storage node):
yum install -y lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \
  openstack-cinder-volume openstack-cinder-backup
Example of preparing a storage device
vgcreate cinder-volumes /dev/sdb
In the devices section, add a filter that accepts the /dev/sdb device and rejects everything else, so that LVM does not touch other disks (in particular the system disk). With this filter only sdb is visible to LVM and only sdb can be used for further LVM operations.
vim /etc/lvm/lvm.conf

devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]
}
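Note that the upstream guide runs pvcreate /dev/sdb before the vgcreate above; either way the LVM state can be confirmed with:

pvs
vgs cinder-volumes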
Prepare NFS
Pick any fixed node, for example the galera1 node.
The client address may be a hostname, an IP address, or a network segment; the wildcards "*" and "?" are allowed.
rw allows read-write access; ro means read-only.
sync: writes are committed to memory and disk synchronously.
no_root_squash: a client accessing as root keeps local root privileges (the default is root_squash).
root_squash: a client accessing the share as root is mapped to an anonymous user.
Other common options:
all_squash: all client users are mapped to an anonymous user or group.
async: data is first kept in a memory buffer and only written to disk when necessary.
subtree_check (default): if the exported directory is a subdirectory, the NFS server checks the permissions of its parent directory.
no_subtree_check: the parent directory permissions are not checked even for subdirectory exports, which improves performance.
# NFS mount example; the service mounts it automatically (command taken from the logs): mount -t nfs 192.168.112.24:/root/cinder/backup /var/lib/cinder/backup_mount/abe87fe67e3bf33285e7cdf2749eefe7
yum install nfs-utils rpcbind
mkdir -p /root/cinder/backup && chmod 777 /root/cinder/backup
cat << EOF >> /etc/exports
/root/cinder/backup 192.168.112.0/24(rw,sync,no_root_squash,no_all_squash)
EOF
systemctl restart rpcbind
systemctl restart nfs-server
systemctl enable rpcbind nfs-server nfs-utils
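To confirm the export is actually published (a quick check; showmount comes with nfs-utils):

exportfs -v
showmount -e 192.168.112.24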
Configure cinder
In the [database] section, configure database access;
In the [DEFAULT] section, configure RabbitMQ access and my_ip;
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access;
In the [oslo_concurrency] section, configure the lock path.
MY_IP=$(grep $HOSTNAME /etc/hosts|awk '{print $1}')
cat > /etc/cinder/cinder.conf << EOF
[DEFAULT]
transport_url = rabbit://openstack:openstack_rabbitpasswd@controller
auth_strategy = keystone
my_ip = $MY_IP
# the next three lines are STG-node settings
enabled_backends = lvm
backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver
backup_share=192.168.112.24:/root/cinder/backup
[database]
connection = mysql+pymysql://cinder:cinder_dbpasswd@galera/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = cinder_passwd
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
# the [lvm] section is an STG-node setting
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
EOF
Sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder
Configure nova (CTL node)
vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the Compute API service
systemctl restart openstack-nova-api.service
Note: when cinder attaches volumes via tgtadm, edit /etc/tgt/tgtd.conf as follows so that tgtd can discover the iSCSI targets created by cinder-volume.
vi /etc/tgt/tgtd.conf

include /var/lib/cinder/volumes/*
Start the cinder services
CTL node
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
STG node
systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \
  openstack-cinder-volume.service \
  openstack-cinder-backup.service
systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \
  openstack-cinder-volume.service \
  openstack-cinder-backup.service
Verification
openstack volume service list
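A simple end-to-end test is to create and remove a small volume (the volume name is arbitrary):

openstack volume create --size 1 test-vol
openstack volume list
openstack volume delete test-vol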
3.7. horizon
Install packages
yum install openstack-dashboard
Edit the configuration
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
cat /etc/httpd/conf.d/openstack-dashboard.conf

<VirtualHost *:80>
    ServerAdmin webmaster@openstack.org
    ServerName openstack_dashboard
    DocumentRoot /usr/share/openstack-dashboard/
    LogLevel warn
    ErrorLog /var/log/httpd/openstack_dashboard-error.log
    CustomLog /var/log/httpd/openstack_dashboard-access.log combined
    WSGIScriptReloading On
    WSGIDaemonProcess openstack_dashboard_website processes=17
    WSGIProcessGroup openstack_dashboard_website
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
    <Location "/">
        Require all granted
    </Location>
    Alias /static /usr/share/openstack-dashboard/static
    <Location "/static">
        SetHandler None
    </Location>
</VirtualHost>
Start the services
systemctl restart httpd.service memcached.service
If the web UI is unreachable, try the steps below: each time the configuration file changes, the following command has to be re-run. Unfortunately the dashboard code has a bug and the web login still reports an error, which is why I switched to a different dashboard further down.
cd /usr/share/openstack-dashboard/
python3 manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
Try the new dashboard (skyline) instead
mysql -uroot -proot_dbpasswd -hgalera <<EOF
CREATE DATABASE skyline DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'localhost' IDENTIFIED BY 'skyline_dbpasswd';
GRANT ALL PRIVILEGES ON skyline.* TO 'skyline'@'%' IDENTIFIED BY 'skyline_dbpasswd';
EOF
openstack user create --domain default --password skyline_passwd skyline
openstack role add --project service --user skyline admin
yum -y install docker
docker pull 99cloud/skyline:latest
#docker pull registry.cn-shanghai.aliyuncs.com/99cloud-sh/skyline:latest
mkdir -p /etc/skyline /var/log/skyline /var/lib/skyline /var/log/nginx
Note: the docker image does not resolve the host names used in this guide, so write IP addresses in the configuration file instead of host names.
Minimal configuration file example
cat /etc/skyline/skyline.yaml

default:
  database_url: mysql://skyline:skyline_dbpasswd@192.168.112.29:3306/skyline
  debug: true
  log_dir: /var/log/skyline
openstack:
  keystone_url: http://192.168.112.125:5000/v3/
  system_user_password: skyline_passwd
Start the service
docker run -d --name skyline_bootstrap -v /etc/skyline/skyline.yaml:/etc/skyline/skyline.yaml -v /var/log/skyline:/var/log/skyline --net=host 99cloud/skyline:latest
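If the container does not come up as expected, the container state and log are the first things to check:

docker ps -a
docker logs skyline_bootstrap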
Web access: https://IP:9999
If starting the service fails, it may be a problem with the skyline configuration file. In newer skyline images the bundled nginx configuration references two host names, controller and vip, that must resolve; either add them to the host's hosts file or use docker cp to edit the nginx configuration inside the container.
Example: stopping the service
Note: since the dashboard is the only service running in a container here, it can be stopped or removed freely; if you run multiple containers, be careful not to kill too many of them.
docker stop $(docker ps -q)
Example: starting the service again
docker start $(docker ps -aq)
Example: removing the service
docker rm $(docker ps -aq) -f
Reference: https://www.bookstack.cn/read/openeuler-21.09-zh/thirdparty_migration-OpenStack-wallaby.md?wd=memcached#Keystone%20%E5%AE%89%E8%A3%85