Building an OpenStack Cloud Platform

OpenStack: an open-source cloud computing management platform project


OpenStack is an open-source cloud computing management platform project, a combination of a series of open-source software projects. It was jointly initiated by NASA (the National Aeronautics and Space Administration) and Rackspace, and is licensed under the Apache License (a free-software license published by the Apache Software Foundation).

OpenStack provides scalable, elastic cloud computing services for both private and public clouds. The project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and built on unified standards.

OpenStack Overview

OpenStack is a cloud platform management project rather than a single piece of software; several major components work together to carry out concrete tasks. It is an open-source project that aims to provide software for building and managing public and private clouds. Its community spans more than 130 companies and over 1,350 developers, who use OpenStack as a common front end for Infrastructure-as-a-Service resources. The project's primary mission is to simplify cloud deployment and give it good scalability. This article aims to provide the guidance needed to set up and manage your own public or private cloud on top of OpenStack.

OpenStack was developed jointly by Rackspace and NASA to help service providers and enterprises build cloud infrastructure services (Infrastructure as a Service) similar to Amazon EC2 and S3. It originally comprised two main modules: Nova and Swift. The former is the virtual-server provisioning and compute module developed by NASA; the latter is the distributed cloud storage module developed by Rackspace. The two can be used together or independently. As an open-source project, OpenStack enjoys not only strong backing from Rackspace and NASA but also contributions and support from heavyweight companies including Dell, Citrix, Cisco, and Canonical. It has grown very quickly and looks set to displace another leading open-source cloud platform, Eucalyptus.

OpenStack Core Projects

OpenStack covers networking, virtualization, operating systems, servers, and more. It is a cloud computing platform still under active development and, by maturity and importance, is divided into core projects, incubated projects, supporting projects, and related projects. Each project has its own committee and project technical lead, and none of them is fixed in place: an incubated project can be promoted as its maturity and importance grow.
  1. Compute: Nova, a set of controllers that manage the entire life cycle of virtual machine instances for individual users or groups, providing virtual services on demand. It handles creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying VMs, and configures specifications such as CPU and memory. Part of the project since the Austin release.
  2. Object Storage: Swift, a system for storing and retrieving objects in large, scalable deployments with built-in redundancy and fault tolerance. It can provide image storage for Glance and volume backups for Cinder. Part of the project since the Austin release.
  3. Image Service: Glance, a lookup and retrieval system for virtual machine images supporting multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK), with functions to create, upload, and delete images and edit basic image metadata. Part of the project since the Bexar release.
  4. Identity Service: Keystone, which provides authentication, service rules, and service tokens for the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles. Part of the project since the Essex release.
  5. Network: Neutron, which provides network virtualization and network connectivity for the other OpenStack services. It gives users an interface to define Networks, Subnets, and Routers and to configure DHCP, DNS, load balancing, and L3 services, with GRE and VLAN network support. Its plugin architecture supports many mainstream network vendors and technologies, such as Open vSwitch. Part of the project since the Folsom release.
  6. Block Storage: Cinder, which provides stable block storage for running instances. Its plugin-driver architecture simplifies creating and managing block devices: creating and deleting volumes, and attaching and detaching volumes from instances. Part of the project since the Folsom release.
  7. Dashboard: Horizon, the web management portal for the various OpenStack services, simplifying operations such as launching instances, assigning IP addresses, and configuring access control. Part of the project since the Essex release.
  8. Metering: Ceilometer, which, like a funnel, collects almost every event that happens inside OpenStack and supplies the data for billing, monitoring, and other services. Part of the project since the Havana release.
  9. Orchestration: Heat, which provides template-driven orchestrated deployment, automating the provisioning of the cloud infrastructure software runtime environment (compute, storage, and network resources). Part of the project since the Havana release.
  10. Database Service: Trove, which provides scalable and reliable relational and non-relational database engine services in an OpenStack environment. Part of the project since the Icehouse release.

Notes on This OpenStack Deployment

This deployment uses CentOS 8-Stream with the OpenStack-Victoria repository. Apart from the base configuration, the time synchronization service (one of the five supporting services) and the nova, neutron, and cinder components (three of the seven components) must be configured on both nodes; all other services are configured on the controller node only. For neutron, choose either the provider (public) network or the self-service (private) network setup; in most cases the provider network configuration is used. All passwords in this deployment are 111111 and can be changed to suit your needs.

Installation Environment

  • Virtualization software: VMware Workstation 16 Pro
  • Operating system: CentOS 8-Stream
  • Controller node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled
  • Compute node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled

Base Configuration (both nodes)

Yum Repository Configuration

           Note: the yum configuration below gives faster mirrors for updates and package downloads, but to save time, steps (1) and (2) can both be skipped.

(1) Aliyun mirror repository: https://mirrors.aliyun.com

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo

sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

(2) To configure the CentOS 8 sources, only the .repo files in the yum repository directory need to be changed

                            Note: this method can fail the gpgcheck; method (1) is the safer choice
#Edit CentOS-Stream-AppStream.repo and point the baseurl parameter at https://mirrors.aliyun.com

[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# vi CentOS-Stream-AppStream.repo 
[appstream]
name=CentOS Stream $releasever - AppStream
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=AppStream&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial


#Edit CentOS-Stream-BaseOS.repo and point the baseurl parameter at https://mirrors.aliyun.com

[root@localhost yum.repos.d]# vi CentOS-Stream-BaseOS.repo 
[baseos]
name=CentOS Stream $releasever - BaseOS
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=BaseOS&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial


#Edit CentOS-Stream-Extras.repo and point the baseurl parameter at https://mirrors.aliyun.com

[root@localhost yum.repos.d]# vi CentOS-Stream-Extras.repo 
[extras]
name=CentOS Stream $releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=extras&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/extras/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

(3) Configure the OpenStack repository

#Create an openstack-victoria.repo file in the yum repository directory

[root@localhost ~]# vi /etc/yum.repos.d/openstack-victoria.repo 
#Write the following content
[victoria]
name=victoria
baseurl=https://mirrors.aliyun.com/centos/8-stream/cloud/x86_64/openstack-victoria/
gpgcheck=0
enabled=1

(4) Clean and rebuild the yum cache

[root@controller ~]# yum clean all
[root@controller ~]# yum makecache

Network Configuration

  • Controller node, dual NIC -------> host-only IP: 10.10.10.10  NAT (external) IP: 10.10.20.10
  • Compute node, dual NIC -------> host-only IP: 10.10.10.20  NAT (external) IP: 10.10.20.20

(1) Install the network service

#Install network-scripts: CentOS 8 ships with NetworkManager, which conflicts with the neutron service, so install network, stop NetworkManager, and disable it

[root@localhost ~]# dnf -y install network-scripts
[root@localhost ~]# systemctl disable --now NetworkManager

#Enable the service at boot first, then start network
[root@localhost ~]# systemctl enable network
[root@localhost ~]# systemctl start network

(2) Configure static IPs

#ens33, controller node shown as the example
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static				#modified
ONBOOT=yes						#modified
IPADDR=10.10.10.10				#added
NETMASK=255.255.255.0			#added

#ens34, controller node shown as the example
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
BOOTPROTO=static				#modified
ONBOOT=yes						#modified
IPADDR=10.10.20.10				#added
NETMASK=255.255.255.0			#added
GATEWAY=10.10.20.2				#added
DNS1=8.8.8.8					#added
DNS2=114.114.114.114			#added

(3) Restart the network and test external connectivity

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ping -c 3 www.baidu.com

Host Configuration

(1) Change the hostnames

#Controller node
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# bash
[root@controller ~]#

#Compute node
[root@localhost ~]# hostnamectl set-hostname compute
[root@localhost ~]# bash
[root@compute ~]#

(2) Disable the firewall

#Stop firewalld and disable it at boot
[root@controller ~]# systemctl disable --now firewalld

(3) Disable the SELinux security subsystem

#Set SELinux to disabled so it stays off after reboot
[root@controller ~]# vi /etc/selinux/config 
SELINUX=disabled

#Reboot the virtual machine
[root@controller ~]# reboot

#The getenforce command shows the SELinux state
[root@controller ~]# getenforce 
Disabled

(4) Configure host name mappings

#Controller node
[root@controller ~]# cat >>/etc/hosts<<EOF
> 10.10.10.10    controller
> 10.10.10.20    compute
> EOF

#Compute node
[root@compute ~]# cat >>/etc/hosts<<EOF
> 10.10.10.10    controller
> 10.10.10.20    compute
> EOF

OpenStack Repository

#Install the OpenStack Victoria repository
[root@controller ~]# dnf -y install centos-release-openstack-victoria

#Upgrade all installed packages on the node
[root@controller ~]# dnf -y upgrade 

#Install the OpenStack client and openstack-selinux
[root@controller ~]# dnf -y install python3-openstackclient openstack-selinux 

Five Supporting Services

Chrony Time Synchronization (both nodes)

(1) Check whether chrony is installed

[root@controller ~]# rpm -qa |grep chrony

#Install it if it is missing
[root@controller ~]# dnf -y install chrony 

(2) Edit the chrony configuration files

#Controller node
[root@controller ~]# vim /etc/chrony.conf
server ntp6.aliyun.com iburst		#added: sync time with Aliyun
allow 10.10.10.0/24			#added

#Compute node
[root@compute ~]# vim /etc/chrony.conf
server controller iburst		#added: sync time with the controller node

(3) Restart the time service and enable it at boot

[root@controller ~]# systemctl restart chronyd && systemctl enable chronyd
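After the restart, sync status can be checked with chronyc. A small sketch (the check_synced helper below is our own, not part of chrony; a source marked ^* in `chronyc sources` output is the one currently selected):

```shell
# Reads `chronyc sources` output on stdin and reports whether any
# source is currently selected for synchronization (marked "^*").
check_synced() {
  if grep -q '^\^\*'; then echo "synced"; else echo "not synced"; fi
}

# On the controller, once ntp6.aliyun.com is reachable:
#   chronyc sources | check_synced
```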

MariaDB Database

(1) Install the MariaDB database

[root@controller ~]# dnf -y install mariadb mariadb-server python3-PyMySQL 

#Start the MariaDB service
[root@controller ~]# systemctl start mariadb

(2) Create and edit the openstack.cnf file

[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.10.10.10		#bind IP; if the IP changes later, this line can be removed
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

(3) Initialize the database

[root@controller ~]# mysql_secure_installation
Enter current password for root (enter for none):    #enter the current root password; press Enter if it is empty
OK, successfully used password, moving on...
Set root password? [Y/n] y				# set a root password?
New password:					# enter the new password
Re-enter new password:			# enter the new password again
Remove anonymous users? [Y/n] y				# remove anonymous users?
Disallow root login remotely? [Y/n] n			# disallow remote root login?
Remove test database and access to it? [Y/n] y			# remove the test database and access to it?
Reload privilege tables now? [Y/n] y		# reload the privilege tables?

(4) Restart the database service and enable it at boot

[root@controller ~]# systemctl restart mariadb && systemctl enable mariadb

RabbitMQ Message Queue

Note: installing rabbitmq-server may fail because the repository lacks the SDL2 library it depends on; download the missing package first, then install rabbitmq-server

Download: wget http://rpmfind.net/linux/centos/8-stream/PowerTools/x86_64/os/Packages/SDL2-2.0.10-2.el8.x86_64.rpm

Install: dnf -y install SDL2-2.0.10-2.el8.x86_64.rpm

(1) Install the RabbitMQ packages

[root@controller ~]# dnf -y install rabbitmq-server 

(2) Start the message queue service and enable it at boot

[root@controller ~]# systemctl start rabbitmq-server && systemctl enable rabbitmq-server

(3) Add the openstack user and set its password

[root@controller ~]# rabbitmqctl add_user openstack 111111

(4) Grant permissions to the openstack user

[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

(5) Enable the message queue web management plugin

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management

#After this step, ss -antlu shows port 15672 listening, and RabbitMQ can be inspected through its web interface
#URL: http://10.10.10.10:15672, default user and password are both guest

Memcached Cache

(1) Install the memcached packages

[root@controller ~]# dnf -y install memcached python3-memcached 

(2) Edit the memcached configuration file

[root@controller ~]# vim /etc/sysconfig/memcached
..........
OPTIONS="-l 127.0.0.1,::1,controller"          #change this line

(3) Start the cache service and enable it at boot

[root@controller ~]# systemctl start memcached && systemctl enable memcached

Etcd Cluster

(1) Install the etcd package

[root@controller ~]# dnf -y install etcd 

(2) Edit the etcd configuration file

[root@controller ~]# vim /etc/etcd/etcd.conf
#Change the following
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.10.10.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.10.10.10:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.10.10.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.10.10.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.10.10.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

(3) Start the etcd service and enable it at boot

[root@controller ~]# systemctl start etcd && systemctl enable etcd
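Once etcd is running, its /health endpoint can confirm the member is healthy. A sketch (etcd_healthy is our own helper; 10.10.10.10:2379 is the client URL configured above):

```shell
# Parses the JSON body returned by GET /health; a healthy member
# reports {"health":"true"} (some releases add a space after the colon).
etcd_healthy() {
  if grep -q '"health"[": ]*true'; then echo "healthy"; else echo "unhealthy"; fi
}

# On the controller:
#   curl -s http://10.10.10.10:2379/health | etcd_healthy
```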

Seven Components

Keystone Identity

(1) Create the database and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the keystone database
MariaDB [(none)]> CREATE DATABASE keystone;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '111111';

(2) Install the keystone packages

[root@controller ~]# dnf -y install openstack-keystone httpd python3-mod_wsgi 

(3) Edit the configuration file

#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf

#Edit
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:111111@controller/keystone

[token]
provider = fernet

(4) Initialize the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

(5) Inspect the tables in the keystone database

[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use keystone;
MariaDB [keystone]> show tables;
MariaDB [keystone]> quit

(6) Initialize the Fernet keys

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

(7) Bootstrap the identity service

[root@controller ~]# keystone-manage bootstrap --bootstrap-password 111111 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne

(8) Configure the Apache HTTP service

#Edit httpd.conf
[root@controller ~]# vim /etc/httpd/conf/httpd.conf
ServerName controller		#add this line

<Directory />
    AllowOverride none
    Require all granted				#change this line as shown
</Directory>

#Create a link to the wsgi-keystone.conf file
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

(9) Restart httpd and enable it at boot

[root@controller ~]# systemctl restart httpd && systemctl enable httpd

(10) Create the admin environment variable script

[root@controller ~]# vim /admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=111111
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

#Load the environment variables with source /admin-openrc.sh (or ./admin-openrc.sh); to avoid doing this manually each time, add it to .bashrc so it is loaded on login
[root@controller ~]# vim .bashrc 
source /admin-openrc.sh			#add this line

(11) Create a domain, projects, a user, and a role

#Load the environment variables
[root@controller ~]# source /admin-openrc.sh

#Create a domain; the default domain already exists, so this one is just an example
[root@controller ~]# openstack domain create --description "An Example Domain" example

#Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service

#Create a demo project
[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject

#Create a user; this command prompts for a password, entered twice
[root@controller ~]# openstack user create --domain default --password-prompt myuser

#Create a role
[root@controller ~]# openstack role create myrole

#Bind the role to the project and user
[root@controller ~]# openstack role add --project myproject --user myuser myrole

(12) Verify token issuance

[root@controller ~]# openstack token issue

Glance Image Service

(1) Create the database and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the glance database
MariaDB [(none)]> CREATE DATABASE glance;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '111111';

(2) Install the glance packages

Note: if installation fails, set enabled=1 in the CentOS-Stream-PowerTools.repo source and install again. Make the same change on the compute node now, or installing nova there will fail later

[root@controller ~]# dnf install -y openstack-glance 

(3) Edit the configuration file

#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf

#Edit
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:111111@controller/glance

[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 111111

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

(4) Initialize the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

(5) Inspect the tables in the glance database

[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use glance;
MariaDB [glance]> show tables;
MariaDB [glance]> quit

(6) Create the glance user and service, and attach the admin role

#Create the glance user
[root@controller ~]# openstack user create --domain default --password 111111 glance

#Attach the admin role
[root@controller ~]# openstack role add --project service --user glance admin

#Create the glance service
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

(7) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

#internal
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

#admin
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

(8) List the service endpoints

[root@controller ~]# openstack endpoint list

(9) Start the glance service and enable it at boot

[root@controller ~]# systemctl start openstack-glance-api && systemctl enable openstack-glance-api

(10) Test the image service

#This deployment uses the cirros-0.5.1-x86_64-disk.img image; create it as follows
[root@controller ~]# openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public

#Once created, the image can be listed with the openstack client
[root@controller ~]# openstack image list

#It can also be seen in the glance database, stored in the images table
[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use glance;
MariaDB [glance]> select * from images\G

#The image file itself is under /var/lib/glance/images/. Normally `openstack image delete <IMAGE_ID>` removes both the record and the file; the manual SQL below is for cleaning up orphaned records, after which the image file must also be removed by hand
[root@controller ~]# ls /var/lib/glance/images/

#Deleting an image's records from the glance database
MariaDB [(none)]> use glance;
MariaDB [glance]> select * from images\G
MariaDB [glance]> delete from image_locations where image_id ='<ID of the image to delete>';
MariaDB [glance]> delete from image_properties where image_id ='<ID of the image to delete>';
MariaDB [glance]> delete from images where id='<ID of the image to delete>';

Placement Service

(1) Create the database and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the placement database
MariaDB [(none)]> CREATE DATABASE placement;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '111111';

(2) Install the placement package

[root@controller ~]# dnf install -y openstack-placement-api 

(3) Edit the configuration file

#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak >/etc/placement/placement.conf

#Edit
[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:111111@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 111111

(4) Initialize the database

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

(5) Inspect the tables in the placement database

[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use placement;
MariaDB [placement]> show tables;
MariaDB [placement]> quit

(6) Create the placement user and service, and attach the admin role

#Create the placement user
[root@controller ~]# openstack user create --domain default --password 111111 placement

#Attach the admin role
[root@controller ~]# openstack role add --project service --user placement admin

#Create the placement service
[root@controller ~]# openstack service create --name placement --description "Placement API" placement

(7) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778

#internal
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778

#admin
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

(8) List the service endpoints

[root@controller ~]# openstack endpoint list

(9) Restart the httpd service

[root@controller ~]# systemctl restart httpd

Check the placement service status

[root@controller ~]# placement-status upgrade check

Nova Compute

1. Controller node (part 1)

(1) Create the databases and grant privileges

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the nova_api, nova, and nova_cell0 databases
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '111111';
(2) Install the nova packages

[root@controller ~]# dnf install -y openstack-nova-api openstack-nova-conductor \
openstack-nova-novncproxy openstack-nova-scheduler

(3) Edit the configuration file

#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf

#Edit
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:111111@controller:5672/
my_ip = 10.10.10.10				#this host's IP; be sure to change it if the IP ever changes

[api_database]
connection = mysql+pymysql://nova:111111@controller/nova_api

[database]
connection = mysql+pymysql://nova:111111@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 111111

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 111111
(4) Initialize the databases

# Sync the nova_api database
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

# Map the nova_cell0 database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

# Create cell1
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

# Sync the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

(5) Create the nova user and service, and attach the admin role

#Create the nova user
[root@controller ~]# openstack user create --domain default --password 111111 nova

#Attach the admin role
[root@controller ~]# openstack role add --project service --user nova admin

#Create the nova service
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
(6) Register the API endpoints
#public
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

#internal
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

#admin
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
(7) List the service endpoints

[root@controller ~]# openstack endpoint list

(8) Verify that nova_cell0 and cell1 were added

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

(9) Start all nova services and enable them at boot

[root@controller ~]# systemctl enable --now openstack-nova-api \
openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

(10) Check that the nova services are running

[root@controller ~]# nova service-list

#Usually only two services are shown, nova-scheduler and nova-conductor: the command is served by nova-api, which controls both of them, and if nova-api were down those two would show as down as well. The nova-novncproxy service is checked by looking at its port instead, for example:
[root@controller ~]# netstat -lntup | grep 6080
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      1456/python3   
[root@controller ~]# ps -ef | grep 1456			
nova        1456       1  0 18:29 ?        00:00:05 /usr/bin/python3 /usr/bin/nova-novncproxy --web /usr/share/novnc/
root       27724   26054  0 20:51 pts/0    00:00:00 grep --color=auto 1456
(11) Viewing the console through the web interface

#Without name resolution, use the IP directly
http://10.10.10.10:6080

#To use name resolution instead, add these lines to the hosts file under C:\Windows\System32\drivers\etc on your PC
10.10.10.10		controller
10.10.10.20 	compute
#then browse to
http://controller:6080

2. Compute node

(1) Install the nova package
[root@compute ~]# dnf install -y openstack-nova-compute
(2) Edit the configuration file
#Back up the configuration file and strip the comments
[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf

#Edit
[root@compute ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:111111@controller
my_ip = 10.10.10.20				#this host's IP; be sure to change it if the IP ever changes

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 111111

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 111111
(3) Check whether the compute node supports hardware-accelerated virtualization
#If this command returns one or more, the node supports hardware acceleration; if it returns 0, it does not, and [libvirt] must be configured to use qemu
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

#Configure [libvirt]
[root@compute ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
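The decision above can also be scripted. A sketch (pick_virt_type is a hypothetical helper, not part of nova; nova's default virt_type is kvm, so only the zero case needs the override):

```shell
# Maps the vmx/svm flag count from /proc/cpuinfo to a nova virt_type:
# zero means no hardware acceleration, so fall back to plain qemu.
pick_virt_type() {
  if [ "$1" -gt 0 ]; then echo "kvm"; else echo "qemu"; fi
}

# Usage on the compute node:
#   pick_virt_type "$(egrep -c '(vmx|svm)' /proc/cpuinfo)"
```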
(4) Start the compute node's nova service and enable it at boot
[root@compute ~]# systemctl enable --now libvirtd.service openstack-nova-compute.service

Controller node (part 2)

(5) Add the compute node to the cell database
#Confirm the compute host exists in the database
[root@controller ~]# openstack compute service list --service nova-compute

#Discover the compute node from the controller
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
(6) Set the discovery interval
[root@controller ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300

Neutron Networking

(1) Create the database and grant privileges
#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the neutron database
MariaDB [(none)]> CREATE DATABASE neutron;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '111111';
(2) Create the neutron user and service, and attach the admin role
#Create the neutron user
[root@controller ~]# openstack user create --domain default --password 111111 neutron

#Attach the admin role
[root@controller ~]# openstack role add --project service --user neutron admin

#Create the neutron service
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
(3) Register the API endpoints
#public
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696

#internal
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

#admin
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
(4) List the service endpoints
[root@controller ~]# openstack endpoint list

Controller Node: Provider (Public) Network

(1) Install the neutron packages
[root@controller ~]# dnf -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
(2) Edit the neutron configuration file
#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

#Edit
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:111111@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[nova]								#if this section is missing from the file, simply add it
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Edit the ML2 plugin
#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak \
> /etc/neutron/plugins/ml2/ml2_conf.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true
(4) Configure the Linux bridge agent
#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak \
> /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34		#the NAT NIC that provides networking for instances

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(5) Configure the DHCP agent
#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
(6) Enable the bridge netfilter
#Add the kernel parameters to the system configuration
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#Load the br_netfilter module
[root@controller ~]# modprobe br_netfilter

#Check
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1			#this output means the setting took effect
net.bridge.bridge-nf-call-ip6tables = 1			#this output means the setting took effect
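Note that modprobe does not persist across reboots. To have br_netfilter loaded automatically at boot, a modules-load.d drop-in can be added (the filename here is our own choice):

```
# /etc/modules-load.d/br_netfilter.conf
br_netfilter
```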
(7) Configure the metadata agent
#Back up the configuration file and strip the comments
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET		

#'METADATA_SECRET' is a password of your choosing, but it must match the metadata parameter configured in nova later
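Rather than keeping the literal METADATA_SECRET, a random value can be generated once and pasted into both metadata_agent.ini and nova.conf (openssl is just one way to produce it):

```shell
# Prints 16 random bytes hex-encoded: a 32-character shared secret
# for metadata_proxy_shared_secret.
openssl rand -hex 16
```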
(8) Configure the compute service to use the networking service
#In the [neutron] section, configure access parameters and enable the metadata proxy
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET		#must match the secret set in metadata_agent.ini
(9) Create the network service initialization script link
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(10) Initialize the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage \
--config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(11) Restart the nova API service
[root@controller ~]# systemctl restart openstack-nova-api.service
(12) Start the neutron services and enable them at boot
[root@controller ~]# systemctl enable --now neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

Compute Node: Provider (Public) Network

(1) Install the neutron packages
[root@compute ~]# dnf install -y openstack-neutron-linuxbridge ebtables ipset
(2) Edit the neutron configuration file
#Back up the configuration file and strip the comments
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

#Edit
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Configure the Linux bridge agent
#Back up the configuration file and strip the comments
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak \
> /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34		#the NAT NIC that provides networking for instances

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) 设置网桥过滤器
#修改系统参数配置文件
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#加载br_netfilter模块
[root@compute ~]# modprobe br_netfilter

# Verify
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1			# this line means the setting took effect
net.bridge.bridge-nf-call-ip6tables = 1			# this line means the setting took effect
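One caveat about the `echo >> /etc/sysctl.conf` step above: running it a second time appends duplicate lines. A drop-in file under /etc/sysctl.d is idempotent; a sketch (the drop-in file name is arbitrary, and this demo writes to a temp directory standing in for /etc/sysctl.d):

```shell
# Stand-in for /etc/sysctl.d; in production, write the file there
# and apply it with 'sysctl --system'
dropin_dir=$(mktemp -d)

# Re-running this overwrites rather than appends, so no duplicates accumulate
cat > "$dropin_dir/99-neutron-bridge.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

cat "$dropin_dir/99-neutron-bridge.conf"
rm -rf "$dropin_dir"
```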
(5) Configure the compute service to use the network service
# In the [neutron] section, configure the access parameters
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
(6) Restart the nova service
[root@compute ~]# systemctl restart openstack-nova-compute.service
(7) Start the Linux bridge agent and enable it at boot
[root@compute ~]# systemctl enable --now neutron-linuxbridge-agent.service

Verify the provider network is running

# On the controller node, list the network agents
[root@controller ~]# openstack network agent list

# On success there should be four agents in total: a Metadata agent, a DHCP agent, and two Linux bridge agents, one on controller and one on compute

Controller node: self-service (private) network

(1) Install the neutron packages
[root@controller ~]# dnf -y install openstack-neutron \
openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
(2) Edit the neutron configuration file
# Back up the config file, then strip comments and blank lines
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

# Edit
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:111111@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[nova]								# add this section if the file does not already have it
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Edit the ML2 plug-in configuration
# Back up the config file, then strip comments and blank lines
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak \
> /etc/neutron/plugins/ml2/ml2_conf.ini

# Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
(4) Configure the Linux bridge agent
# Back up the config file, then strip comments and blank lines
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak \
> /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34		# the NIC that connects instances to the provider network

[vxlan]
enable_vxlan = true
local_ip = 10.10.10.10
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(5) Set up the bridge filter
# Modify the kernel parameter configuration file
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

# Load the br_netfilter module
[root@controller ~]# modprobe br_netfilter

# Verify
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1			# this line means the setting took effect
net.bridge.bridge-nf-call-ip6tables = 1			# this line means the setting took effect
(6) Configure the DHCP agent
# Back up the config file, then strip comments and blank lines
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini

# Edit
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
(7) Configure the layer-3 (L3) agent
# Back up the config file, then strip comments and blank lines
[root@controller ~]# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini

# Edit
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
(8) Configure the metadata agent
# Back up the config file, then strip comments and blank lines
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak \
> /etc/neutron/metadata_agent.ini

# Edit
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

# 'METADATA_SECRET' is a secret of your choosing; it must match the metadata setting in nova.conf below
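One way to pick a strong value for METADATA_SECRET is to generate it randomly; a sketch (this assumes openssl is installed, which it is on a typical base system):

```shell
# Prints a 32-character hex string to use as the shared secret;
# paste the same value into both metadata_agent.ini and nova.conf
openssl rand -hex 16
```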
(9) Configure the compute service to use the network service
# In the [neutron] section, configure the access parameters and enable the metadata proxy
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET		# must match the secret set in metadata_agent.ini
(10) Create the network service initialization symlink
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(11) Initialize the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
(12) Restart the nova API service
[root@controller ~]# systemctl restart openstack-nova-api.service
(13) Start the neutron services and enable them at boot
[root@controller ~]# systemctl enable --now neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service neutron-l3-agent.service

Compute node: self-service (private) network

(1) Install the neutron packages
[root@compute ~]# dnf install -y openstack-neutron-linuxbridge ebtables ipset
(2) Edit the neutron configuration file
# Back up the config file, then strip comments and blank lines
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

# Edit
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Configure the Linux bridge agent
# Back up the config file, then strip comments and blank lines
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak \
> /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# Edit
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34		# the NIC that connects instances to the provider network

[vxlan]
enable_vxlan = true
local_ip = 10.10.10.20
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) Set up the bridge filter
# Modify the kernel parameter configuration file
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

# Load the br_netfilter module
[root@compute ~]# modprobe br_netfilter

# Verify
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1			# this line means the setting took effect
net.bridge.bridge-nf-call-ip6tables = 1			# this line means the setting took effect
(5) Configure the compute service to use the network service
# In the [neutron] section, configure the access parameters
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 111111
(6) Restart the nova service
[root@compute ~]# systemctl restart openstack-nova-compute.service
(7) Start the Linux bridge agent and enable it at boot
[root@compute ~]# systemctl enable --now neutron-linuxbridge-agent.service

Verify the self-service network is running

# On the controller node, list the network agents
[root@controller ~]# openstack network agent list

# On success there should be five agents in total: a Metadata agent, a DHCP agent, an L3 agent, and two Linux bridge agents, one on controller and one on compute

Dashboard

(1) Install the dashboard package

[root@controller ~]# dnf install -y openstack-dashboard

(2) Edit the dashboard configuration file

# For each option below, search for it (in vim command mode): modify it if present, add it if missing
[root@controller ~]# vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

# Without name resolution configured, the IPs must be listed here
ALLOWED_HOSTS = ['controller','compute','10.10.10.10','10.10.10.20']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    },
}

OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"


OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
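Note: the flags above disable the router-related panels, which matches a provider-network-only deployment. Since this guide also configures the L3 agent and self-service networks, you may want the Router panel visible in Horizon; a hedged variant (a suggestion, not part of the original source):

```python
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,    # show the Router panel for the self-service network setup
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
```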

TIME_ZONE = "Asia/Shanghai"

(3) Configure the httpd service

[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}			# add this line

# Edit the dashboard configuration file
[root@controller ~]# vim /etc/openstack-dashboard/local_settings 

WEBROOT = '/dashboard/'					# add this line

(4) Restart the httpd and memcached services

[root@controller ~]# systemctl restart httpd.service memcached.service

(5) Log in to the web UI

# Without name resolution, use the IP directly
http://10.10.10.10/dashboard

# To use hostnames instead, add the following to the hosts file under C:\Windows\System32\drivers\etc on your PC
10.10.10.10		controller
10.10.10.20 	compute
# then browse to
http://controller/dashboard

Cinder block storage

Controller node

(1) Create the database and grant privileges
# Enter the database
[root@controller ~]# mysql -u root -p111111

# Create the cinder database
MariaDB [(none)]> CREATE DATABASE cinder;

# Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '111111';

(2) Install the cinder package

[root@controller ~]# dnf install -y openstack-cinder
(3) Edit the configuration file
# Back up the config file, then strip comments and blank lines
[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit
[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
my_ip = 10.10.10.10

[database]
connection = mysql+pymysql://cinder:111111@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(4) Initialize the database
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
(5) Inspect the cinder database tables
[root@controller ~]# mysql -uroot -p111111

MariaDB [(none)]> use cinder;
MariaDB [cinder]> show tables;
MariaDB [cinder]> quit
(6) Create the cinder user and services, and assign the admin role
# Create the cinder user
[root@controller ~]# openstack user create --domain default --password 111111 cinder

# Assign the admin role
[root@controller ~]# openstack role add --project service --user cinder admin

# Create the cinderv2 and cinderv3 services
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
(7) Register the API endpoints
Endpoints for cinderv2
#public
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s

#internal
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s

#admin
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
Endpoints for cinderv3
#public
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

#internal
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s

#admin
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
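The backslashes in the endpoint URLs above only stop the shell from treating the parentheses as subshell syntax; the literal stored in keystone is %(project_id)s, which cinder substitutes with the project ID at request time. A quick check of what the shell actually passes:

```shell
# echo shows the argument exactly as the openstack client would receive it
echo http://controller:8776/v3/%\(project_id\)s
# prints: http://controller:8776/v3/%(project_id)s
```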
(8) List the endpoints
[root@controller ~]# openstack endpoint list
(9) Configure the compute service to use block storage
# Edit the nova configuration file
[root@controller ~]# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

# Restart nova
[root@controller ~]# systemctl restart openstack-nova-api.service
(10) Start the cinder services and enable them at boot
[root@controller ~]# systemctl enable --now openstack-cinder-api.service openstack-cinder-scheduler.service

Compute node (shut the VM down and add a second disk; any size works, 50 GB here)

(1) List the disks
[root@compute ~]# fdisk --list
(2) Install the LVM packages
[root@compute ~]# dnf -y install lvm2 device-mapper-persistent-data
(3) Create the LVM physical volume /dev/sdb
[root@compute ~]# pvcreate /dev/sdb
(4) Create the LVM volume group cinder-volumes
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
(5) Modify the LVM configuration
# Back up the config file, then strip comments and blank lines
[root@compute ~]# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/lvm/lvm.conf.bak > /etc/lvm/lvm.conf

# Edit
[root@compute ~]# vim /etc/lvm/lvm.conf
devices {
        filter = [ "a/sda/","a/sdb/","r/.*/"]	# a = accept, r = reject: accept sda and sdb, reject all other devices
}
(6) Install the cinder-related packages
[root@compute ~]# dnf install -y openstack-cinder targetcli python3-keystone
(7) Edit the cinder configuration file
# Back up the config file, then strip comments and blank lines
[root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit
[root@compute ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
my_ip = 10.10.10.20
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:111111@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[lvm]					# add this section if it does not exist
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes				# must match the volume group created earlier
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(8) Start the cinder volume services and enable them at boot
[root@compute ~]# systemctl enable --now openstack-cinder-volume.service target.service
(9) Back on the controller node, list the volume services
[root@controller ~]# openstack volume service list
# Output like the following indicates success
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2023-05-11T08:12:03.000000 |
| cinder-volume    | compute@lvm | nova | enabled | up    | 2023-05-11T08:12:02.000000 |
+------------------+-------------+------+---------+-------+----------------------------+

This completes the deployment of the OpenStack (Victoria release) cloud platform.
