OpenStack Cloud Platform Deployment (Manual)

I. Basic Environment Preparation

1. Virtual Machine Creation

Create the following two virtual machines:
Controller node: 4 GB RAM, 2-core CPU, 40 GB disk, NIC 1 in host-only mode, NIC 2 in NAT mode
Compute node: 4 GB RAM, 2-core CPU, disk 1 of 40 GB, disk 2 of 40 GB, NIC 1 in host-only mode, NIC 2 in NAT mode

Steps: File → New Virtual Machine → (Typical) Next → (Install the operating system later) Next → (Linux) (CentOS 7 64-bit) Next →
change the name to hqs-控制节点 and the location to D:\hqs\hqs-控制节点, then Next → 40 GB disk, split into multiple files (the default) → Finish → Edit virtual machine settings →
set memory to 4096 MB and processors to 1 processor with 2 cores, check Intel VT-x/EPT → click Add → Network Adapter → set NIC 1 to host-only and NIC 2 to NAT →
for CD/DVD, select the ISO file CentOS-7-x86_64-DVD-1804.iso.

Right-click hqs-控制节点 → Manage → Clone → clone method: create a full clone → change the name to hqs-计算节点 and the location to D:\hqs\hqs-计算节点 → Finish.

Select hqs-计算节点 → click Edit virtual machine settings → click Add → Hard Disk → create a new virtual disk → 40 GB, split into multiple files

2. Network Configuration

In the VMware menu bar, click Edit → Virtual Network Editor → click Change Settings to allow modifications → click Add Network, select VMnet1, and click OK.

Select VMnet1 → choose host-only mode → change the subnet IP to 192.168.10.0 → enabling DHCP is optional → then select VMnet8 → choose NAT mode →
change the subnet IP to 192.168.20.0 → click OK.

Modify the controller node's NIC settings: click Edit virtual machine settings → select the first Network Adapter → choose Custom → select VMnet1 (host-only) →
then select Network Adapter 2 → choose Custom → select VMnet8 (NAT).
Modify the compute node's NICs the same way.

3. Installing CentOS

On hqs-控制节点: Power on the virtual machine → use the arrow keys to select "Install CentOS 7" → keep English as the language and click Continue → open DATE & TIME, select Asia/Shanghai,
and click Done → open INSTALLATION DESTINATION and click Done to accept automatic partitioning → deselect KDUMP → Begin Installation → set ROOT PASSWORD to "1234".

On hqs-计算节点, do the same, except at INSTALLATION DESTINATION select only the first disk before clicking Done for automatic partitioning.

4. NIC Configuration

Based on the order in which the NICs were added, ens33 corresponds to the first adapter (host-only) and ens34 to the second (NAT).
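
If unsure, the mapping can be confirmed by MAC address: compare the link/ether values printed below with the MAC shown in each adapter's Advanced settings in VMware.

# List each interface with its MAC address and compare against the VMware adapter settings
[root@localhost ~]# ip link show ens33
[root@localhost ~]# ip link show ens34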

First, modify the controller node's NIC configuration:

# Change to the network-scripts directory
[root@localhost ~]# cd /etc/sysconfig/network-scripts/

# Edit ifcfg-ens33
[root@localhost network-scripts]# vi ifcfg-ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.10.10
NETMASK=255.255.255.0

# Edit ifcfg-ens34
[root@localhost network-scripts]# vi ifcfg-ens34
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.20.10
NETMASK=255.255.255.0
GATEWAY=192.168.20.2

# Restart networking
[root@localhost network-scripts]# systemctl restart network
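
Optionally, verify that both interfaces picked up their static addresses before moving on:

# Each command should show the configured inet address (192.168.10.10 and 192.168.20.10)
[root@localhost network-scripts]# ip -4 addr show ens33
[root@localhost network-scripts]# ip -4 addr show ens34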

Then modify the compute node's NIC configuration:

# Change to the network-scripts directory
[root@localhost ~]# cd /etc/sysconfig/network-scripts/

# Edit ifcfg-ens33
[root@localhost network-scripts]# vi ifcfg-ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.10.20
NETMASK=255.255.255.0

# Edit ifcfg-ens34
[root@localhost network-scripts]# vi ifcfg-ens34
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.20.20
NETMASK=255.255.255.0
GATEWAY=192.168.20.2

# Restart networking
[root@localhost network-scripts]# systemctl restart network

5. DNS Configuration

Before configuring DNS, make sure the NAT-mode NIC has its gateway (GATEWAY) configured.
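
A quick way to confirm the gateway is in place:

# The default route should point at the NAT gateway (192.168.20.2 in this setup)
[root@localhost ~]# ip route | grep default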

Edit the /etc/resolv.conf file on both machines.

[root@localhost ~]# vi /etc/resolv.conf
nameserver 8.8.8.8

# Restart networking after the change
[root@localhost ~]# systemctl restart network

# After the restart, test by pinging Baidu
[root@localhost ~]# ping www.baidu.com
PING www.a.shifen.com (14.119.104.254) 56(84) bytes of data.
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=1 ttl=128 time=25.2 ms
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=2 ttl=128 time=24.3 ms
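
Note: on CentOS 7, a network restart may regenerate /etc/resolv.conf. If the nameserver entry keeps disappearing, an alternative (an assumption here, not shown in the original transcript) is to pin DNS in the NAT NIC's ifcfg file:

# DNS1 in the ifcfg file is written into /etc/resolv.conf when the network service starts
[root@localhost ~]# echo 'DNS1=8.8.8.8' >> /etc/sysconfig/network-scripts/ifcfg-ens34
[root@localhost ~]# systemctl restart network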

6. Yum Repository Configuration

# Change to the yum repo directory
[root@controller ~]# cd /etc/yum.repos.d/
# Create a backup directory
[root@controller yum.repos.d]# mkdir repo.bak
[root@controller yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo
CentOS-CR.repo         CentOS-Media.repo  ....
# Move the original repo files into the backup directory
[root@controller yum.repos.d]# mv *.repo repo.bak/
# Download the Aliyun repo file
[root@controller yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo  http://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0  20847      0 --:--:-- --:--:-- --:--:-- 21025

# The official CentOS 6/7/8 releases are no longer supported or maintained, and their official yum repositories have been taken offline.
# So that the OpenStack components can still be installed, the local instructor machine (10.104.45.50) shares a yum repository over FTP, added here.
[root@controller yum.repos.d]# vi openstack.repo
[base]
name=base
baseurl=ftp://10.104.45.50/base/
enabled=1
gpgcheck=0
[extras]
name=extras
baseurl=ftp://10.104.45.50/extras/
enabled=1
gpgcheck=0
[updates]
name=updates
baseurl=ftp://10.104.45.50/updates/
enabled=1
gpgcheck=0
[train]
name=train
baseurl=ftp://10.104.45.50/train/
enabled=1
gpgcheck=0
[virt]
name=virt
baseurl=ftp://10.104.45.50/virt/
enabled=1
gpgcheck=0

# Clean the yum cache and rebuild it
[root@controller yum.repos.d]# yum clean all && yum makecache

# Install the net-tools package
[root@localhost ~]# yum install -y net-tools

# Test
[root@localhost ~]# ifconfig

7. Hostname Change

# Change the hostname
[root@asdad yum.repos.d]# hostnamectl set-hostname controller
# Start a new shell so the change shows in the prompt
[root@asdad yum.repos.d]# bash
[root@controller yum.repos.d]# 
# After the change, /etc/hostname is updated accordingly
[root@controller yum.repos.d]# cat /etc/hostname
controller

8. Local Name Resolution

To reach a host directly by its hostname, the hostname must be bound to an IP address; this binding is done with local name resolution.
The /etc/hosts file on Linux maps frequently used names to their corresponding IP addresses.

Default hosts file content and its meaning:

[root@controller ~]# cat /etc/hosts
# Each line maps a name to an IP address
# 127.0.0.1 is the IPv4 loopback address
# ::1 is the IPv6 loopback address
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4   # the first name is the hostname; the rest are aliases
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6   # the first name is the hostname; the rest are aliases

Example: resolve controller to the internal IP 192.168.10.10

# Append the mapping as the last line of the file
[root@controller ~]# echo '192.168.10.10   controller' >> /etc/hosts
[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10   controller

# The ping succeeds, so resolution works
[root@controller ~]# ping controller
PING controller (192.168.10.10) 56(84) bytes of data.
64 bytes from controller (192.168.10.10): icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from controller (192.168.10.10): icmp_seq=2 ttl=64 time=0.034 ms

9. Firewall Management

CentOS 7 uses firewalld as its default firewall; it is managed through the systemctl service-management command.

Subcommand   Function
status       show the service's running state
start        start the service
stop         stop the service
enable       start the service at boot
disable      do not start the service at boot
restart      restart the service
# Stop the firewall and disable it at boot
[root@controller ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
# Verify the firewall is stopped
[root@controller ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)

II. OpenStack Supporting Services

The OpenStack platform relies on several third-party services to run properly: a database, a message queue, time synchronization, a caching service, and so on.

1. Chrony Time Synchronization

The clocks of the computers within one system must agree for the system to work correctly.

Chrony is free and open-source software with two core components: chronyd (the background daemon) and chronyc (the command-line management tool).

(1) Configuring time synchronization

By editing the chrony configuration, any computer can be set up as an NTP server or as a client of one.

# Install chrony
[root@controller ~]# yum install -y  chrony

# View the chrony configuration file
[root@controller ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst  # CentOS provides four official NTP servers
server 2.centos.pool.ntp.org iburst  # iburst: if the NTP server does not answer within a set time, the client sends a burst of eight packets to improve sync success
server 3.centos.pool.ntp.org iburst
...(omitted)
# Allow NTP client access from local network.
allow 192.168.10.0/24   # allow chrony clients on the given subnet to use this host's NTP service
...(omitted)

# After changing the configuration, restart the service for it to take effect
[root@controller ~]# systemctl restart chronyd 
[root@controller ~]# systemctl enable chronyd

(2) Managing time synchronization

Time synchronization is monitored and managed with the chronyc command.

Function                                              Command
show ntp_servers info (-v adds column explanations)   chronyc sources -v
show ntp_servers statistics                           chronyc sourcestats -v
show whether ntp_servers are online                   chronyc activity -v
show detailed NTP tracking information                chronyc tracking -v
force-step the system clock                           chronyc -a makestep
list clients that have accessed this server           chronyc clients
add a new NTP server                                  chronyc add server
delete an existing NTP server                         chronyc delete
# Connection status between this client and its NTP servers (without -v)
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 119.28.183.184                2   6    17     4  +4589us[  +17ms] +/-   45ms
^- ntp1.ams1.nl.leaseweb.net     2   6   375     5    +20ms[  +32ms] +/-  235ms
^+ 85.199.214.102                1   6   377     4    -17ms[-4919us] +/-  134ms
^- pingless.com                  2   6   377     4    -15ms[  -15ms] +/-  132ms

# The same, with -v for an explanation of each column
[root@controller ~]# chronyc sources -v
210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 119.28.183.184                2   6    17     6  +4589us[  +17ms] +/-   45ms
^- ntp1.ams1.nl.leaseweb.net     2   6   375     7    +20ms[  +32ms] +/-  235ms
^+ 85.199.214.102                1   6   377     6    -17ms[-4919us] +/-  134ms
^- pingless.com                  2   6   377     6    -15ms[  -15ms] +/-  132ms

# Check whether the NTP servers are online
[root@controller ~]# chronyc activity 
200 OK
4 sources online
0 sources offline
0 sources doing burst (return to online)
0 sources doing burst (return to offline)
0 sources with unknown address

# Add Tencent's NTP server
[root@controller ~]# chronyc add server time1.cloud.tencent.com
200 OK
# Delete Tencent's NTP server
[root@controller ~]# chronyc delete time1.cloud.tencent.com
200 OK

2. OpenStack Platform Framework

Before installing individual components, the framework must be in place: install the OpenStack release package published by CentOS along with the client management tool.

(1) Installing the OpenStack framework and client

# 1. Install the OpenStack framework
# Fuzzy-search for the package name
[root@controller ~]# yum list *-openstack-train
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
centos-release-openstack-train.noarch                   1-1.el7.centos                    extras
# Install it
[root@controller ~]# yum -y install centos-release-openstack-train

# 2. Upgrade all packages
# Automatically check for and apply all available upgrades
[root@controller ~]# yum upgrade -y

# 3. Install the OpenStack client
[root@controller ~]# yum install -y python-openstackclient

# 4. Check the OpenStack version
[root@controller ~]# openstack --version
openstack 4.0.2

(2) Disabling SELinux and installing openstack-selinux
SELinux has three modes:
enforcing: the SELinux security policy is enforced;
permissive: the policy is not enforced; violations are only logged;
disabled: SELinux is turned off.

openstack-selinux: lets the OpenStack platform automatically control and manage SELinux security policy.
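
To check which mode a node is currently running in:

# Prints Enforcing, Permissive, or Disabled
[root@controller ~]# getenforce
# More detail, including the mode set in /etc/selinux/config
[root@controller ~]# sestatus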

# 1. Disable SELinux on CentOS
[root@controller ~]# vi /etc/selinux/config
...(omitted)
SELINUX=disabled     <---- change to this
...(omitted)
# Apply now (setenforce 0 switches the running system to permissive; the disabled setting takes full effect after a reboot)
[root@controller ~]# setenforce  0

# 2. Install openstack-selinux
[root@controller ~]# yum install -y openstack-selinux

3. MariaDB Database

MySQL founder Michael Widenius led the development of MariaDB, a fully MySQL-compatible, free and open-source database.

MariaDB is a fork of MySQL that adopted the Maria storage engine.

(1) Installing MariaDB

# mariadb-server: the database server
# python2-PyMySQL: the Python module for accessing the database
[root@controller ~]# yum install -y mariadb-server python2-PyMySQL

(2) Editing the database configuration

The configuration consists of every file with the .cnf suffix in the /etc/my.cnf.d/ directory.

The main configuration parameters and their functions:

Parameter               Function
port                    port the database serves on, 3306 by default
datadir                 directory where the database files are stored
bind-address            address to bind to; the database only accepts connections on this address
default-storage-engine  default storage engine; MariaDB supports dozens, InnoDB being the common transactional one
innodb_file_per_table   InnoDB tablespace-per-table: each table's data is stored in its own file
max_connections         maximum number of connections
collation-server        collation (character sort order); every character set maps to one or more collations
character-set-server    character set
[root@controller ~]# cd /etc/my.cnf.d/
[root@controller my.cnf.d]# ls
client.cnf                mariadb-server.cnf
enable_encryption.preset  mysql-clients.cnf

# Create the file
[root@controller my.cnf.d]# touch openstack.cnf
# Write the settings
[root@controller my.cnf.d]# echo '[mysqld]
bind-address = 192.168.10.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8' > openstack.cnf

(3) Starting MariaDB

# Enable at boot
[root@controller my.cnf.d]# systemctl enable mariadb
Created symlink from /etc/systemd/system/mysql.service to /usr/lib/systemd/system/mariadb.service.
Created symlink from /etc/systemd/system/mysqld.service to /usr/lib/systemd/system/mariadb.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
# Start the database now
[root@controller my.cnf.d]# systemctl start mariadb
# Check that the database started
[root@controller my.cnf.d]# systemctl status mariadb

(4) Initializing the MariaDB database

[root@controller ~]# mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):     <- enter the current password; if none, just press Enter
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y          <- set a new password?
New password:                       <- enter the new password
Re-enter new password:              <- confirm the new password
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y       <- remove anonymous users?
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y   <- forbid remote root login?
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y  <- remove the test database?
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y       <- reload the privilege tables?
 ... Success!

Cleaning up...
All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!

(5) Logging in and using the database

# Log in to the database
[root@controller ~]# mysql -h localhost -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 16
Server version: 10.3.20-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
MariaDB [(none)]> use mysql;
MariaDB [mysql]> show tables;
MariaDB [mysql]> exit
Bye

4. RabbitMQ Message Queue Service

A message queue (MQ) is a mode of inter-application communication: messages are sent to the queue, and the queue guarantees reliable delivery, so message producers and consumers never interact directly.

The OpenStack components communicate with one another through a message queue. RabbitMQ is an open-source, widely used messaging system.

RabbitMQ is typically used to provide the message queue service for OpenStack.

(1) Common user-management syntax

RabbitMQ usernames and passwords are managed with the rabbitmqctl command.
Create a RabbitMQ user:
rabbitmqctl add_user <username> <password>
Delete a RabbitMQ user:
rabbitmqctl delete_user <username>
Change a RabbitMQ user's password:
rabbitmqctl change_password <username> <new password>

(2) Example

# 1. Install the RabbitMQ message queue
[root@controller ~]# yum install -y rabbitmq-server

# 2. Start the RabbitMQ message queue
# Enable at boot
[root@controller ~]# systemctl enable rabbitmq-server
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
# Start now
[root@controller ~]# systemctl start rabbitmq-server

# 3. Set up a user and password
# Create a user named "openstack" with the password "RABBIT_PASS"
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack"

# Change the openstack user's password to 000000
[root@controller ~]# rabbitmqctl change_password openstack 000000
Changing password for user "openstack"

# 4. Manage user permissions
# The three ".*" patterns are the configure, write, and read permissions: grant the openstack user all three on every RabbitMQ resource
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"

# View the user's permissions
[root@controller ~]# rabbitmqctl list_user_permissions openstack
Listing permissions for user "openstack"
/	.*	.*	.*

5. Memcached In-Memory Caching Service

Memcached is a high-performance distributed in-memory object caching system; it can store data in many formats, including images, video, files, and database query results.

(1) Installing memcached

# "memcached" is the caching service itself,
# "python-memcached" is the interface program for managing it
[root@controller ~]# yum install -y memcached python-memcached

# Installation automatically creates a user named "memcached"
[root@controller ~]# cat /etc/passwd | grep memcached
memcached:x:995:992:Memcached daemon:/run/memcached:/sbin/nologin

(2) Configuring memcached

Memcached's configuration file is /etc/sysconfig/memcached.

[root@controller ~]# vi /etc/sysconfig/memcached 
PORT="11211"                     # service port
USER="memcached"                 # user name
MAXCONN="1024"                   # maximum number of connections allowed
CACHESIZE="64"                   # maximum cache size
OPTIONS="-l 127.0.0.1,::1,192.168.10.10"         # listening addresses (local only by default)

(3) Starting memcached

# Enable at boot
[root@controller ~]# systemctl enable memcached
Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.
# Start now
[root@controller ~]# systemctl start memcached

# Check that the service started
[root@controller ~]# netstat -tnlup | grep memcached
tcp        0      0 192.168.10.10:11211     0.0.0.0:*               LISTEN      3778/memcached      
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      3778/memcached      
tcp6       0      0 ::1:11211               :::*                    LISTEN      3778/memcached

6. etcd Distributed Key-Value Store

etcd is an open-source project whose goal is a highly available distributed key-value database for configuration sharing and service discovery.

Its role is similar to the "/etc" directory of a distributed system: it stores the configuration of large-scale distributed systems.

(1) etcd configuration parameters

Parameter                         Description
ETCD_LISTEN_PEER_URLS             address to listen on for other etcd members; must be an IP address, not a domain name
ETCD_LISTEN_CLIENT_URLS           address to serve clients on; must be an IP address, not a domain name
ETCD_NAME                         this member's name; names must be unique, and the hostname is recommended
ETCD_INITIAL_ADVERTISE_PEER_URLS  this member's peer addresses, advertised to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS        this member's client addresses, advertised to the rest of the cluster
ETCD_INITIAL_CLUSTER              initial cluster configuration, as "member-name=member-peer-URL"
ETCD_INITIAL_CLUSTER_TOKEN        token identifying this etcd cluster, so separate clusters can tell each other apart
ETCD_INITIAL_CLUSTER_STATE        initial cluster state: "new" for a new cluster, "existing" when joining one that already exists

(2) Installation and configuration

# 1. Install
[root@controller ~]# yum install -y etcd

# 2. Back up the configuration file
[root@controller ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
# Overwrite the configuration
[root@controller ~]# echo 'ETCD_LISTEN_PEER_URLS="http://192.168.10.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.10.10:2379,http://127.0.0.1:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.10.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
' > /etc/etcd/etcd.conf

# 3. Start the service
[root@controller ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@controller ~]# systemctl start etcd

# 4. Verify it is running
[root@controller ~]# netstat -tnlup| grep etcd
tcp        0      0 192.168.10.10:2379      0.0.0.0:*               LISTEN      4319/etcd           
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      4319/etcd           
tcp        0      0 192.168.10.10:2380      0.0.0.0:*               LISTEN      4319/etcd  

(3) Managing etcd

etcdctl is the tool for managing the etcd service; it can be used to store and retrieve data.

[root@controller ~]# etcdctl set testkey 001
001
[root@controller ~]# etcdctl get testkey
001
[root@controller ~]# etcdctl set name kobe
kobe
[root@controller ~]# etcdctl get name
kobe

III. Cluster Configuration

There is now one qualifying virtual machine to serve as the controller node; a compute node still has to be prepared.

1. Cloning the Compute Node

Step 1: In the VMware Workstation main window, right-click the virtual machine under "My Computer" (the VM must be powered off) and choose Manage → Clone to open the Clone Virtual Machine Wizard.

Step 2: Choose the clone source and clone type. Click Next to reach the Clone Source page and select "The current state in the virtual machine". Click Next to reach the Clone Type page and select "Create a full clone", then click Next to open the New Virtual Machine Name page.

Step 3: Set the new virtual machine's name and storage location, then start the clone. On the New Virtual Machine Name page, fill in the name and choose the storage location, then click Finish; a "Cloning Virtual Machine" dialog appears.

Add a 40 GB disk to this compute node.

2. Compute Node Configuration

# Modify the network configuration
[root@controller ~]# cd /etc/sysconfig/network-scripts/
# Generate new UUIDs (appended to the end of each file)
[root@controller network-scripts]# uuidgen >> ifcfg-ens33
[root@controller network-scripts]# uuidgen >> ifcfg-ens34

# Edit the ifcfg-ens33 configuration
NAME=ens33
UUID=80d04405-642e-41f6-aa31-a33b79b92ca4        # move the newly generated UUID here
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.10.20       # change the NIC address
NETMASK=255.255.255.0

# Edit the ifcfg-ens34 configuration
NAME=ens34
UUID=381db26e-7267-485c-a459-f4b56d0e5e42        # move the newly generated UUID here
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.20.20       # change the NIC address
NETMASK=255.255.255.0
GATEWAY=192.168.20.2
DNS1=114.114.114.114
DNS2=8.8.8.8

# Restart networking
[root@controller network-scripts]# systemctl restart network

# Connect to 192.168.10.20 with Xshell or a similar tool and check the current network information:
[root@controller ~]# ip a
1: lo: (omitted)
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:fc:73:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.20/24 brd 192.168.10.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::4883:d953:184d:971b/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:fc:73:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.20/24 brd 192.168.20.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::e991:bf85:171d:2a7a/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
	   
# Change the compute node's hostname
[root@controller ~]# hostnamectl set-hostname compute
[root@controller ~]# bash
[root@compute ~]# 

3. Name Resolution and Firewall Configuration

To make the nodes easier to work with, bind each hostname to its IP address.

(1) Name resolution

# Add the local name mapping on the controller node
[root@controller ~]# echo '192.168.10.20   compute' >> /etc/hosts
# Add the local name mapping on the compute node
[root@compute ~]# echo '192.168.10.20   compute' >> /etc/hosts
# Verify with pings in both directions
[root@controller ~]# ping compute
[root@compute ~]# ping controller

(2) Disabling SELinux and the firewall

Before installing OpenStack, disable the system's built-in SELinux and firewalld firewall so that firewall settings cannot interfere with access to the services.

# 1. On both the controller and compute nodes, edit /etc/selinux/config and change "SELINUX=enforcing" to "SELINUX=disabled"

# 2. Disable the firewall at boot and stop it
# Controller node:
[root@controller ~]# systemctl disable firewalld
[root@controller ~]# systemctl stop firewalld
# Compute node:
[root@compute ~]# systemctl disable firewalld
[root@compute ~]# systemctl stop firewalld

4. Building a Local Software Repository

In an environment without Internet access, a local yum repository is needed.

There is no need to configure the same repository on every machine; one copy can be shared with the other hosts.

Example: configure the yum repository on the controller node and set up a file-transfer server there to serve the compute nodes.

(1) Configuring the yum repository on the controller node

Add local addresses for the five repositories base, extras, updates, train, and virt, pointing at the mounted ISO.

name is the repository name; baseurl is the repository address; enabled=1 enables the repository; gpgcheck=0 disables GPG signature checking.

file:/// denotes an address on the local filesystem.

# 1. Upload openStack-train.iso to the /opt directory
[root@controller opt]# du -sh *
0	mydriver
16G	openStack-train.iso

# 2. Mount the ISO onto a directory to make its contents accessible
[root@controller opt]# mkdir openstack
[root@controller opt]# ls
mydriver  openstack  openStack-train.iso
# Mount command: mount the ISO at /opt/openstack
[root@controller opt]# mount openStack-train.iso openstack/
mount: /dev/loop0 is write-protected, mounting read-only
[root@controller opt]# df -H
...(omitted)
/dev/loop0                17G   17G     0 100% /opt/openstack

# 3. Back up the existing yum configuration files
[root@controller openstack]# cd /etc/yum.repos.d/
[root@controller yum.repos.d]# ls
CentOS-Base.repo  CentOS-OpenStack-train.repo  repo.bak
# Rename the Aliyun repo file so it does not overwrite the backed-up official one
[root@controller yum.repos.d]# mv CentOS-Base.repo CentOS-ALIBABA-Base.repo
# Move the repo files into the backup directory
[root@controller yum.repos.d]# mv *.repo repo.bak/

# 4. Write a local yum repo file pointing at the local files
[root@controller yum.repos.d]# vi OpenStack.repo
[base]
name=base
baseurl=file:///opt/openstack/base/
enabled=1
gpgcheck=0
[extras]
name=extras
baseurl=file:///opt/openstack/extras/
enabled=1
gpgcheck=0
[updates]
name=updates
baseurl=file:///opt/openstack/updates/
enabled=1
gpgcheck=0
[train]
name=train
baseurl=file:///opt/openstack/train/
enabled=1
gpgcheck=0
[virt]
name=virt
baseurl=file:///opt/openstack/virt/
enabled=1
gpgcheck=0


# 5. Clear the old yum cache and rebuild it
[root@controller ~]# yum clean all
# Rebuild the cache
[root@controller ~]# yum makecache
# Check the repositories
[root@controller ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
repo id                     repo name                    status
base                        base                         10,039
extras                      extras                          500
train                       train                         3,168
updates                     updates                       3,182
virt                        virt                             63
repolist: 16,952

# Make the mount persistent so it is not lost on reboot
[root@controller ~]# vi /etc/fstab
# Append the following line at the end
/opt/openStack-train.iso  /opt/openstack/   iso9660  defaults,loop  0   0
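
The entry can be validated without rebooting; this assumes the ISO is still mounted from the earlier manual mount command:

# Unmount, then let fstab remount it; no errors plus a populated /opt/openstack means the entry works
[root@controller ~]# umount /opt/openstack
[root@controller ~]# mount -a
[root@controller ~]# df -h | grep openstack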

(2) Configuring an FTP server on the controller node

The controller node now has the local yum repository; set up an FTP server there to share it with the compute node.

# 1. Install the FTP service
[root@controller ~]# yum install -y vsftpd

# 2. Point the FTP anonymous root at the repository directory by appending the following
[root@controller ~]# echo 'anon_root=/opt' >> /etc/vsftpd/vsftpd.conf

# 3. Start the FTP service
[root@controller ~]# systemctl start vsftpd
[root@controller ~]# systemctl enable vsftpd
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.
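
As a quick sanity check (assuming vsftpd's default configuration, which allows anonymous reads), list the share over FTP:

# An anonymous listing of anon_root (/opt) should show the openstack directory
[root@controller ~]# curl ftp://192.168.10.10/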

(3) Configuring the yum repository on the compute node

Edit the yum configuration so that the repositories point at the FTP share on the controller node.

# 1. Back up the yum configuration files
[root@compute ~]# cd /etc/yum.repos.d/
[root@compute yum.repos.d]# mv CentOS-Base.repo CentOS-ALIBABA-Base.repo   # rename
[root@compute yum.repos.d]# mv *.repo repo.bak/     # move into the backup directory

# 2. Copy the repo file over from the controller node
[root@compute yum.repos.d]# scp root@controller:/etc/yum.repos.d/OpenStack.repo OpenStack.repo
# answer yes and enter the controller node's password when prompted

# 3. Edit the repo file
[root@compute yum.repos.d]# vi OpenStack.repo  
[base]
name=base
baseurl=ftp://controller/openstack/base/
enabled=1
gpgcheck=0
[extras]
name=extras
baseurl=ftp://controller/openstack/extras/
enabled=1
gpgcheck=0
[updates]
name=updates
baseurl=ftp://controller/openstack/updates/
enabled=1
gpgcheck=0
[train]
name=train
baseurl=ftp://controller/openstack/train/
enabled=1
gpgcheck=0
[virt]
name=virt
baseurl=ftp://controller/openstack/virt/
enabled=1
gpgcheck=0

# 4. Clear the old yum cache and rebuild it
[root@compute ~]# yum clean all
[root@compute ~]# yum makecache
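
An optional check that all five repositories resolve through the controller's FTP share:

# base, extras, updates, train, and virt should each be listed with a package count
[root@compute ~]# yum repolist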

5. LAN Time Synchronization

Synchronize time between the controller and compute nodes: the controller node acts as the time server and the compute node as its client.

# 1. Configure the controller node as the NTP server
[root@controller ~]# vi /etc/chrony.conf
###### Comment out the default servers and add Aliyun time sync #########
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst

###### Allow hosts on the same subnet to use this NTP service ###########
# Allow NTP client access from local network.
allow 192.168.10.0/24

# 2. Configure time sync on the compute node
[root@compute ~]# vi /etc/chrony.conf
###### Comment out the default servers and sync from the controller #########
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst

# 3. Restart the time service on both nodes to apply the changes
[root@controller ~]# systemctl restart chronyd
[root@compute ~]# systemctl restart chronyd

# 4. Check the synchronization status
[root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address       Stratum Poll Reach LastRx Last sample               
===========================================================================
^* controller              3   6    17    41  +6800ns[  +15us] +/-   29ms
[root@controller ~]# date
Wed Oct  5 16:32:39 CST 2022
[root@compute ~]# date
Wed Oct  5 16:32:39 CST 2022

6. OpenStack Base Framework Self-Check

Both the controller node and the compute node need the software framework, the cloud platform management client, and OpenStack's SELinux management package installed.
See section 2 of part II above for the installation method.

# Check 1: only the self-built repo file is present
[root@controller ~]# ls /etc/yum.repos.d/
OpenStack.repo  repo.bak
[root@compute ~]# ls /etc/yum.repos.d/
OpenStack.repo  repo.bak

# Check 2:
[root@controller ~]# openstack --version
openstack 4.0.2
[root@compute ~]# openstack --version
openstack 4.0.2

7. MariaDB Database Self-Check

MariaDB only needs to be installed and configured on the controller node.

# 1. Stop the database on the compute node and delete its config file
[root@compute ~]# systemctl stop mariadb
[root@compute ~]# systemctl disable mariadb
[root@compute my.cnf.d]# rm /etc/my.cnf.d/openstack.cnf 
rm: remove regular file ‘/etc/my.cnf.d/openstack.cnf’? yes

# Self-check 1: check the listening port
[root@controller ~]# netstat -tnlup | grep 3306
tcp        0      0 192.168.10.10:3306      0.0.0.0:*               LISTEN      1159/mysqld

# Self-check 2: the controller configuration is fine; try logging in
[root@controller ~]# mysql -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.3.20-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> 

# Self-check 3: a database named mysql exists
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+

8. RabbitMQ Message Queue Self-Check

RabbitMQ enables the OpenStack components to communicate with one another; it only needs to be installed and configured on the controller node.

# 1. Add a rabbitmq user with the password 000000
[root@controller ~]# rabbitmqctl add_user rabbitmq 000000
Creating user "rabbitmq"

# 2. Grant the rabbitmq user configure, write, and read permissions on all message-queue resources
[root@controller ~]# rabbitmqctl set_permissions rabbitmq ".*" ".*" ".*"
Setting permissions for user "rabbitmq" in vhost "/"

# Self-check 1: service ports 25672 and 5672 are listening
[root@controller ~]# netstat -tnlup | grep 5672
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      964/beam.smp        
tcp6       0      0 :::5672                 :::*                    LISTEN      964/beam.smp 

# Self-check 2: list the users
[root@controller ~]# rabbitmqctl list_users
Listing users
rabbitmq	[]
openstack	[]
guest	[administrator]

# Self-check 3: the user's permissions are correct
[root@controller ~]# rabbitmqctl list_user_permissions rabbitmq
Listing permissions for user "rabbitmq"
/	.*	.*	.*

9. Memcached Cache Self-Check

A high-performance distributed memory-object cache that greatly improves the platform's data-distribution efficiency; installed and configured on the controller node only.

# 1. Update the cache configuration to also listen on the controller address
[root@controller ~]# vi /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"

# 2. Restart the cache service
[root@controller ~]# systemctl restart memcached

# Self-check 1: port 11211 is listening
[root@controller ~]# netstat -tnlup|grep 11211
tcp        0      0 192.168.10.10:11211     0.0.0.0:*               LISTEN      5068/memcached      
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      5068/memcached      
tcp6       0      0 ::1:11211               :::*                    LISTEN      5068/memcached 

# Self-check 2: the controller node is reachable via telnet
# Install telnet on the compute node
[root@compute my.cnf.d]# yum install -y telnet
[root@compute ~]# telnet controller 11211
Trying 192.168.10.10...
Connected to controller.
Escape character is '^]'.
stats    <- type stats here to view statistics
STAT pid 5068
STAT uptime 691
quit      <- type quit to exit

10. etcd Key-Value Store Self-Check

etcd stores the configuration of large distributed systems and is used for component registration and service discovery; installed and configured on the controller node only.

# Self-check 1: service ports 2379 and 2380 are listening
[root@controller ~]# netstat -tnlup | grep etcd
tcp        0      0 192.168.10.10:2379      0.0.0.0:*               LISTEN      970/etcd            
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      970/etcd            
tcp        0      0 192.168.10.10:2380      0.0.0.0:*               LISTEN      970/etcd  

# Self-check 2: data can be stored and retrieved
[root@controller ~]# etcdctl set mykey 007
007
[root@controller ~]# etcdctl get mykey
007

11. Persistent Mount at Boot

When the controller node reboots, everything shared from /opt/openstack is lost.
Configure the ISO to be mounted automatically at boot as follows.

# Append the following line to /etc/fstab
[root@controller openstack]# vi /etc/fstab 
/opt/openStack-train.iso  /opt/openstack/   iso9660  defaults,loop  0   0

IV. Keystone Deployment

Performed on the controller node only; taking a snapshot of the controller node first is recommended.

1. Installing and Configuring Keystone

# 1. Install the keystone packages
# mod_wsgi: plugin that adds WSGI support to the web server
# httpd: the Apache package
# openstack-keystone: the Keystone package
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi

# View the keystone user
[root@controller ~]# cat /etc/passwd | grep keystone
keystone:x:163:163:OpenStack Keystone Daemons:/var/lib/keystone:/sbin/nologin

# View the keystone group
[root@controller ~]# cat /etc/group | grep keystone
keystone:x:163:


# 2. Create the keystone database and grant privileges
[root@controller ~]# mysql -uroot -p000000
# Create the database
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.000 sec)
# Grant the keystone user access from localhost
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
# Grant the keystone user access from any remote host
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';         
Query OK, 0 rows affected (0.000 sec)
# Exit the database
MariaDB [(none)]> quit
Bye


# 3. Edit the keystone configuration file
[root@controller ~]# vi /etc/keystone/keystone.conf 
# In the [database] section, add the following database connection
connection=mysql+pymysql://keystone:000000@controller/keystone
# In the [token] section, uncomment the provider line to set the token encryption method
provider = fernet


# 4. Initialize the keystone database
# Sync the database
# su keystone: switch to the keystone user
# '-s /bin/sh': the shell used to execute the command
# '-c': the command to execute
[root@controller ~]# su keystone -s /bin/sh -c "keystone-manage db_sync"

# Check the database
[root@controller ~]# mysql -uroot -p000000
# Switch to the keystone database
MariaDB [(none)]> use keystone;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
# List the tables in the keystone database
MariaDB [keystone]> show tables;
+------------------------------------+
| Tables_in_keystone                 |
+------------------------------------+
| access_rule                        |
| access_token                       |
| application_credential             |
| application_credential_access_rule |
| application_credential_role        |
# Seeing many tables like these means the database sync succeeded.

2. Keystone Initialization

After installation, Keystone's key repositories, identity information, and service registration all need to be initialized.

# 1. Initialize the Fernet key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# This creates /etc/keystone/fernet-keys and generates two fernet keys in it, one for encryption and one for decryption
[root@controller fernet-keys]# pwd
/etc/keystone/fernet-keys
[root@controller fernet-keys]# du -sh *
4.0K	0
4.0K	1

[root@controller keystone]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# This creates the /etc/keystone/credential-keys directory with two fernet keys used to encrypt/decrypt user credentials
[root@controller credential-keys]# pwd
/etc/keystone/credential-keys
[root@controller credential-keys]# du -sh *
4.0K	0
4.0K	1


# 2. Initialize the identity information
# OpenStack ships with a default admin user, but it has no password or other credentials needed to log in. Use `keystone-manage bootstrap` to initialize them.
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 000000 \   # set the password
> --bootstrap-admin-url http://controller:5000/v3 \                            # admin service endpoint
> --bootstrap-internal-url http://controller:5000/v3 \                         # internal service endpoint
> --bootstrap-public-url http://controller:5000/v3 \                           # public service endpoint
> --bootstrap-region-id RegionOne                                              # region ID
# After this command runs, the keystone database holds the credentials needed to log in.


# 3. Configure the web service
# (1) Add WSGI support to Apache
# Symlink wsgi-keystone.conf into the /etc/httpd/conf.d/ directory as an Apache configuration file
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller ~]# ls /etc/httpd/conf.d/
autoindex.conf  README  userdir.conf  welcome.conf  wsgi-keystone.conf

# (2) Modify the Apache configuration and start the server
[root@controller ~]# vi /etc/httpd/conf/httpd.conf 
# set line 96 to the IP address or domain name of this web server
96 ServerName controller

# (3) Start Apache
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.

3. Simulated Login Verification

Environment variables can pass the username, password, and other credentials to Keystone, which then verifies them.

# Create a file holding the identity credentials
[root@controller ~]# vi admin-login
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

# Import the environment variables
[root@controller ~]# source admin-login

# View the current environment
[root@controller ~]# export -p
declare -x OS_AUTH_URL="http://controller:5000/v3"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_IMAGE_API_VERSION="2"
declare -x OS_PASSWORD="000000"
declare -x OS_PROJECT_DOMAIN_NAME="Default"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_NAME="Default"

4. Verifying the Keystone Service

Every operation on an OpenStack component requires Keystone authentication; being able to run openstack management commands shows that Keystone is working.
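
The most direct check is to request a token; if Keystone issues one, authentication works end to end:

# Request a token with the credentials imported from admin-login
[root@controller ~]# openstack token issue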

# Create a project named 'project' in the default domain
[root@controller ~]# openstack project create --domain default project
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | e3a549077f354998aa1a75677cfde62e |
| is_domain   | False                            |
| name        | project                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

# List the existing projects
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4188570a34464b938ed3fa7e08681df8 | admin   |
| e3a549077f354998aa1a75677cfde62e | project |
+----------------------------------+---------+

# Create a role named user
[root@controller ~]# openstack role create user
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | 700ec993d3cf456fa591c03e72f37856 |
| name        | user                             |
| options     | {}                               |
+-------------+----------------------------------+

# List the current roles
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+

# List the existing domains
[root@controller ~]# openstack domain list
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+

# List the existing users
[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| f4f16d960e0643d7b5a35db152c87dae | admin |
+----------------------------------+-------+

V. Glance Deployment

Install the OpenStack image service. Performed on the controller node only.

1. Installing Glance

Install the Glance packages. Performed on the controller node only.

# 1. Install the glance package
# The local source is missing some dependencies, so download the Aliyun repo file to this path
[root@controller yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo  http://mirrors.aliyun.com/repo/Centos-7.repo
[root@controller ~]# yum install -y openstack-glance
# Installation automatically creates the glance user and group
[root@controller ~]# cat /etc/passwd | grep glance
glance:x:161:161:OpenStack Glance Daemons:/var/lib/glance:/sbin/nologin
[root@controller ~]# cat /etc/group | grep glance
glance:x:161:

# 2. Create the glance database and grant privileges
# Connect to the database
[root@controller ~]# mysql -uroot -p000000
# Create the glance database
MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.001 sec)
# Grant the glance user local and remote access to the new database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
# Exit the database
MariaDB [(none)]> quit
Bye

2. Configuring Glance

Glance's configuration file is /etc/glance/glance-api.conf; editing it connects Glance to the database and to Keystone.

# 1. Back up the configuration file
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak

# 2. Strip the comments and blank lines from the configuration file
# grep: search a file for matching strings. -E: use extended regular expressions; -v: invert the match (keep non-matching lines)
# ^: line start; $: line end; |: match either side
[root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf

# 3. Edit the configuration file
# default_store = file: the default backend is the local filesystem
# filesystem_store_datadir = /var/lib/glance/images/ : directory where image files are actually stored
[root@controller ~]# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance

[glance_store]
stores = file
default_store = file                                  
filesystem_store_datadir = /var/lib/glance/images/

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
username = glance
password = 000000
project_name = project
user_domain_name = Default
project_domain_name = Default

[paste_deploy]
flavor = keystone


# 4. Initialize the database
# Sync the database: create the tables that ship with the installed package
[root@controller ~]# su glance -s /bin/sh -c "glance-manage db_sync"

# Check the database
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> use glance
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| alembic_version                  |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
.....

3. Glance Initialization

After Glance is installed and configured, create its user, assign a role, and register its service and service endpoints.

(1) Creating the glance user and assigning a role

# Import the environment variables to log in
[root@controller ~]# source admin-login 
# In the default domain, create a user named glance with the password 000000
[root@controller ~]# openstack user create --domain default --password 000000 glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81238b556a444c8f80cb3d7dc72a24d3 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# List the current projects
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4188570a34464b938ed3fa7e08681df8 | admin   |
| e3a549077f354998aa1a75677cfde62e | project |
+----------------------------------+---------+
# List the existing users
[root@controller ~]# openstack user list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| f4f16d960e0643d7b5a35db152c87dae | admin  |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance |
+----------------------------------+--------+

# Grant the glance user the admin role on the project project
[root@controller ~]# openstack role add --project project --user glance admin
# Show the glance user's details
[root@controller ~]# openstack user show glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 81238b556a444c8f80cb3d7dc72a24d3 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

(2) Creating the glance service and endpoints

# 1. Create the service
# Create a service named glance with type image
[root@controller ~]# openstack service create --name glance image
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 324a07034ea4453692570e3edf73cf2c |
| name    | glance                           |
| type    | image                            |
+---------+----------------------------------+

# 2. Create the image service endpoints
# There are three kinds of endpoints: public, internal, and admin service addresses.
# Create the public endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | ab3208eb36fd4a8db9c90b9113da9bbb |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# Create the internal endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 54994f15e8184e099334760060b9e2a9 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# Create the admin endpoint
[root@controller ~]# openstack endpoint create --region RegionOne glance admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 97ae61936255471f9f55858cc0443e61 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 324a07034ea4453692570e3edf73cf2c |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

# List the service endpoints
[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                       |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+
| 0d31919afb564c8aa52ec5eddf474a55 | RegionOne | keystone     | identity     | True    | admin     | http://controller:5000/v3 |
| 243f1e7ace4f444cba2978b900aeb165 | RegionOne | keystone     | identity     | True    | internal  | http://controller:5000/v3 |
| 54994f15e8184e099334760060b9e2a9 | RegionOne | glance       | image        | True    | internal  | http://controller:9292    |
| 702df46845be40fb9e75fb988314ee90 | RegionOne | keystone     | identity     | True    | public    | http://controller:5000/v3 |
| 97ae61936255471f9f55858cc0443e61 | RegionOne | glance       | image        | True    | admin     | http://controller:9292    |
| ab3208eb36fd4a8db9c90b9113da9bbb | RegionOne | glance       | image        | True    | public    | http://controller:9292    |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------+

(3) Starting Glance

# Enable glance at boot
[root@controller ~]# systemctl enable openstack-glance-api
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
# Start the glance service
[root@controller ~]# systemctl start openstack-glance-api

4. Verifying Glance

Check that the Glance service is working.

# Method 1: check whether port 9292 is listening
[root@controller ~]# netstat -tnlup | grep 9292
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      5740/python2  

# Method 2: check the service state (active (running) means it is running)
[root@controller ~]# systemctl status openstack-glance-api
● openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
   Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-10-19 17:09:13 CST;

5. Creating an Image with Glance

CirrOS is a tiny Linux operating system of only a dozen or so megabytes; Glance will use it here to create an image.

# Install the lrzsz tool
[root@controller ~]# yum install -y lrzsz

# Upload cirros-0.5.1-x86_64-disk.img to the /root directory
[root@controller ~]# rz
z waiting to receive.**B0100000023be50
[root@controller ~]# ls
admin-login      cirros-0.5.1-x86_64-disk.img

# Create the image with glance
[root@controller ~]# openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
# List the images
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| a859fddb-3ec1-4cd8-84ec-482112af929b | cirros | active |
+--------------------------------------+--------+--------+
# Delete the image
[root@controller ~]# openstack image delete a859fddb-3ec1-4cd8-84ec-482112af929b
# Recreate the image
[root@controller ~]# openstack image create --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 1d3062cd89af34e419f7100277f38b2b                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                       |
| created_at       | 2022-10-19T09:20:03Z                                                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                                                      |
| file             | /v2/images/7096885c-0a58-4086-8014-b92affceb0e8/file                                                                                                                                       |
| id               | 7096885c-0a58-4086-8014-b92affceb0e8                                                                                                                                                       |
| min_disk         | 0                                                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                                                          |
| name             | cirros                                                                                                                                                                                     |
| owner            | 4188570a34464b938ed3fa7e08681df8                                                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='553d220ed58cfee7dafe003c446a9f197ab5edf8ffc09396c74187cf83873c877e7ae041cb80f3b91489acf687183adcd689b53b38e3ddd22e627e7f98a09c46', os_hidden='False' |
| protected        | False                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                          |
| size             | 16338944                                                                                                                                                                                   |
| status           | active                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                            |
| updated_at       | 2022-10-19T09:20:03Z                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                       |
| visibility       | public                                                                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

# View the image's physical file
# /var/lib/glance/images/ is the image-storage directory defined in glance-api.conf
[root@controller ~]# ll /var/lib/glance/images/
total 15956
-rw-r----- 1 glance glance 16338944 Oct 19 17:20 7096885c-0a58-4086-8014-b92affceb0e8
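
Optionally, the stored file can be checked against the image record: the checksum field Glance reported above is an MD5 hash, so the two values should match (assuming md5sum is available):

# Should print 1d3062cd89af34e419f7100277f38b2b, the image's checksum field
[root@controller ~]# md5sum /var/lib/glance/images/7096885c-0a58-4086-8014-b92affceb0e8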

VI. Placement Service Deployment

Starting with the OpenStack Stein release, system-resource tracking was split out of Nova into an independent component: Placement.

1. Installing the Placement Package

# Install the placement package
# Installation automatically creates the placement user and group
[root@controller ~]# yum install -y openstack-placement-api

# Confirm the user and group were created
[root@controller ~]# cat /etc/passwd | grep placement
placement:x:993:990:OpenStack Placement:/:/bin/bash
[root@controller ~]# cat /etc/group | grep placement
placement:x:990:

# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the placement database
MariaDB [(none)]> create database placement;
Query OK, 1 row affected (0.000 sec)

# Grant database privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)
MariaDB [(none)]> quit
Bye

2. Configuring Placement

# Back up the configuration file
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
[root@controller ~]# ls /etc/placement/
placement.conf  placement.conf.bak  policy.json

# Strip the comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak  > /etc/placement/placement.conf
[root@controller ~]# cat /etc/placement/placement.conf
[DEFAULT]
[api]
[cors]
[keystone_authtoken]
[oslo_policy]
[placement]
[placement_database]
[profiler]

# Edit the configuration file
[root@controller ~]# vi /etc/placement/placement.conf
[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000

[placement_database]
connection = mysql+pymysql://placement:000000@controller/placement



# Edit the Apache configuration file
# Add the following Directory block inside the VirtualHost section
[root@controller ~]# vi /etc/httpd/conf.d/00-placement-api.conf 
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  ...(omitted)
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
  </Directory>
</VirtualHost>


# Check the Apache version (the IfVersion test above targets 2.4+)
[root@controller ~]# httpd -v
Server version: Apache/2.4.6 (CentOS)
Server built:   Jan 25 2022 14:08:43
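After editing the Apache configuration, restart httpd so the new Directory rules and the Placement WSGI application are loaded (assuming httpd is already running from the Keystone installation):

# Restart Apache to apply the configuration change
[root@controller ~]# systemctl restart httpd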


# Sync the database: populate the placement database with its tables
[root@controller ~]# su placement -s /bin/sh -c "placement-manage db sync"

# Verify the database sync
[root@controller ~]# mysql -uroot -p000000

MariaDB [(none)]> use placement;

MariaDB [placement]> show tables;
+------------------------------+
| Tables_in_placement          |
+------------------------------+
| alembic_version              |
| allocations                  |
| consumers                    |
| inventories                  |
| placement_aggregates         |
| projects                     |
| resource_classes             |
| resource_provider_aggregates |
| resource_provider_traits     |
| resource_providers           |
| traits                       |
| users                        |
+------------------------------+
12 rows in set (0.000 sec)

3、Initialize the Placement component

# Source the environment variables to authenticate as admin
[root@controller ~]# source admin-login

# Create the placement user
[root@controller ~]# openstack user create --domain default --password 000000 placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | e0d6a46f9b1744d8a7ab0332ab45d59c |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the admin role to the placement user
[root@controller ~]# openstack role add --project project --user placement admin

# Create the placement service
[root@controller ~]# openstack service create --name placement placement
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | da038496edf04ce29d7d3d6b8e647755 |
| name    | placement                        |
| type    | placement                        |
+---------+----------------------------------+
# List the services created so far
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
+----------------------------------+-----------+-----------+

# Create the service endpoints
# Placement has three endpoints: public, internal, and admin.
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | da0c279c9a394d0f80e7a33acb9e0d8d |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 79ca63ffd52d4d96b418cdf962c1e3ca |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fbee454f73d64bb18a52d8696c7aa596 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | da038496edf04ce29d7d3d6b8e647755 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

# Verify the endpoint list
[root@controller ~]# openstack endpoint list

4、Verify the Placement component

(1)Two ways to check that the Placement component is running

# Method 1: check port usage (is 8778 listening?)
[root@controller ~]# netstat -tnlup | grep 8778
tcp6       0      0 :::8778                 :::*                    LISTEN      1018/httpd 

# Method 2: query the service endpoint
[root@controller ~]# curl http://controller:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}

(2)Post-installation checks

# 1. Was the placement user created on the controller node?
[root@controller ~]# cat /etc/passwd | grep placement
placement:x:993:990:OpenStack Placement:/:/bin/bash

# 2. Was the placement group created on the controller node?
[root@controller ~]# cat /etc/group | grep placement
placement:x:990:

# 3. Was the placement database created?
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| performance_schema |
| placement          |
+--------------------+

# 4. Check the placement user's database privileges
MariaDB [(none)]> show grants for placement@'%';
MariaDB [(none)]> show grants for placement@'localhost';

# 5. List the tables in the placement database
MariaDB [(none)]> use placement;

MariaDB [placement]> show tables;
+------------------------------+
| Tables_in_placement          |
+------------------------------+
| alembic_version              |
| allocations                  |
| consumers                    |

# 6. Check that the placement OpenStack user was created
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
+----------------------------------+-----------+

# 7. Check that the placement user has the admin role
# Look up the user and role IDs, then match them in the role assignment list
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
+----------------------------------+-----------+
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 81238b556a444c8f80cb3d7dc72a24d3 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | e0d6a46f9b1744d8a7ab0332ab45d59c |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | f4f16d960e0643d7b5a35db152c87dae |       | 4188570a34464b938ed3fa7e08681df8 |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | f4f16d960e0643d7b5a35db152c87dae |       |                                  |        | all    | False     |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+

# 8. Check that the placement service was created
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
+----------------------------------+-----------+-----------+

# 9. Check the placement service endpoints
[root@controller ~]# openstack endpoint list

# 10. Check that the placement service port responds
[root@controller ~]# curl http://controller:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}

七、Compute Service (Nova) Deployment

Nova is responsible for creating, deleting, starting, and stopping cloud server instances.

1、Install and configure the Nova service on the controller node

(1)Install the Nova packages

Four Nova packages must be installed on the controller node:

openstack-nova-api: Nova's interface module to the outside world
openstack-nova-conductor: the conductor service, which mediates Nova's database access
openstack-nova-scheduler: the scheduler service, which selects the host on which to create an instance
openstack-nova-novncproxy: the VNC proxy service, which lets users access instances over VNC

# Install the Nova packages
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-scheduler openstack-nova-novncproxy

# Check the nova user
[root@controller ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin

# Check the nova group
[root@controller ~]# cat /etc/group | grep nova
nobody:x:99:nova
nova:x:162:nova

(2)Create the Nova databases and grant privileges

Nova uses three databases: nova_api, nova_cell0, and nova.

# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the three databases
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| nova               |
| nova_api           |
| nova_cell0         |

# Grant local and remote privileges on each database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';

(3)Modify the Nova configuration file

The Nova configuration file is /etc/nova/nova.conf. Editing it connects Nova to the database, Keystone, and the other components.

  • api_database: connection to the nova_api database.
  • database: connection to the nova database.
  • api, keystone_authtoken: interaction with Keystone.
  • placement: interaction with the Placement component.
  • glance: interaction with the Glance component.
  • oslo_concurrency: lock path; this module provides thread and process locks for OpenStack code, and lock_path tells it where to keep the lock files.
  • DEFAULT: message queue, firewall, and similar settings.
  • vnc: VNC connection settings.

In the configuration file, $ dereferences another option's value; for example, server_listen = $my_ip expands to 192.168.10.10 here.

# Back up the configuration file
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

# Edit the configuration file
[root@controller ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.10
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

(4)Initialize the databases

Populate the databases with the table definitions shipped in the installed files.

# Initialize the nova_api database
[root@controller ~]# su nova -s /bin/sh -c "nova-manage api_db sync"

# Create the 'cell1' cell, which uses the nova database
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1"

# Map cell0 to its database, so that cell0's table structure matches nova's
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"

# Initialize the nova database; because of the mapping, the same tables are created in cell0 (ignore any warnings)
[root@controller ~]# su nova -s /bin/sh -c "nova-manage db sync"

(5)Verify that the cells registered correctly

Registration is correct when both the cell0 and cell1 cells are present.
cell0: system management; it holds records for instances that never reach a compute node
cell1: cloud server management; compute nodes discovered later are mapped into cell1

[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
|  Name |                 UUID                 |             Transport URL              |               Database Connection               | Disabled |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                 none:/                 | mysql+pymysql://nova:****@controller/nova_cell0 |  False   |
| cell1 | 83ad6d17-f245-4310-8729-fccaa033edf2 | rabbit://rabbitmq:****@controller:5672 |    mysql+pymysql://nova:****@controller/nova    |  False   |
+-------+--------------------------------------+----------------------------------------+-------------------------------------------------+----------+

2、Initialize and verify the Nova component on the controller node

(1)Initialize the Nova component

# Source the environment variables to authenticate as admin
[root@controller ~]# source admin-login

# Create the nova user in the default domain
[root@controller ~]# openstack user create --domain default --password 000000 nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 2f5041ed122d4a50890c34ea02881b47 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the admin role to the nova user
[root@controller ~]# openstack role add --project project --user nova admin

# Create the nova service, of type compute
[root@controller ~]# openstack service create --name nova compute
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | e7cccf0a4d2549139801ac51bb8546db |
| name    | nova                             |
| type    | compute                          |
+---------+----------------------------------+

# Create the service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne nova public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c60a9641abbb47b391751c9a0b0d6828 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne nova internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 49b042b01ad44784888e65366d61dede |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 6dd22acff2ab4c2195cefee39f371cc0 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | e7cccf0a4d2549139801ac51bb8546db |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint list | grep nova
| 49b042b01ad44784888e65366d61dede | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 6dd22acff2ab4c2195cefee39f371cc0 | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| c60a9641abbb47b391751c9a0b0d6828 | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |


# Enable the controller node's Nova services at boot
[root@controller ~]# systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.

# Start the Nova services
[root@controller ~]# systemctl start openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

(2)Verify the controller node's Nova services

Nova occupies ports 8774 and 8775; checking whether those ports are listening shows whether the Nova service is running.

If the nova-conductor and nova-scheduler modules on the controller node both show as up, the services are healthy.

# Method 1: check port usage
[root@controller ~]# netstat -nutpl | grep 877
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      2487/python2        
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      2487/python2        
tcp6       0      0 :::8778                 :::*                    LISTEN      1030/httpd 

# Method 2: list the compute services
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T10:53:26.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T10:53:28.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

3、Install and configure the Nova service on the compute node

On the compute node Nova needs the nova-compute module; every cloud server is spawned by this module on a compute node.

(1)Install the Nova packages

# Copy the Aliyun repo file over from the controller node
[root@compute yum.repos.d]# scp root@192.168.10.10:/etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/
root@192.168.10.10's password: 
CentOS-Base.repo                                                        100% 2523     1.1MB/s   00:00    
[root@compute yum.repos.d]# ls
CentOS-Base.repo  OpenStack.repo  repo.bak

# Install Nova's compute module
[root@compute yum.repos.d]# yum install -y openstack-nova-compute

# Check the user entry
[root@compute ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin

# Check the group entries
[root@compute ~]# cat /etc/group | grep nova
nobody:x:99:nova
qemu:x:107:nova
libvirt:x:987:nova
nova:x:162:nova

(2)Modify the Nova configuration file

Nova's configuration file is /etc/nova/nova.conf. Editing it connects Nova to the database, Keystone, and the other components.

The main differences from the controller node's configuration:

  • my_ip = 192.168.10.20
  • the [libvirt] section gains virt_type = qemu (see the check after this list)
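virt_type = qemu selects full software emulation, the safe choice inside nested virtual machines. The official install guides suggest checking for hardware acceleration first; a sketch of that check:

# Count CPU virtualization flags; if this prints 0, keep virt_type = qemu,
# otherwise kvm is also an option
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo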
# Back up the configuration file
[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak

# Strip comments and blank lines from the configuration file
[root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

# Edit the configuration file
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[glance]
api_servers = http://controller:9292

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000

[libvirt]
virt_type = qemu

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = placement
password = 000000
region_name = RegionOne

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.10.10:6080/vnc_auto.html
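novncproxy_base_url is the address browsers will use to reach instance consoles through the controller's VNC proxy. Once an instance exists, its console URL can be requested from the controller (hypothetical server name myvm):

# Ask Nova for the noVNC console URL of a running instance
[root@controller ~]# openstack console url show myvm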

(3)Start the compute node's Nova service

# Enable at boot
[root@compute ~]# systemctl enable libvirtd openstack-nova-compute
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.

# Start
[root@compute ~]# systemctl start libvirtd openstack-nova-compute

# Check the service status from the controller node
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T11:19:57.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T11:19:49.000000 |
|  6 | nova-compute   | compute    | nova     | enabled | up    | 2022-10-28T11:19:56.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

4、Discover the compute node and verify the service

Every compute node that joins the system must be discovered once from the controller node; only discovered compute nodes can be mapped into a cell.

(1)Discover compute nodes

Note: run these commands on the controller node.

# Authenticate as admin
[root@controller ~]# source admin-login

# As the nova user, discover any unregistered compute nodes
# Discovered nodes are automatically associated with cell1, through which they are then managed
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose"
Found 2 cell mappings.
Getting computes from cell 'cell1': 83ad6d17-f245-4310-8729-fccaa033edf2
Checking host mapping for compute host 'compute': 13af5106-c1c1-4b3f-93f5-cd25e030f39d
Creating host mapping for compute host 'compute': 13af5106-c1c1-4b3f-93f5-cd25e030f39d
Found 1 unmapped computes in cell: 83ad6d17-f245-4310-8729-fccaa033edf2
Skipping cell0 since it does not contain hosts.
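To double-check the mapping, the hosts known to each cell can be listed (assuming this release's nova-manage includes the list_hosts subcommand):

# List the compute hosts mapped into each cell
[root@controller ~]# su nova -s /bin/sh -c "nova-manage cell_v2 list_hosts"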

# Enable automatic discovery
# 1. Run the discovery command every 60 seconds
[root@controller ~]# vi /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 60
# 2. Restart nova-api so the setting takes effect
[root@controller ~]# systemctl restart openstack-nova-api

(2)Verify the Nova service

All commands below run on the controller node.

# Method 1: list the compute services
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host       | Zone     | Status  | State | Updated At                 |
+----+----------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor | controller | internal | enabled | up    | 2022-10-28T12:02:46.000000 |
|  4 | nova-scheduler | controller | internal | enabled | up    | 2022-10-28T12:02:38.000000 |
|  6 | nova-compute   | compute    | nova     | enabled | up    | 2022-10-28T12:02:40.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+

# Method 2: list the OpenStack services and endpoints
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   admin: http://controller:5000/v3      |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3     |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

# Method 3: run the Nova status check tool
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

5、Post-installation checks

# 1. Check the nova user and group on the controller node
[root@controller ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@controller ~]# cat /etc/group | grep nova
nobody:x:99:nova
nova:x:162:nova

# 2. Check the nova user and group on the compute node
[root@compute ~]# cat /etc/passwd | grep nova
nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/sbin/nologin
[root@compute ~]# cat /etc/group | grep nova
nobody:x:99:nova
qemu:x:107:nova
libvirt:x:987:nova
nova:x:162:nova

# 3. Check the databases on the controller node
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| nova               |
| nova_api           |
| nova_cell0         |

# 4. Check the nova user's database privileges
MariaDB [(none)]> show grants for nova@'%';
+-----------------------------------------------------------------------------------------------------+
| Grants for nova@%                                                                                   |
+-----------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'nova'@'%' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'%'                                                      |
| GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova'@'%'                                                  |
| GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'%'                                                |
+-----------------------------------------------------------------------------------------------------+
4 rows in set (0.000 sec)

MariaDB [(none)]> show grants for nova@'localhost';
+-------------------------------------------------------------------------------------------------------------+
| Grants for nova@localhost                                                                                   |
+-------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'nova'@'localhost' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `nova`.* TO 'nova'@'localhost'                                                      |
| GRANT ALL PRIVILEGES ON `nova_cell0`.* TO 'nova'@'localhost'                                                |
| GRANT ALL PRIVILEGES ON `nova_api`.* TO 'nova'@'localhost'  

# 5. Check that the nova, nova_api, and nova_cell0 database tables were synced
MariaDB [(none)]> use nova
Database changed

MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |

# 6. Check that the nova user exists
[root@controller ~]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| f4f16d960e0643d7b5a35db152c87dae | admin     |
| 81238b556a444c8f80cb3d7dc72a24d3 | glance    |
| e0d6a46f9b1744d8a7ab0332ab45d59c | placement |
| 2f5041ed122d4a50890c34ea02881b47 | nova      |

# 7. Check that the nova user has the admin role
[root@controller ~]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 47670bbd6cc1472ab42db560637c7ebe | reader |
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
| 700ec993d3cf456fa591c03e72f37856 | user   |
| bc2c8147bbd643629a020a6bd9591eca | member |
+----------------------------------+--------+
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 2f5041ed122d4a50890c34ea02881b47 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |

# 8. Check that the nova service entity was created
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 5d25b4ed1443497599707e043866eaae | keystone  | identity  |
| da038496edf04ce29d7d3d6b8e647755 | placement | placement |
| e7cccf0a4d2549139801ac51bb8546db | nova      | compute   |

# 9. Check the nova service endpoints
[root@controller ~]# openstack endpoint list | grep nova
| 49b042b01ad44784888e65366d61dede | RegionOne | nova         | compute      | True    | internal  | http://controller:8774/v2.1 |
| 6dd22acff2ab4c2195cefee39f371cc0 | RegionOne | nova         | compute      | True    | admin     | http://controller:8774/v2.1 |
| c60a9641abbb47b391751c9a0b0d6828 | RegionOne | nova         | compute      | True    | public    | http://controller:8774/v2.1 |

# 10. Check that the Nova service is running normally
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

八、Network Service (Neutron) Deployment

Neutron creates and manages virtual network devices, including bridges, networks, and ports.

1、Prepare the network environment

(1)Put the external NICs into promiscuous mode

A NIC in promiscuous mode captures every frame that passes through it, not just frames addressed to it.
Neutron needs the external NIC in promiscuous mode so that virtual network traffic can be forwarded.

# Configure the controller node
[root@controller ~]# ifconfig ens34 promisc
[root@controller ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.10  netmask 255.255.255.0  broadcast 192.168.10.255
		... (omitted)

ens34: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500    <---- note the PROMISC flag

# Configure the compute node
[root@compute ~]# ifconfig ens34 promisc
[root@compute ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.20  netmask 255.255.255.0  broadcast 192.168.10.255
		... (omitted)

ens34: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500

If "PROMISC" appears in the NIC flags, promiscuous mode is on and the NIC will accept all traffic passing through it.

Next, make promiscuous mode reapply automatically after a reboot:

# Run on the controller node
[root@controller ~]# echo 'ifconfig ens34 promisc' >> /etc/profile
[root@controller ~]# tail -1 /etc/profile
ifconfig ens34 promisc

# Run on the compute node
[root@compute ~]# echo 'ifconfig ens34 promisc' >> /etc/profile
[root@compute ~]# tail -1 /etc/profile
ifconfig ens34 promisc
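Note that /etc/profile only runs when someone logs in, so this reapplies promiscuous mode at login rather than at boot. An alternative sketch that runs once at boot on CentOS 7 uses rc.local:

# On both nodes: reapply promiscuous mode at boot instead of at login
echo 'ifconfig ens34 promisc' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local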

(2)Load the bridge netfilter module

Netfilter is a framework inside the Linux kernel for managing network packets; it provides network address translation (NAT), packet mangling, and packet filtering.

# 1. Edit the kernel parameter configuration file
# On the controller node
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

[root@controller ~]# tail -n 2 /etc/sysctl.conf 
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# On the compute node
[root@compute ~]#  echo 'net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

[root@compute ~]#  tail -n 2 /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# 2. Load the br_netfilter module on each node
[root@controller ~]# modprobe br_netfilter
[root@compute ~]# modprobe br_netfilter

# 3. Verify the settings on each node
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
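modprobe only loads br_netfilter until the next reboot, after which sysctl -p would fail because these kernel parameters do not exist without the module. A sketch that makes systemd load the module at every boot:

# On both nodes: load br_netfilter automatically at boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf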

2、Install and configure the Neutron service on the controller node

(1)Install the Neutron packages

openstack-neutron: the neutron-server package.
openstack-neutron-ml2: the ML2 plug-in package.
openstack-neutron-linuxbridge: the Linux bridge and network provider packages.

# Install the packages
# The dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm package is missing from the Aliyun mirror, so fetch it first
[root@controller ~]# yum install -y wget
[root@controller ~]# wget http://mirror.centos.org/centos/7/updates/x86_64/Packages/dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
[root@controller ~]# ls
admin-login      cirros-0.5.1-x86_64-disk.img    dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm
[root@controller ~]#  rpm -ivh dnsmasq-utils-2.76-17.el7_9.3.x86_64.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
   1:dnsmasq-utils-2.76-17.el7_9.3    ################################# [100%]

[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge

# Check the user entry
[root@controller ~]# cat /etc/passwd | grep neutron
neutron:x:990:987:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin

# Check the group entry
[root@controller ~]# cat /etc/group | grep neutron
neutron:x:987:

(2)Create the Neutron database and grant privileges

Neutron needs only one database, conventionally named neutron.

# 1. Log in and create the database
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |

# 2. Grant local and remote privileges on the database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.000 sec)

(3)Modify the Neutron configuration files

  1. Configure the Neutron component
  • Edit the [DEFAULT] and [keystone_authtoken] sections for the Keystone interaction
  • Edit the [database] section for the database connection
  • Edit the [DEFAULT] section for the message queue, core plug-in, and related settings
  • Edit [oslo_concurrency] to set the lock path
# Back up the configuration file
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

# Edit the configuration file
[root@controller ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://rabbitmq:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:000000@controller/neutron

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = nova
password = 000000
region_name = RegionOne
server_proxyclient_address = 192.168.10.10
  2. Modify the Layer 2 (ML2) plug-in configuration file
    The ML2 plug-in is Neutron's core plug-in.
# Back up the configuration file
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
# Strip comments and blank lines from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

# Edit the configuration file
[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,local,vlan,gre,vxlan,geneve
tenant_network_types = local,flat
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true

# Create the symlink that enables the ML2 plug-in
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# ll /etc/neutron/
lrwxrwxrwx  1 root root       37 Nov  4 20:01 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
  3. Modify the bridge agent configuration file
    The mechanism driver (mechanism_drivers) was set to linuxbridge in the ML2 configuration file above, so configure its agent here.
# 1. Back up the configuration file
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

# 2. Strip comments and blank lines
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]

# 3. Edit the configuration file
[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  4. Modify the DHCP agent configuration file
    The dhcp-agent provides automatic IP address assignment for cloud servers.
# 1. Back up the configuration file and strip blank lines and comments
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
[root@controller ~]# cat /etc/neutron/dhcp_agent.ini
[DEFAULT]

# 2. Edit the configuration file
[root@controller ~]# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
  5. Modify the metadata agent configuration file
    Cloud servers run on compute nodes but must talk to the nova-api module on the controller node; that exchange goes through neutron-metadata-agent.
# 1. Back up the configuration file and strip blank lines and comments
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak  > /etc/neutron/metadata_agent.ini
[root@controller ~]# cat /etc/neutron/metadata_agent.ini
[DEFAULT]
[cache]

# 2. Edit the configuration file
[root@controller ~]# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
[cache]
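METADATA_SECRET here is a placeholder for a shared secret; the same value must also appear in Nova's [neutron] section, which item 6 below appends. For a real deployment, a random value can be generated, for example:

# Generate a random shared secret and use it in both configuration files
[root@controller ~]# openssl rand -hex 16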
  6. Modify the Nova configuration file
    Nova sits at the core of the platform, so its configuration must state how to interact with Neutron.
# Note which file this appends to
[root@controller ~]# echo '
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = project
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
' >> /etc/nova/nova.conf

(4)Initialize the database

Sync the Neutron database: populate it with the table definitions shipped in the installed files.

# Sync the database
[root@controller neutron]# su neutron -s /bin/sh -c "neutron-db-manage \
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"

# Verify the result
[root@controller neutron]# mysql -uroot -p000000
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |

3、Initialize the Neutron component

All of these tasks are performed on the controller node.

(1)Create the neutron user and assign a role

# Authenticate as admin
[root@controller ~]# source admin-login
# Create the neutron user in the default domain
[root@controller ~]# openstack user create --domain default --password 000000 neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 67bd1f9c48174e3e96bb41e0f76687ca |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the admin role to the neutron user
[root@controller ~]# openstack role add --project project --user neutron admin

# Verify
[root@controller ~]# openstack role assignment list
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | System | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+
| 5eee0910aeb844a1b82f48100da7adc9 | 2f5041ed122d4a50890c34ea02881b47 |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |
| 5eee0910aeb844a1b82f48100da7adc9 | 67bd1f9c48174e3e96bb41e0f76687ca |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |

(2)Create the neutron service and endpoints

# Create the neutron service, of type network
[root@controller ~]# openstack service create --name neutron network
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 459c365a11c74e5894b718b5406022a8 |
| name    | neutron                          |
| type    | network                          |
+---------+----------------------------------+

# Create the three service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne neutron public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1d59d497c89c4fa9b8789d685fab9fe5 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne neutron internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 44de22606819441aa845b370a9304bf5 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne neutron admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 75e7eaf8bc664a2c901b7ad58141bedc |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 459c365a11c74e5894b718b5406022a8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+

(3)Start the Neutron services on the controller node

Because Nova's configuration file was modified, restart the Nova service before starting Neutron.

# Restart the nova service
[root@controller ~]# systemctl restart openstack-nova-api

# Enable the services at boot
[root@controller ~]# systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.

[root@controller neutron]# systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

4、Verify the Neutron services on the controller node

# Method 1: check port usage
[root@controller neutron]# netstat -tnlup|grep 9696
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      4652/server.log

# Method 2: query the service endpoint
[root@controller neutron]# curl http://controller:9696
{"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://controller:9696/v2.0/", "rel": "self"}]}]}

# Method 3: check the service status
# Loaded: enabled means the service is set to start at boot
# Active: active (running) means the service is currently running
[root@controller neutron]# systemctl status neutron-server
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2022-11-11 16:31:20 CST; 5min ago
 Main PID: 4652 (/usr/bin/python)

5、Install and configure the Neutron service on the compute node

All of these steps are performed on the compute node.

(1)Install the Neutron packages

# Install the bridge and network provider packages on the compute node
[root@compute ~]# yum install -y openstack-neutron-linuxbridge

# Check the neutron user and group
[root@compute ~]# cat /etc/passwd | grep neutron
neutron:x:989:986:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@compute ~]# cat /etc/group | grep neutron
neutron:x:986:

(2)Modify the Neutron configuration files

Configuration is needed for the Neutron component, the bridge agent, and the Nova component.

  1. Neutron configuration file
# Back up the configuration file
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
# Strip blank lines and comments
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
[root@compute ~]# cat /etc/neutron/neutron.conf
[DEFAULT]
[cors]
[database]
[keystone_authtoken]

# Edit the Neutron configuration file
[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://rabbitmq:000000@controller:5672
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
  2. Bridge agent configuration file
# Back up the bridge agent configuration file and strip blank lines and comments
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# Edit the bridge agent configuration file
[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens34

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  3. Nova configuration file
# In the Nova configuration file, add two lines to the [DEFAULT] section and add a [neutron] section
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enable_apis = osapi_compute,metadata
transport_url = rabbit://rabbitmq:000000@controller:5672
my_ip = 192.168.10.20
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
vif_plugging_is_fatal = false
vif_plugging_timeout = 0

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 000000

(3)Start the compute node's Neutron service

[root@compute ~]# systemctl restart openstack-nova-compute
[root@compute ~]# systemctl enable neutron-linuxbridge-agent
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@compute ~]# systemctl start neutron-linuxbridge-agent

6、Verify the Neutron services

Two ways to check the Neutron component's state; both run on the controller node.

# Method 1: list the network agents
# Four agents should appear, all in the UP state
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0e2c0f8f-8fa7-4b64-8df2-6f1aedaa7c2b | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c6688165-593d-4c5e-b25c-5ff2b6c75866 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| dc335348-5639-40d1-b121-3abfc9aefc8e | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ddc49378-aea8-4f2e-b1b4-568fa4c85038 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

# Method 2: run the Neutron status check tool
[root@controller ~]# neutron-status upgrade check
+---------------------------------------------------------------------+
| Upgrade Check Results                                               |
+---------------------------------------------------------------------+
| Check: Gateway external network                                     |
| Result: Success                                                     |
| Details: L3 agents can use multiple networks as external gateways.  |
+---------------------------------------------------------------------+
| Check: External network bridge                                      |
| Result: Success                                                     |
| Details: L3 agents are using integration bridge to connect external |
|   gateways                                                          |
+---------------------------------------------------------------------+
| Check: Worker counts configured                                     |
| Result: Warning                                                     |
| Details: The default number of workers has changed. Please see      |
|   release notes for the new values, but it is strongly              |
|   encouraged for deployers to manually set the values for           |
|   api_workers and rpc_workers.                                      |
+---------------------------------------------------------------------+
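The Warning above is advisory: it recommends pinning the worker counts instead of relying on changing defaults. If desired, they can be set in the [DEFAULT] section of /etc/neutron/neutron.conf (example values, to be tuned to the node's CPU count) and neutron-server restarted:

# Optional: pin the API and RPC worker counts
[DEFAULT]
api_workers = 2
rpc_workers = 1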

7、Post-installation checks

# 1. The controller node's external NIC is in promiscuous mode (PROMISC)
[root@controller ~]# ip a
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

# 2. The compute node's external NIC is in promiscuous mode (PROMISC)
[root@compute ~]# ip a
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

# 3. The neutron user and group exist on the controller node
[root@controller ~]# cat /etc/passwd | grep neutron
neutron:x:990:987:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@controller ~]# cat /etc/group | grep neutron
neutron:x:987:

# 4. The neutron user and group exist on the compute node
[root@compute ~]# cat /etc/passwd | grep neutron
neutron:x:989:986:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@compute ~]# cat /etc/group | grep neutron
neutron:x:986:

# 5. The neutron database exists on the controller node
[root@controller ~]# mysql -uroot -p000000
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |

# 6. Check the neutron user's database privileges
MariaDB [(none)]> show grants for neutron;
+--------------------------------------------------------------------------------------------------------+
| Grants for neutron@%                                                                                   |
+--------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'neutron'@'%' IDENTIFIED BY PASSWORD '*032197AE5731D4664921A6CCAC7CFCE6A0698693' |
| GRANT ALL PRIVILEGES ON `neutron`.* TO 'neutron'@'%'                                                   |
+--------------------------------------------------------------------------------------------------------+

# 7. Check that the neutron database tables were synced
MariaDB [(none)]> use neutron;
Database changed
MariaDB [neutron]> show tables;
+-----------------------------------------+
| Tables_in_neutron                       |
+-----------------------------------------+
| address_scopes                          |
| agents                                  |
| alembic_version                         |
| allowedaddresspairs                     |
| arista_provisioned_nets                 |
| arista_provisioned_tenants              |

# 8. Check the OpenStack user list
[root@controller ~]# openstack user list | grep neutron
| 67bd1f9c48174e3e96bb41e0f76687ca | neutron   |

# 9. Check that the neutron user has the admin role
[root@controller ~]# openstack role list | grep admin
| 5eee0910aeb844a1b82f48100da7adc9 | admin  |
[root@controller ~]# openstack role assignment list | grep 67bd1f9c48174e3e96bb41e0f76687ca
| 5eee0910aeb844a1b82f48100da7adc9 | 67bd1f9c48174e3e96bb41e0f76687ca |       | e3a549077f354998aa1a75677cfde62e |        |        | False     |

# 10. Check that the neutron service entity was created
[root@controller ~]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 324a07034ea4453692570e3edf73cf2c | glance    | image     |
| 459c365a11c74e5894b718b5406022a8 | neutron   | network   |

# 11. Check that neutron's three endpoints were created
[root@controller ~]# openstack endpoint list | grep neutron
| 1d59d497c89c4fa9b8789d685fab9fe5 | RegionOne | neutron      | network      | True    | public    | http://controller:9696      |
| 44de22606819441aa845b370a9304bf5 | RegionOne | neutron      | network      | True    | internal  | http://controller:9696      |
| 75e7eaf8bc664a2c901b7ad58141bedc | RegionOne | neutron      | network      | True    | admin     | http://controller:9696      |

# 12. List the network agents and check that the services are running
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0e2c0f8f-8fa7-4b64-8df2-6f1aedaa7c2b | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c6688165-593d-4c5e-b25c-5ff2b6c75866 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| dc335348-5639-40d1-b121-3abfc9aefc8e | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ddc49378-aea8-4f2e-b1b4-568fa4c85038 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

九、Dashboard service (Horizon) deployment

OpenStack provides a project named Horizon, which offers a graphical interface for operating the OpenStack cloud platform.
Horizon's main deliverable is a web front-end console, and that console software is called the Dashboard.

1、Install the Dashboard package

Install the Dashboard package on the compute node.

[root@compute ~]# yum install -y openstack-dashboard

2、Edit the Horizon configuration file

# Edit the Horizon configuration file
[root@compute ~]# vi /etc/openstack-dashboard/local_settings
# Location of the controller node
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3
}
# Default domain for users created through the Dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# Default role for users created through the Dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# Configure for a Layer 2 (provider) network: routing features disabled
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': False,
    'enable_ipv6': False,
    'enable_quotas': False,
    'enable_rbac_policy': False,
    'enable_router': False,

    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
# Configure the time zone
TIME_ZONE = "Asia/Shanghai"

# Allow access from any host
ALLOWED_HOSTS = ['*']

# Configure the caching service
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
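
Since Horizon stores sessions in memcached on the controller, it is worth confirming that port 11211 is reachable from the compute node before starting Apache. A quick probe using bash's built-in /dev/tcp (no extra tools assumed):

# Exit status 0 (prints "reachable") means memcached on the controller answers
[root@compute ~]# timeout 2 bash -c 'echo > /dev/tcp/controller/11211' && echo reachable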

3、Regenerate the Dashboard configuration file for Apache

The Dashboard is a web application and must run on a web server such as Apache, so Apache has to be told how to run the service.

# Enter the Dashboard site directory
[root@compute ~]# cd /usr/share/openstack-dashboard/
[root@compute openstack-dashboard]# ls
manage.py  manage.pyc  manage.pyo  openstack_dashboard  static

# Generate the Dashboard web service file for Apache
[root@compute openstack-dashboard]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
[root@compute openstack-dashboard]# cat /etc/httpd/conf.d/openstack-dashboard.conf 
<VirtualHost *:80>
    ServerAdmin webmaster@openstack.org
    ServerName  openstack_dashboard
    DocumentRoot /usr/share/openstack-dashboard/
    LogLevel warn
    ErrorLog /var/log/httpd/openstack_dashboard-error.log
    CustomLog /var/log/httpd/openstack_dashboard-access.log combined
    WSGIScriptReloading On
    WSGIDaemonProcess openstack_dashboard_website processes=3
    WSGIProcessGroup openstack_dashboard_website
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
    <Location "/">
        Require all granted
    </Location>
    Alias /static /usr/share/openstack-dashboard/static
    <Location "/static">
        SetHandler None
    </Location>
</Virtualhost>

This generates a configuration file in Apache's web service configuration directory.
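
Before restarting Apache, the generated file can be validated:

# Apache syntax check; "Syntax OK" means the generated file parses
[root@compute ~]# httpd -t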

4、Create a symlink for the policy files

The /etc/openstack-dashboard/ directory holds the default policies the Dashboard uses when interacting with other components.

# View the default interaction policies
[root@compute ~]# cd /etc/openstack-dashboard/
[root@compute openstack-dashboard]# ls
cinder_policy.json  keystone_policy.json  neutron_policy.json  nova_policy.json
glance_policy.json  local_settings        nova_policy.d

# Link the policies into the Dashboard project so they take effect
[root@compute openstack-dashboard]# ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
[root@compute openstack-dashboard]# ll /usr/share/openstack-dashboard/openstack_dashboard/
total 240
drwxr-xr-x  3 root root  4096 Nov 18 15:00 api
lrwxrwxrwx  1 root root    24 Nov 18 15:33 conf -> /etc/openstack-dashboard

5、Start the service and verify

# Enable the Apache service at boot and restart it
[root@compute ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@compute ~]# systemctl restart httpd

Visit the compute node address (the Dashboard address): http://192.168.10.20

Enter domain: Default, user name: admin, password: 000000, and click the "Sign In" button.
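
If the login page does not load in a browser, a command-line probe helps separate an Apache problem from a network problem (using the compute node address from this setup):

# A 200 or 30x response means Apache is serving the Dashboard
[root@controller ~]# curl -I http://192.168.10.20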


十、Block Storage service (Cinder) deployment

Deploy and configure the Cinder service on the controller and compute nodes.

1、Install and configure Cinder on the controller node

(1)Install the Cinder packages

The openstack-cinder package includes the cinder-api and cinder-scheduler modules.

# Install the cinder package
[root@controller ~]# yum install -y openstack-cinder

# Check the cinder user and group
[root@controller ~]# cat /etc/passwd | grep cinder
cinder:x:165:165:OpenStack Cinder Daemons:/var/lib/cinder:/sbin/nologin
[root@controller ~]# cat /etc/group | grep cinder
nobody:x:99:nova,cinder
cinder:x:165:cinder

(2)Create the Cinder database and grant privileges

# Log in to the database
[root@controller ~]# mysql -uroot -p000000

# Create the cinder database
MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.004 sec)

# Grant the cinder user local and remote access
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '000000';
Query OK, 0 rows affected (0.007 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '000000'; 
Query OK, 0 rows affected (0.000 sec)

(3)Edit the Cinder configuration file

# Back up the configuration file
[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# Strip blank lines and comments from the configuration file
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit the configuration file
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://rabbitmq:000000@controller:5672

[database]
connection = mysql+pymysql://cinder:000000@controller/cinder

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password 
username = cinder
password = 000000 
project_name = project
user_domain_name = Default
project_domain_name = Default

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

(4)Edit the Nova configuration file

Cinder has to interact with Nova, so the Nova configuration needs to be updated.

[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

(5)Initialize the Cinder database

# Run the initialization to sync the database
[root@controller ~]# su cinder -s /bin/sh -c "cinder-manage db sync"
Deprecated: Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".

# Verify by listing the tables in the cinder database
MariaDB [cinder]> show tables;
+----------------------------+
| Tables_in_cinder           |
+----------------------------+
| attachment_specs           |
| backup_metadata            |
| backups                    |

(6)Create the cinder user and assign a role

# Simulate login (load the admin credentials)
[root@controller ~]# source admin-login 

# Create the cinder user on the platform
[root@controller ~]# openstack user create --domain default --password 000000 cinder
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b9a2bdfcbf3b445ab0db44c9e35af678 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

# Assign the admin role to the cinder user
[root@controller ~]# openstack role add --project project --user cinder admin
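
The assignment can be verified right away; the --names option resolves the IDs into readable names:

# Confirm that cinder holds the admin role on project
[root@controller ~]# openstack role assignment list --user cinder --project project --names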

(7)Create the Cinder service and endpoints

Cinder in OpenStack (Train release) serves version 3 of the volume API.

# Create the service
[root@controller ~]# openstack service create --name cinderv3 volumev3
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | 90dc0dcf9879493d98144b481ea0df2b |
| name    | cinderv3                         |
| type    | volumev3                         |
+---------+----------------------------------+

# Create the service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 6bb167be751241d1922a81b6b4c18898         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | e8ad2286c57443a5970e9d17ca33076a         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | dd6d3b221e244cd5a5bb6a2b33159c1d         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 90dc0dcf9879493d98144b481ea0df2b         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
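
As with Neutron, all three endpoints can be confirmed at a glance:

# Expect one line each for the public, internal and admin interfaces
[root@controller ~]# openstack endpoint list | grep cinderv3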

(8)Start the Cinder services

# Restart the nova service (its configuration file was changed)
[root@controller ~]# systemctl restart openstack-nova-api

# Enable at boot
[root@controller ~]# systemctl enable openstack-cinder-api openstack-cinder-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.

# Start immediately
[root@controller ~]# systemctl start openstack-cinder-api openstack-cinder-scheduler

(9)Verify the Cinder services on the controller node

# Method 1: check that port 8776 is listening
[root@controller ~]# netstat -nutpl | grep 8776
tcp        0      0 0.0.0.0:8776            0.0.0.0:*               LISTEN      15517/python2

# Method 2: list the volume services and check they are in the UP state
[root@controller ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2022-11-18T11:08:47.000000 |
+------------------+------------+------+---------+-------+----------------------------+

2、Set up the storage node

(1)Add a disk to the compute node

In VMware, select the compute node ——》 click "VM" ——》 click "Settings".

On the virtual machine settings page, click "Add" ——》 choose "Hard Disk" as the hardware type and click Next ——》 keep the default disk type and click Next ——》 select "Create a new virtual disk" and click Next ——》 set the disk size to 20 GB or more (either allocation method) and click Next ——》 click "Finish".

(2)Create the volume group

Logical Volume Manager (LVM) is a mechanism for managing disk partitions on Linux. It can combine several disks (physical volumes) into a storage pool called a volume group (Volume Group).
From a volume group, LVM can carve out logical volumes (Logical Volume) of different sizes to create new logical devices.
Cinder can use LVM to manage block devices (volumes).

# 1. Check the system's disks and mount points
[root@compute ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   40G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   39G  0 part 
  ├─centos-root 253:0    0   35G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm  [SWAP]
sdb               8:16   0   40G  0 disk     <----- sdb has not been partitioned or mounted yet
sr0              11:0    1 1024M  0 rom  

# 2. Create the LVM physical volume and volume group
# 2.1 Initialize the disk as a physical volume
[root@compute ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

# 2.2 Merge physical volumes into a volume group
# Syntax: vgcreate <volume-group-name> <physical-volume>...
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created

# 2.3 Edit the LVM configuration
# In the devices section of the config file, add a filter that accepts only /dev/sdb
# a means accept, r means reject
[root@compute ~]# vi /etc/lvm/lvm.conf
devices {
        filter = ["a/sdb/","r/.*/"]

# 3. Start the LVM metadata service
[root@compute ~]# systemctl enable lvm2-lvmetad
[root@compute ~]# systemctl start lvm2-lvmetad
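
The volume group can now be checked. For illustration only, a small logical volume can also be carved out by hand, which is the same operation cinder-volume performs automatically later (remove it afterwards so the space stays free for Cinder):

# Inspect the physical volume and the volume group
[root@compute ~]# pvs
[root@compute ~]# vgs
# Optional demonstration: create and immediately remove a 1G logical volume
[root@compute ~]# lvcreate -L 1G -n demo-vol cinder-volumes
[root@compute ~]# lvremove -f cinder-volumes/demo-vol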

3、Install and configure the storage node

All of the following steps are performed on the compute node.

(1)Install the Cinder-related packages

openstack-cinder is the Cinder package itself;
targetcli is a command-line tool for managing Linux storage resources;
python-keystone is the plugin that connects to Keystone.

[root@compute ~]# yum install -y openstack-cinder targetcli python-keystone

(2)Edit the configuration file

# Back up the configuration file
[root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
# Strip blank lines and comments
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

# Edit the configuration file
# The volume_group value must match the volume group created in the LVM step: cinder-volumes
[root@compute ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://rabbitmq:000000@controller:5672
enabled_backends = lvm
glance_api_servers = http://controller:9292

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm

[database]
connection = mysql+pymysql://cinder:000000@controller/cinder

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password 
username = cinder
password = 000000 
project_name = project
user_domain_name = Default
project_domain_name = Default

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

(3)Start the cinder services on the storage node

[root@compute ~]# systemctl enable openstack-cinder-volume target
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@compute ~]# systemctl start openstack-cinder-volume target
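
targetcli can confirm that the LIO target service is responding; its tree stays essentially empty until Cinder creates a volume and it gets attached:

# Quick sanity check of the iSCSI target configuration
[root@compute ~]# targetcli ls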

4、Verify the Cinder services

# Method 1: list the volume services
# Shows the status of each module of the Cinder service
[root@controller ~]# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller  | nova | enabled | up    | 2022-11-18T12:15:46.000000 |
| cinder-volume    | compute@lvm | nova | enabled | up    | 2022-11-18T12:15:43.000000 |
+------------------+-------------+------+---------+-------+----------------------------+

# Method 2: check the volume status in the Dashboard
# 1. A Volumes entry appears in the left navigation bar
# 2. The project overview now shows three pie charts: volumes, volume snapshots, and volume storage

5、Create a volume with Cinder

(1)Create a volume from the command line

The command must be run on the controller node.

[root@controller ~]# openstack volume create --size 8 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-11-25T06:26:14.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 690449e4-f950-4949-a0d4-7184226a2447 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 8                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | f4f16d960e0643d7b5a35db152c87dae     |
+---------------------+--------------------------------------+

[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| 690449e4-f950-4949-a0d4-7184226a2447 | volume1 | available |    8 |             |
+--------------------------------------+---------+-----------+------+-------------+
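
A volume becomes useful once it is attached to an instance. Assuming an instance named vm1 already exists (none is created in this section, so the name is hypothetical), attaching would look like this:

# Hypothetical: attach volume1 to an existing instance named vm1
[root@controller ~]# openstack server add volume vm1 volume1
# "openstack volume list" would then show vm1 in the "Attached to" column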

(2)Create a volume with the Dashboard

In the left menu, click Project ——》 Volumes ——》 Volumes to open the volumes page.
The volume1 created from the command line is visible there.
Click "Create Volume", enter a name and a size, keep the other defaults, and create the new volume.
