Installing OpenStack (Ocata) with kolla-ansible
Basic Deployment
Base Environment
Role | Operating System | Hardware |
---|---|---|
Deploy | CentOS 7 Server | Disk: 40GB; RAM: 8GB; NICs: ens3 (internal), ens4 (external) |
Sched | CentOS 7 Server | Disk: 40GB; RAM: 8GB; NICs: ens3 (internal), ens4 (external) |
Nova | CentOS 7 Server | Disk: 40GB; RAM: 8GB; NIC: ens3 (internal); CPU with nested virtualization enabled |
Network Configuration
Hostname | Network Addresses | Role |
---|---|---|
deploy | 4.0.0.10/24 (internal), 192.168.200.10/24 (external) | Deploy |
sched | 4.0.0.11/24 (internal), 192.168.200.11/24 (external) | Sched |
nova | 4.0.0.12/24 (internal), 192.168.200.12/24 (external) | Nova |
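For reference, a static address on CentOS 7 is configured through an ifcfg file; the sketch below covers only the Sched node's internal interface ens3 and is an illustration to adapt per node and interface.
# vim /etc/sysconfig/network-scripts/ifcfg-ens3
TYPE=Ethernet
NAME=ens3
DEVICE=ens3
ONBOOT=yes
BOOTPROTO=none
IPADDR=4.0.0.11
PREFIX=24
# systemctl restart network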
Deploy Node Base Configuration
Install pip
# yum install epel-release
# yum install python-pip
# pip install -U pip
Install pip build dependencies
# yum install python-devel libffi-devel gcc openssl-devel
Install Ansible
# pip install -U ansible
Install Docker
# tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
# yum makecache fast
# yum install -y docker-engine-1.12.0
Configure Docker
### Configure a registry mirror
# mkdir -p /etc/docker
# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# systemctl daemon-reload
# systemctl restart docker
# systemctl enable docker
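To confirm that the daemon picked up the mirror configuration, a quick check is to inspect the daemon info (the exact output layout varies with the Docker version):
# docker --version
# docker info | grep -A 1 "Registry Mirrors"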
Install kolla-ansible
# git clone http://git.trystack.cn/openstack/kolla-ansible
# cd kolla-ansible
# git checkout stable/ocata
# pip install .
# cp -r etc/kolla /etc
### Copy the inventory files to the directory from which the kolla-ansible commands will be run later
# cp ansible/inventory/* ~/
Configure kolla-ansible
### Set the network interface options
# vim /etc/kolla/globals.yml
kolla_internal_vip_address: "4.0.0.9" ### pick an unused address on the internal network
keepalived_virtual_router_id: "9" ### same as the last octet of kolla_internal_vip_address, to avoid interference from other OpenStack deployments on the same internal network
network_interface: "ens3"
neutron_external_interface: "ens4"
openstack_logging_debug: "True"
nova_console: "spice"
# kolla-genpwd
# vim /etc/kolla/passwords.yml
keystone_admin_password: admin
Disable the firewall and SELinux
# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# vim /etc/selinux/config
SELINUX=disabled
Passwordless SSH login
### Run on the Deploy node
# ssh-keygen -t rsa
### Run on the Sched and Nova nodes
# scp root@4.0.0.10:~/.ssh/id_rsa.pub ./
# cat id_rsa.pub >> ~/.ssh/authorized_keys
# chmod 600 ~/.ssh/authorized_keys
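Alternatively, ssh-copy-id can be used from the Deploy node, and the passwordless login should be verified before continuing (a sketch; assumes password authentication is still enabled on Sched and Nova):
### Run on the Deploy node
# ssh-copy-id root@4.0.0.11
# ssh-copy-id root@4.0.0.12
### Verify passwordless login
# ssh root@4.0.0.11 hostname
# ssh root@4.0.0.12 hostname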
Obtain the Docker images
- Use the official Kolla image repository
# vim /etc/kolla/globals.yml
kolla_install_type: "source"
openstack_release: "ocata"
docker_namespace: "kolla"
- Build the images manually
# git clone https://github.com/openstack/kolla
# cd kolla
# git checkout stable/ocata
# pip install tox
# tox -e genconfig
# cp -r etc/kolla /etc/
# vim /etc/kolla/kolla-build.conf
push = true
namespace = kolla
registry = 4.0.0.10:4000
install_type = source
tag = ocata
# kolla-build --config-file=/etc/kolla/kolla-build.conf
### Corresponding kolla-ansible configuration
# vim /etc/kolla/globals.yml
docker_registry: "4.0.0.10:4000"
docker_namespace: "kolla"
openstack_release: "ocata"
kolla_install_type: "source"
- Download the prebuilt image tarball
# wget http://tarballs.openstack.org/kolla/images/centos-source-registry-ocata.tar.gz
### Corresponding kolla-ansible configuration
# vim /etc/kolla/globals.yml
docker_registry: "4.0.0.10:4000"
docker_namespace: "lokolla"
openstack_release: "4.0.2"
kolla_install_type: "source"
Set up a local Docker registry
# docker run -d -v /opt/registry:/var/lib/registry -p 4000:5000 --restart=always --name registry registry:2
### The downloaded image tarball approach is used here
# wget http://tarballs.openstack.org/kolla/images/centos-source-registry-ocata.tar.gz
# tar zxf centos-source-registry-ocata.tar.gz -C /opt/registry/
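To confirm that the local registry is serving the extracted images, the registry's v2 catalog endpoint can be queried (a quick check against the registry address used above):
# curl http://4.0.0.10:4000/v2/_catalog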
Set up an HTTP proxy server (optional step)
Set up the HTTP proxy on the Deploy node
# yum install squid -y
### Edit the configuration file: remove the default IP and port restrictions, then add the following rule
# vim /etc/squid/squid.conf
http_access allow all
### Start the service and enable it at boot
# systemctl restart squid
# systemctl enable squid
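The proxy can be verified from any node before pointing yum at it (a quick check; any reachable HTTP URL works):
# curl -I -x http://4.0.0.10:3128 http://mirror.centos.org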
### Modify the Ansible playbook
# vim /usr/share/kolla-ansible/ansible/kolla-host.yml
environment:
  https_proxy: http://4.0.0.10:3128/
  http_proxy: http://4.0.0.10:3128/
Configure the proxy on the Sched/Nova nodes
### Set the yum proxy
# vim /etc/yum.conf
proxy=http://4.0.0.10:3128
# yum makecache
Edit the inventory file
### Run on the Deploy node
# vim ~/multinode
[control]
4.0.0.11
[network]
4.0.0.11
[compute]
4.0.0.12
[monitoring]
4.0.0.11
[storage]
4.0.0.12
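Before bootstrapping, connectivity to every host in the inventory can be checked with Ansible's ping module (a quick sanity check run from the Deploy node):
# ansible -i multinode all -m ping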
### Prepare the base environment on the Sched and Nova hosts and install the required packages
# kolla-ansible -i multinode bootstrap-servers
Deploy OpenStack
# kolla-ansible prechecks -i multinode
# kolla-ansible deploy -i multinode
Verify the OpenStack installation
# kolla-ansible post-deploy -i multinode
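post-deploy writes the admin credentials to /etc/kolla/admin-openrc.sh. A minimal smoke test, assuming the OpenStack client is installed on the Deploy node, is to source that file and list the registered services:
# pip install python-openstackclient
# . /etc/kolla/admin-openrc.sh
# openstack service list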
Upgrade OpenStack
### Change the image version
# vim /etc/kolla/globals.yml
openstack_release: "4.0.3"
# kolla-ansible pull -i multinode
# kolla-ansible upgrade -i multinode
Tear down the environment
### This removes all containers and volumes
# kolla-ansible destroy -i multinode --yes-i-really-really-mean-it
Optional Features
Ceph Deployment
Edit the configuration files
# vim multinode
[storage]
4.0.0.12
# vim /etc/kolla/globals.yml
enable_ceph: "yes"
enable_ceph_rgw: "yes"
enable_cinder: "yes"
### both can also be set to "yes" at the same time
glance_backend_file: "no"
glance_backend_ceph: "yes"
# mkdir -p /etc/kolla/config
# tee /etc/kolla/config/ceph.conf <<-'EOF'
[global]
osd pool default size = 1
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128
EOF
Prepare the Nova node
### Add a disk to the node (IDE bus, qcow2 format)
# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
# parted /dev/sda -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
# parted /dev/sda print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 10.7GB 10.7GB KOLLA_CEPH_OSD_BOOTSTRAP
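Once the Ceph-enabled deployment has finished, cluster health can be checked from inside the monitor container (a quick check; assumes Kolla's monitor container is named ceph_mon):
# docker exec ceph_mon ceph -s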
Troubleshooting
Deployment Issues
Ceph deployment fails
### Symptom
TASK [ceph : Fetching Ceph keyrings] ********************************************************************************************************************************************************
[WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ (ceph_files_json.stdout | from_json).changed }}
fatal: [92.0.0.11]: FAILED! => {"failed": true, "msg": "The conditional check '{{ (ceph_files_json.stdout | from_json).changed }}' failed. The error was: No JSON object could be decoded"}
### Solution
### Run on the OSD node: list the mounted Ceph disks
# ds4ft=`mount | grep ceph | awk '{print $1}' | tr -d '[:digit:]' | sort -u`
### Run on the OSD node: unmount them
# echo $ds4ft | tr ' ' '\n' | xargs -i umount {}
### Run on the OSD node: reformat the disks
# echo $ds4ft | tr ' ' '\n' | xargs -i mkfs.xfs -f {}
### Run on the OSD node: recreate the partition label
# echo $ds4ft | tr ' ' '\n' | xargs -i parted {} -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
### Run on the MON and OSD nodes: remove all Ceph containers and configuration files
# docker ps --filter "label=kolla_version" --format "{{.Names}}" -a | grep -E "ceph|osd" | xargs -i docker rm -f {}
# rm -rf /var/lib/kolla/var/lib/ceph /var/lib/kolla/etc/ceph /etc/kolla/ceph-*
### Run on the MON node
# docker volume rm ceph_mon_config
MariaDB deployment fails
### Symptom
TASK: [mariadb | Creating haproxy mysql user] *********************************
......
stdout: localhost | FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to connect to database, check login_user and login_password are correct or ~/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'mick-workstation' (using password: YES)\")"
}
msg: Task failed as maximum retries was encountered
### Solution
# docker rm mariadb
# rm -rf /var/lib/docker/volumes/mariadb/_data/*
Timeout waiting for the VIP
### Symptom
The VIP is never acquired
### Solution
### Attempt 1: change kolla_internal_vip_address or keepalived_virtual_router_id so the VIP is not claimed by another kolla deployment on the same network
# vim /etc/kolla/globals.yml
kolla_internal_vip_address: "4.0.0.5"
keepalived_virtual_router_id: "5"
# kolla-ansible -i multinode deploy --tags="haproxy"
### Attempt 2: disable NetworkManager on the host
# docker rm -f haproxy keepalived
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# kolla-ansible -i multinode deploy --tags="haproxy"
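Whether the VIP is currently bound can be checked on the controller that should hold it (a quick check using the interface and VIP from this setup):
# ip addr show ens3 | grep 4.0.0.9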
The VIP disappears on its own
### Symptom
The acquired VIP disappears after a while
### Solution
### Attempt 1: check whether the IP address is already in use
# ping 4.0.0.9
### Attempt 2: check whether the services monitored by keepalived are healthy
# docker exec -it keepalived bash
# ./check_alive.sh
### An exit status of 0 means the services monitored by keepalived are healthy
# echo $?
### Attempt 3: following a post at http://www.cnblogs.com/ayao/p/keepalived-loss-vip.html, set BOOTPROTO=none on the interface the VIP is bound to, to rule out DHCP-induced VIP loss (the interface was configured as static when the problem occurred, so DHCP should not be the cause, but the VIP stopped disappearing afterwards)
bootstrap-servers fails
### Symptom
docker-engine fails to install
### Solution
### Install it manually on the node; a download timeout can also cause this failure
# yum clean all
# yum makecache fast
# yum install docker-engine-1.12.0 -y
kolla-ansible errors out
### Symptom
Ansible 2.4.0: No test named 'equalto'
### Solution
# pip install --upgrade Jinja2
kolla-ansible errors out
### Symptom
ansible "msg": "shade is required for this module"
### Solution
# pip install shade
Image pull fails
### Symptom
Error response from daemon: Get https://4.0.0.10:4000/v1/_ping: http: server gave HTTP response to HTTPS client
Network timed out while trying to connect to http://4.0.0.10:4000/v1/repositories/lokolla/centos-source-fluentd/images. You may want to check your internet connection or if you are behind a proxy.
### Solution
### Attempt 1: downgrade Docker
# yum remove docker-engine
# yum install docker-engine-1.12.0 -y
### Attempt 2: edit the Docker service drop-in file
# vim /etc/systemd/system/docker.service.d/kolla.conf
[Service]
MountFlags=shared
ExecStart=
ExecStart=/usr/bin/docker daemon --insecure-registry 4.0.0.10:4000
# systemctl daemon-reload
# systemctl restart docker
# ps -ef|grep docker
root 1294 1 0 11:36 ? 00:00:01 /usr/bin/dockerd --insecure-registry 4.0.0.10:4000
Failed to generate the /etc/hosts file
### Symptom
TASK [baremetal : Generate /etc/hosts for all of the nodes] *********************************************************************************************************************************
fatal: [4.0.0.11]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute u'ansible_ens3'\n\nThe error appears to have been in '/usr/share/kolla-ansible/ansible/roles/baremetal/tasks/pre-install.yml': line 40, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generate /etc/hosts for all of the nodes\n ^ here\n"}
to retry, use: --limit @/usr/share/kolla-ansible/ansible/kolla-host.retry
### Solution
- Check whether Python failed to install on some node, which prevents the gather-facts step from running and causes this failure
- Check whether some node's interface name does not match the configuration (see the check below)
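The facts gathered for a specific interface can be inspected directly with Ansible's setup module (a quick check run from the Deploy node; adjust the interface name as needed):
# ansible -i multinode all -m setup -a 'filter=ansible_ens3'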
Usage Issues
Instance creation fails
### Symptom
Creating an instance with a volume fails with "Block device mapping is invalid"
- The nova-compute log shows: VolumeNotCreated: Volume f103b3f3-d0ff-4a2b-9a5e-4b7ea5a9abdc did not finish being created even after we waited 3 seconds or 2 attempts. And its status is error.
- The cinder-volume log shows: Volume group "cinder-volumes" not found
### Solution
### Listing the volume groups confirms that "cinder-volumes" is indeed missing; this volume group has to be created manually
# vgdisplay
# dd if=/dev/zero of=./disk.img count=4096 bs=1MB
### Find an unused loop device
# losetup -f
# losetup /dev/loop2 disk.img
# pvcreate /dev/loop2
# vgcreate cinder-volumes /dev/loop2
### Restart the container, then check the cinder-volume log to confirm that "cinder-volumes" is now found
# docker restart cinder-volumes
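Note that a loop device created this way does not survive a reboot. One way to make it persistent is to re-attach it at boot via rc.local (a sketch; assumes disk.img was created under /root):
# vim /etc/rc.d/rc.local
losetup /dev/loop2 /root/disk.img
# chmod +x /etc/rc.d/rc.local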
Instance creation fails
### Symptom
Creating an instance with a volume fails with "No valid host was found"
- The nova-compute log shows: 'iscsiadm -m node -T iqn.2010-10.org.openstack:volume-bbbccab7-bdd7-4086-8d0e-e14898439131 -p 127.0.0.1:3260' failed
### Solution
### The log shows the iSCSI target login failed because the IP address is wrong
# vim /etc/kolla/cinder-volume/cinder.conf
my_ip = 4.0.0.12
# docker restart cinder-volumes
Instance creation fails
### Symptom
- The nova-compute log shows
ERROR nova.compute.manager [instance: 3af11e19-b4f8-452a-8f3d-3d659be050bd] libvirtError: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
- The libvirtd log shows
2017-11-01 18:46:09.362+0000: 30348: error : virDBusCall:1570 : error from service: CanSuspend: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
### Solution
Checking the node shows that SELinux is enabled; disabling SELinux resolves the issue.
Instance creation fails
### Symptom
- The nova-compute log shows
ERROR nova.image.glance [req-3d1ff662-1b82-4cf9-8aba-28644c8ba88b ded6f7f63e6743b4ac88213d5d2df5ce dded963ced3946b585c92d0684277eec - - -] Error writing to /var/lib/nova/instances/_base/bc9f28a296e64237d4205a6ea278c3e51c083ba0.part: 'NoneType' object is not iterable
ERROR nova.compute.manager [instance: 2e6b91e4-96f3-41ae-98d3-4d72f1b2cd37] File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 586, in download
ERROR nova.compute.manager [instance: 2e6b91e4-96f3-41ae-98d3-4d72f1b2cd37] for chunk in image_chunks:
ERROR nova.compute.manager [instance: 2e6b91e4-96f3-41ae-98d3-4d72f1b2cd37] TypeError: 'NoneType' object is not iterable
- The glance-api log shows
WARNING glance.location [req-27feac8f-9cfb-4891-91fd-913bcb7a9f1f ded6f7f63e6743b4ac88213d5d2df5ce dded963ced3946b585c92d0684277eec - default default] Get image 856dbf39-8821-4906-a68c-e78f77d88b27 data failed: Image /var/lib/glance/images/856dbf39-8821-4906-a68c-e78f77d88b27 not found.
ERROR glance.location [req-27feac8f-9cfb-4891-91fd-913bcb7a9f1f ded6f7f63e6743b4ac88213d5d2df5ce dded963ced3946b585c92d0684277eec - default default] Glance tried all active locations to get data for image 856dbf39-8821-4906-a68c-e78f77d88b27 but all have failed.
### Solution
The /var/lib/glance/images directory on the Glance node is indeed empty. Restart the horizon container, delete the image, then re-upload it under a new name.
Instance creation fails
### Symptom
RPC timeouts
### Solution
The RabbitMQ cluster configuration uses hostnames, so in an HA environment the Controller hosts must not share the same hostname.
Instance creation fails
### Symptom
failed to connect to the hypervisor
### Solution
### Check whether nested virtualization is enabled on the Nova node
# cat /sys/module/kvm_intel/parameters/nested
N
### If it prints N, shut down the VM, change its CPU settings, and enable nested virtualization (see the sketch below)
# shutdown now
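Nested virtualization has to be enabled on the physical host that runs the Nova VM, roughly as follows (a sketch for Intel CPUs; all VMs on the host must be shut down before reloading the module, and the VM's CPU mode must expose the vmx flag, e.g. host-passthrough):
# tee /etc/modprobe.d/kvm-nested.conf << EOF
options kvm_intel nested=1
EOF
# modprobe -r kvm_intel
# modprobe kvm_intel
# cat /sys/module/kvm_intel/parameters/nested
Y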
MariaDB container fails to start
### Symptom
[ERROR] WSREP: failed to open gcomm backend connection: 131: invalid UUID: 00000000 (FATAL)
at gcomm/src/pc.cpp:PC():271
### Solution
# rm -rf /var/lib/docker/volumes/mariadb/_data/gvwstate.dat
# docker restart mariadb
Recovering the MariaDB cluster after an outage
### Symptom
The MariaDB service is down
### Solution
########## Stop all mariadb containers ##########
# docker stop mariadb
########## Find the mariadb host that was shut down last. If unsure, pick one at random or choose by the seqno in /var/lib/docker/volumes/mariadb/_data/grastate.dat (higher takes priority), then set the safe_to_bootstrap parameter in its grastate.dat ##########
# vim /var/lib/docker/volumes/mariadb/_data/grastate.dat
safe_to_bootstrap: 1
########## Change the mariadb container's start command, start the container, and check the log to make sure the service comes up correctly ##########
# vim /etc/kolla/mariadb/config.json
"command": "/usr/bin/mysqld_safe --wsrep-new-cluster",
# docker start mariadb
# tail -200f /var/lib/docker/volumes/kolla_logs/_data/mariadb/mariadb.log
########## Start the mariadb containers on the other nodes ##########
# docker start mariadb
# tail -200f /var/lib/docker/volumes/kolla_logs/_data/mariadb/mariadb.log
########## Once the cluster is running normally, restore the config.json that was modified earlier (so that every mariadb container in the cluster is equal again) ##########
# vim /etc/kolla/mariadb/config.json
"command": "/usr/bin/mysqld_safe",
# docker stop mariadb
# docker start mariadb
# tail -200f /var/lib/docker/volumes/kolla_logs/_data/mariadb/mariadb.log
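Cluster membership can be confirmed from any node by checking the Galera cluster size (a sketch; assumes the root password is the database_password value from /etc/kolla/passwords.yml):
# docker exec mariadb mysql -uroot -p<database_password> -e "SHOW STATUS LIKE 'wsrep_cluster_size'"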
Horizon is unreachable
### Symptom
[Tue May 29 09:58:25.056236 2018] [core:error] [pid 20] [client 192.168.0.1:51277] Script timed out before returning headers: django.wsgi
[Tue May 29 09:58:28.017141 2018] [core:error] [pid 81] [client 192.168.0.105:48074] End of script output before headers: django.wsgi, referer: http://47.98.113.179:8011/
### Solution
### Add the WSGIApplicationGroup directive
# vim /etc/httpd/conf.d/openstack-dashboard.conf
...
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}
...
Redis container is unhealthy
### Symptom
5:M 29 May 17:31:11.072 # Short read or OOM loading DB. Unrecoverable error, aborting now.
5:M 29 May 17:31:11.072 # Internal error in RDB reading function at rdb.c:1428 -> Unexpected EOF reading RDB file
### Solution
# rm -rf /var/lib/docker/volumes/redis/_data/dump.rdb
# docker restart redis
Insufficient permissions to run pip inside the nova_compute container
### Symptom
pip commands cannot be executed inside the container
### Solution
### Run inside the container
# cat /etc/nova/rootwrap.conf | grep filters_path
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
### Run on the host
# tee pip.filters << EOF
[Filters]
pip: CommandFilter, pip, root
EOF
# docker cp pip.filters nova_compute:/usr/share/nova/rootwrap
# sudo nova-rootwrap /etc/nova/rootwrap.conf pip install -U pip
OVS commands fail
### Symptom
# ovs-appctl ofproto/trace br-tun dl_vlan=3
2017-06-05T08:37:32Z|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-vswitchd.pid: open: No such file or directory
ovs-appctl: cannot read pidfile "/var/run/openvswitch/ovs-vswitchd.pid" (No such file or directory)
### Solution
### Add a startup argument
# vim /etc/kolla/openvswitch-vswitchd/config.json
--pidfile=/var/run/openvswitch/ovs-vswitchd.pid
# docker restart openvswitch_vswitchd