OpenStack Lab Notes

Prepared by: 全心全意

OpenStack: provides a reliable cloud deployment solution with good scalability.
Put simply, OpenStack is a cloud operating system, or a cloud management platform. It does not provide cloud services itself; it only provides a platform for deploying and managing them.
Architecture diagram:
http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Mf6rnJXoRGXLpebCzPTUfETy68mVidyW.VTA2AbQxE0!/b/dDUBAAAAAAAA&bo=swFuAQAAAAARB.0!&rf=viewer_4

	Keystone, the core module of OpenStack, provides authentication for Nova (compute), Glance (images), Swift (object storage), Cinder (block storage), Neutron (networking), and Horizon (the dashboard).
	Glance: the OpenStack image service component. It mainly provides storage, query, and retrieval of virtual machine image files; by exposing a virtual disk image catalog and repository, it supplies images for Nova's virtual machines. Two API versions, v1 and v2, currently exist.

Minimum physical hardware
	Controller node:
		1-2 CPUs
		8 GB RAM
		2 NICs
	Compute node:
		2-4 CPUs
		8 GB RAM
		2 NICs
	Block storage node:
		1-2 CPUs
		4 GB RAM
		1 NIC
		at least 2 disks
	Object storage node:
		1-2 CPUs
		4 GB RAM
		1 NIC
		at least 2 disks

Network topology (in this lab the management, storage, and local networks are merged):
http://m.qpic.cn/psb?/V12uCjhD3ATBKt/r30ELjijnHAaYX*RMZe4vhwVNcix4zUb2pNnovlYZ7E!/b/dL8AAAAAAAAA&bo=xgKqAQAAAAADB00!&rf=viewer_4


Installation
Controller node: quan		172.16.1.211	172.16.1.221
Compute node: quan1		172.16.1.212	172.16.1.222
Storage node: storage	172.16.1.213	172.16.1.223
Object storage node 1: object01	172.16.1.214	172.16.1.224
Object storage node 2: object02	172.16.1.215	172.16.1.225



Preparation:
	Disable the firewall
	Disable SELinux
	Disable NetworkManager

Install the NTP service:
	yum -y install chrony    (on all hosts)
	On the controller, edit the configuration file to allow hosts on the subnet to query it:
	allow 172.16.1.0/24

	systemctl enable chronyd.service
	systemctl start chronyd.service

	On the other nodes:
	vi /etc/chrony.conf
	server quan iburst

	# Note: use the original CentOS network repositories
	yum install epel-release
	yum install centos-release-openstack-queens

	yum install openstack-selinux
	yum install python-openstackclient

Install the database
	Install the database on the controller (quan) node:
	yum install -y mariadb mariadb-server python2-PyMySQL
	vi /etc/my.cnf.d/openstack.cnf
	[mysqld]
	bind-address=172.16.1.211
	default-storage-engine=innodb
	innodb_file_per_table=on
	max_connections=4096
	collation-server=utf8_general_ci
	character-set-server=utf8

	Start the database and enable it at boot:
	systemctl enable mariadb.service && systemctl start mariadb.service

	Secure the installation (set the root password, etc.):
	mysql_secure_installation


Install the message queue on the controller (quan) node (port 5672)
	yum install rabbitmq-server -y

	Start the service and enable it at boot:
	systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service

	Add the openstack user (note the subcommand is add_user):
	rabbitmqctl add_user openstack openstack

	Grant the openstack user configure, write, and read permissions:
	rabbitmqctl set_permissions openstack ".*" ".*" ".*"


Install the memcached cache on the controller (quan) node (port 11211)
	yum -y install memcached python-memcached

	vi /etc/sysconfig/memcached
	OPTIONS="-l 127.0.0.1,::1,quan"

	Start the service and enable it at boot:
	systemctl enable memcached.service && systemctl start memcached.service


Install the etcd service on the controller (quan) node (a key-value store)
	yum -y install etcd

	vi /etc/etcd/etcd.conf
	#[Member]
	ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
	ETCD_LISTEN_PEER_URLS="http://quan:2380"
	ETCD_LISTEN_CLIENT_URLS="http://quan:2379"
	ETCD_NAME="quan"
	#[Clustering]
	ETCD_INITIAL_ADVERTISE_PEER_URLS="http://quan:2380"
	ETCD_ADVERTISE_CLIENT_URLS="http://quan:2379"
	ETCD_INITIAL_CLUSTER="quan=http://quan:2380"
	ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
	ETCD_INITIAL_CLUSTER_STATE="new"

	Start the service and enable it at boot:
	systemctl enable etcd.service && systemctl start etcd.service

Keystone component
Keystone, the core module of OpenStack, provides authentication for Nova (compute), Glance (images), Swift (object storage), Cinder (block storage), Neutron (networking), and Horizon (the dashboard).
Basic concepts:
	User: a person or program that accesses OpenStack through Keystone. A User is verified by credentials, such as a password or API keys.
	Tenant: a collection of accessible resources within the various services. For example, in Nova a tenant can be a set of machines, in Swift and Glance a set of image stores, and in Neutron a set of network resources. A User is always bound to one or more tenants.
	Role: a set of resource permissions a user may exercise, for example on virtual machines in Nova or images in Glance. Users can be added to any global or per-tenant role. With a global role, the user's permissions apply across all tenants; with a per-tenant role, they apply only within that tenant.
	Service: a service such as Nova, Glance, or Swift. Using the User, Tenant, and Role concepts, a service can decide whether the current user may access its resources. But when a user tries to access a service within their tenant, they must know whether the service exists and how to reach it; distinct names are usually used to distinguish the services.
	Endpoint: an access point exposed by a service.
	Token: the key for accessing resources, returned after Keystone verifies the user. Subsequent interactions with the other services only need to carry the Token; each Token has an expiry time.
	
	How the concepts relate
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/PJAecZuZ1C44VKDjcsKLYotu5KOz3RNZwumR07nBIug!/b/dDUBAAAAAAAA&bo=BAIsAQAAAAADBwk!&rf=viewer_4
	1. A tenant manages a group of users (people or programs)
	2. Each user has their own credentials: username + password, username + API key, or some other credential
	3. Before accessing other resources (compute, storage), a user presents their credentials to the Keystone service and receives verification information (mainly a Token) and service information (the service catalog and its endpoints)
	4. With the Token, the user can then access the resources
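	The four steps above can be sketched in a few lines. This is a minimal illustrative model, not the real Keystone API: the user and catalog data are made up, and real tokens carry project/role scoping as well.

```python
import secrets
import time

TOKEN_TTL = 3600                                    # every token has an expiry

USERS = {"demo": "openstack"}                       # user -> password (illustrative)
CATALOG = {"glance": "http://quan:9292",            # service catalog: name -> endpoint
           "nova": "http://quan:8774/v2.1"}
_tokens = {}                                        # token -> (user, expires_at)

def authenticate(user, password):
    """Step 3: exchange credentials for a token plus the service catalog."""
    if USERS.get(user) != password:
        raise PermissionError("bad credentials")
    token = secrets.token_hex(16)
    _tokens[token] = (user, time.time() + TOKEN_TTL)
    return token, CATALOG

def validate(token):
    """Step 4: a service checks the token before serving the request."""
    entry = _tokens.get(token)
    return entry is not None and entry[1] > time.time()

token, catalog = authenticate("demo", "openstack")  # user logs in once...
assert validate(token)                              # ...then reuses the token
```

	The point of the indirection: services never see the password, only a short-lived token they can verify against Keystone.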
	
Keystone workflow in OpenStack:
http://m.qpic.cn/psb?/V12uCjhD3ATBKt/ptROtuhyzh7Mq3vSVz3Ut1TtGDXuBbYf*WbN8UZdWDE!/b/dLgAAAAAAAAA&bo=igIRAgAAAAADB7k!&rf=viewer_4
	
Setting up Keystone
	Create the database
	mysql -uroot -popenstack
	create database keystone;
	grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'openstack';
	grant all privileges on keystone.* to 'keystone'@'%' identified by 'openstack';
	
	Install
	yum -y install openstack-keystone httpd mod_wsgi
	vi /etc/keystone/keystone.conf
	[database]
	connection = mysql+pymysql://keystone:openstack@quan/keystone	#database connection: user:password@host/database
	[token]
	provider=fernet

	Initialize the keystone database
	su -s /bin/sh -c "keystone-manage db_sync" keystone

	Initialize the fernet key repositories
	keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
	keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

	Create the Keystone service endpoints (this populates the endpoint table)
	keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://quan:35357/v3/ --bootstrap-internal-url http://quan:5000/v3/ --bootstrap-public-url http://quan:5000/v3/ --bootstrap-region-id RegionOne
	
	Configure the httpd service
	vi /etc/httpd/conf/httpd.conf
	ServerName quan
	
	Create the symlink
	ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
	
	Start the service and enable it at boot:
	systemctl enable httpd.service && systemctl start httpd.service
	
	Create the admin credentials file
	vim admin-openrc
	export OS_USERNAME=admin
	export OS_PASSWORD=openstack
	export OS_PROJECT_NAME=admin
	export OS_USER_DOMAIN_NAME=Default
	export OS_PROJECT_DOMAIN_NAME=Default
	export OS_AUTH_URL=http://quan:35357/v3
	export OS_IDENTITY_API_VERSION=3
	
	Load the admin credentials
	source admin-openrc
	
	Create domains/projects/users/roles
	Create the projects
	openstack project create --domain default --description "Service Project" service
	openstack project create --domain default --description "Demo Project" demo

	Create the demo user, setting its password when prompted
	openstack user create --domain default --password-prompt demo

	Create the user role
	openstack role create user

	Add the demo user to the user role
	openstack role add --project demo --user demo user
	
	
	Verify
	Unset the earlier environment variables
	unset OS_AUTH_URL OS_PASSWORD

	Run the following and enter the admin password:
	openstack --os-auth-url http://quan:35357/v3 \
	--os-project-domain-name Default \
	--os-user-domain-name Default \
	--os-project-name admin \
	--os-username admin token issue
	
	Run the following and enter the demo user's password:
	openstack --os-auth-url http://quan:5000/v3 \
	--os-project-domain-name Default \
	--os-user-domain-name Default \
	--os-project-name demo \
	--os-username demo token issue
	
	Create the OpenStack client environment scripts
	Create the admin credentials file
	vim admin-openrc
	export OS_USERNAME=admin
	export OS_PASSWORD=openstack
	export OS_PROJECT_NAME=admin
	export OS_USER_DOMAIN_NAME=Default
	export OS_PROJECT_DOMAIN_NAME=Default
	export OS_AUTH_URL=http://quan:35357/v3
	export OS_IDENTITY_API_VERSION=3	#identity service version
	export OS_IMAGE_API_VERSION=2	#image service version

	Create the demo user credentials file (unprivileged users authenticate against port 5000)
	vim demo-openrc
	export OS_USERNAME=demo
	export OS_PASSWORD=openstack
	export OS_PROJECT_NAME=demo
	export OS_USER_DOMAIN_NAME=Default
	export OS_PROJECT_DOMAIN_NAME=Default
	export OS_AUTH_URL=http://quan:5000/v3
	export OS_IDENTITY_API_VERSION=3	#identity service version
	export OS_IMAGE_API_VERSION=2	#image service version

	Load the admin credentials
	source admin-openrc
	Verify as admin
	openstack token issue

	Load the demo user credentials
	source demo-openrc
	Verify as the demo user
	openstack token issue
	

glance component
	Glance: the OpenStack image service component. It mainly provides storage, query, and retrieval of virtual machine image files; by exposing a virtual disk image catalog and repository, it supplies images for Nova's virtual machines. Two API versions, v1 and v2, currently exist.

	Glance architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/mkXPMrNM9RL.NizLwc22Vm*FHkAc2NWh9668JHk4zS0!/b/dLYAAAAAAAAA&bo=RQHZAAAAAAADB78!&rf=viewer_4

	Image service components
	glance-api: the external API interface that accepts image API requests. Default port 9292.
	glance-registry: stores, processes, and retrieves image metadata. Default port 9191.
	glance-db: backed by MySQL in OpenStack, it holds the image metadata, which glance-registry persists to the MySQL database.
	Image Store: stores the image files themselves. It connects to glance-api through the Store Backend interface; through this interface glance fetches image files from the Image Store and hands them to Nova for creating virtual machines.

	Through the Store Adapter, Glance supports multiple Image Store backends, including Swift, the local file system, S3, Sheepdog, RBD, and Cinder.

	Image formats supported by Glance
	raw: an unstructured image format
	vhd: a common virtual machine disk format, usable with VMware, Xen, VirtualBox, etc.
	vmdk: VMware's virtual machine disk format
	vdi: a virtual machine disk format supported by VirtualBox, QEMU, etc.
	qcow2: a QEMU-supported, dynamically growable disk format (used by default)
	aki: Amazon kernel image
	ari: Amazon ramdisk image
	ami: Amazon machine image

	Glance access permissions
	Public: usable by all tenants
	Private: usable only by the tenant that owns the image
	Shared: a non-public image can be shared with specified tenants, via the member-* operations
	Protected: the image cannot be deleted

	Image status values
	Queued: no image data has been uploaded yet; only the image metadata exists
	Saving: the image data is being uploaded
	Active: the normal, usable state
	Deleted/Pending_delete: the image is deleted / awaiting deletion
	Killed: the image metadata is invalid; the image awaits deletion
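	The status values above form a small state machine. The transition table below is an illustrative simplification (Glance's real state machine has a few more edges), but it captures the normal path queued → saving → active and the failure path into killed:

```python
# Allowed status transitions (simplified, illustrative).
TRANSITIONS = {
    "queued": {"saving", "active", "deleted"},    # metadata only, no data yet
    "saving": {"active", "killed", "deleted"},    # upload in progress
    "active": {"pending_delete", "deleted"},      # normal, usable image
    "killed": {"deleted"},                        # bad metadata, awaiting delete
    "pending_delete": {"deleted"},
    "deleted": set(),                             # terminal
}

def advance(status, target):
    """Return the new status, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[status]:
        raise ValueError(f"cannot go {status} -> {target}")
    return target
```

	For example, an upload that completes is saving → active, while re-uploading into an active image (active → saving) is rejected.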
	
Setting up glance
	Create the database
	mysql -uroot -popenstack
	create database glance;
	grant all privileges on glance.* to 'glance'@'localhost' identified by 'openstack';
	grant all privileges on glance.* to 'glance'@'%' identified by 'openstack';

	Create the glance user and give it the admin role in the service project
	source admin-openrc
	openstack user create --domain default --password-prompt glance  #enter its password

	openstack role add --project service --user glance admin

	openstack user list  #lists the created users

	Create the glance service and endpoints
	openstack service create --name glance --description "OpenStack Image" image

	openstack endpoint create --region RegionOne image public http://quan:9292
	openstack endpoint create --region RegionOne image internal http://quan:9292
	openstack endpoint create --region RegionOne image admin http://quan:9292

	Install the packages and configure
	yum -y install openstack-glance
	yum -y install openstack-glance
	
	vi /etc/glance/glance-api.conf
	[database]
	connection = mysql+pymysql://glance:openstack@quan/glance
	
	[keystone_authtoken]
	auth_uri=http://quan:5000
	auth_url=http://quan:35357
	memcached_servers=quan:11211
	auth_type=password
	project_domain_name=default
	user_domain_name=default
	project_name=service
	username = glance
	password = openstack
	
	[paste_deploy]
	flavor = keystone
	
	[glance_store]
	stores = file,http
	default_store = file
	filesystem_store_datadir = /var/lib/glance/images/
	
	
	vi /etc/glance/glance-registry.conf
	[database]
	connection = mysql+pymysql://glance:openstack@quan/glance
	[keystone_authtoken]
	auth_uri=http://quan:5000
	auth_url=http://quan:35357
	memcached_servers=quan:11211
	auth_type=password
	project_domain_name=default
	user_domain_name=default
	project_name=service
	username = glance
	password = openstack
	
	[paste_deploy]
	flavor = keystone
	
	Initialize the database
	su -s /bin/sh -c "glance-manage db_sync" glance

	Start the services and enable them at boot:
	systemctl enable openstack-glance-api.service openstack-glance-registry.service && systemctl start openstack-glance-api.service openstack-glance-registry.service

	Verify
	source admin-openrc

	Download a test image
	wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
	Create the image:
	openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

	List the existing images
	openstack image list

	Show an image's details
	openstack image show <image-id>
	

Nova component
	Nova: the most central component of OpenStack. Ultimately the other OpenStack components exist to serve Nova, which manages compute resources for VMs according to user demand.
	Nova architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/bKTJmZis5k..ds6fjUYXv8KDu9EzeaB4WYyV883uAq8!/b/dL8AAAAAAAAA&bo=*QE1AQAAAAADB.o!&rf=viewer_4

	Today's Nova consists of four core services: API, Compute, Conductor, and Scheduler, which communicate over AMQP. API is the HTTP entry point into Nova. Compute talks to the VMM (virtual machine monitor) to run virtual machines and manage their life cycle (usually one Compute service per host). Scheduler selects the most suitable node from the available pool on which to create a virtual machine instance. Conductor mainly mediates access to the database.
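	The AMQP decoupling above can be sketched with a plain in-process queue. This is a toy model, not the real oslo.messaging API: the point is only that nova-api publishes a message and a worker consumes it later, so the services never call each other directly.

```python
import queue

bus = queue.Queue()                 # stands in for the AMQP broker

def api_request(action, **kwargs):
    """nova-api side: publish a request onto the bus and return immediately."""
    bus.put({"action": action, **kwargs})

def compute_worker():
    """nova-compute side: drain the bus and handle each message in order."""
    handled = []
    while not bus.empty():
        msg = bus.get()
        handled.append(f"{msg['action']}:{msg['name']}")
    return handled

api_request("boot", name="vm1")     # producer does not wait for the consumer
api_request("boot", name="vm2")
assert compute_worker() == ["boot:vm1", "boot:vm2"]
```

	Because producer and consumer share only the queue, either side can be restarted or scaled out without the other noticing, which is exactly why Nova's services are wired through a broker.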
	
	Nova logical modules
	Nova API: an HTTP service that receives and handles HTTP requests from clients
	Nova Cell: the Cell subservice exists to make horizontal scaling and large-scale deployment easier without increasing the complexity of the database and the RPC message broker. It adds region-level scheduling on top of the host scheduling done by Nova Scheduler.
	Nova Cert: manages certificates, for AWS compatibility. AWS provides a full set of infrastructure and application services that let almost any application run in the cloud.
	Nova Compute: the most central service in Nova, implementing virtual machine management: creating, starting, pausing, stopping, and deleting virtual machines on compute nodes, migrating them between compute nodes, enforcing VM security controls, and managing VM disk images and snapshots.
	Nova Conductor: an RPC service that mainly provides database access. In earlier OpenStack releases, many database query methods were defined inside the Nova Compute subservice. But Nova Compute must run on every compute node, so compromising any one compute node granted full access to the database. With Nova Conductor, database access control can be enforced in one place.
	Nova Scheduler: the Nova scheduling subservice. When a client asks Nova to create a virtual machine, it decides which node the virtual machine is created on.

	Nova Console, Nova Consoleauth, and Nova VNCProxy are the console subservices; they let clients reach a virtual machine instance's console remotely through a proxy server.
	
	nova's VM boot process:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/iy2efxOLLowl3RvoIcZ6d7KNZ3jcdOI7zY5XroEBPVM!/b/dDQBAAAAAAAA&bo=xQJnAgAAAAADJ6A!&rf=viewer_4
	
	
	Nova Scheduler filter types
	There is more than one way to pick the host a virtual machine runs on; nova mainly supports these three:
	ChanceScheduler (random scheduler): randomly picks among the nodes whose nova-compute service is running normally
	FilterScheduler (filter scheduler): picks the best node according to the specified filter conditions and weights
	CachingScheduler: a variant of FilterScheduler that additionally caches host resource information in local memory, refreshing it from the database with a periodic background task.

	Nova Scheduler workflow diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/LpB5fYBuLUgMASXWrH*Emw5qwkWHKM7slpof.lF21DY!/b/dEYBAAAAAAAA&bo=OQODAQAAAAADB5o!&rf=viewer_4
	FilterScheduler first applies the configured Filters to find eligible hosts (for example, memory usage below 50%), then recomputes weights over the survivors and ranks them to pick the best one. The specific filters include:
	1) RetryFilter: retry filtering. Suppose Host1, Host2, and Host3 pass filtering and Host1, with the highest weight, is chosen; if the VM fails to land on Host1 for some reason, nova-scheduler schedules again and Host1, having failed, is excluded. The number of retries can be set with scheduler_max_attempts=3.
	2) AvailabilityZoneFilter: availability zone filtering, which provides fault tolerance and isolation. Compute nodes can be placed into a pre-created AZ, and an AZ can be specified when creating a VM so the VM lands on a host in that zone.
	3) RamFilter: memory filtering. Creating a VM selects a flavor; hosts that cannot satisfy the flavor's memory requirement are filtered out. Overcommit setting: ram_allocation_ratio=3 (with 16 GB of physical memory, OpenStack treats the node as having 48 GB).
	4) CoreFilter: CPU core filtering. Hosts that cannot satisfy the flavor's core requirement are filtered out. CPU overcommit setting: cpu_allocation_ratio=16.0 (a 24-core compute node is treated as having 384 cores).
	5) DiskFilter: disk capacity filtering. Hosts that cannot satisfy the flavor's disk requirement are filtered out. Disk overcommit setting: disk_allocation_ratio=1.0 (increasing disk overcommit is not recommended).
	6) ComputeFilter: nova-compute service filtering. When creating a VM, hosts whose nova-compute service is unhealthy are filtered out.
	7) ComputeCapabilitiesFilter: filters on compute node capabilities, e.g. x86_64.
	8) ImagePropertiesFilter: matches compute nodes against properties of the chosen image; for example, to make an image run only on a KVM hypervisor, specify the "Hypervisor Type" property.
	9) ServerGroupAntiAffinityFilter: spreads instances across different nodes as far as possible. For example, with vm1, vm2, vm3 and compute nodes Host1, Host2, Host3:
		Create a server group "group-1" with the anti-affinity policy
		nova server-group-create --policy anti-affinity group-1
		nova boot --image IMAGE_ID --flavor 1 --hint group=group-1 vm1
		nova boot --image IMAGE_ID --flavor 1 --hint group=group-1 vm2
		nova boot --image IMAGE_ID --flavor 1 --hint group=group-1 vm3
	10) ServerGroupAffinityFilter: packs instances onto the same node as far as possible. For example, with vm1, vm2, vm3 and compute nodes Host1, Host2, Host3:
		Create a server group "group-2" with the affinity policy
		nova server-group-create --policy affinity group-2
		nova boot --image IMAGE_ID --flavor 1 --hint group=group-2 vm1
		nova boot --image IMAGE_ID --flavor 1 --hint group=group-2 vm2
		nova boot --image IMAGE_ID --flavor 1 --hint group=group-2 vm3
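	The filter-then-weigh pipeline can be sketched in a few lines. This is an illustrative model with only a RamFilter honoring ram_allocation_ratio (as in item 3 above) and a single "most free RAM" weigher; the host dictionaries are made-up data, not a real nova API.

```python
RAM_ALLOCATION_RATIO = 3.0   # 16 GB physical is treated as 48 GB, as noted above

def ram_filter(hosts, flavor_ram):
    """Keep only hosts whose oversubscribed free RAM can satisfy the flavor."""
    return [h for h in hosts
            if h["ram_total"] * RAM_ALLOCATION_RATIO - h["ram_used"] >= flavor_ram]

def schedule(hosts, flavor_ram):
    """Filter, then weigh the survivors and pick the best (most free RAM)."""
    survivors = ram_filter(hosts, flavor_ram)
    if not survivors:
        raise RuntimeError("no valid host")
    return max(survivors,
               key=lambda h: h["ram_total"] * RAM_ALLOCATION_RATIO - h["ram_used"])

hosts = [{"name": "host1", "ram_total": 16384, "ram_used": 40000},   # 9152 MB free
         {"name": "host2", "ram_total": 16384, "ram_used": 8192}]    # 40960 MB free
```

	The real FilterScheduler chains many filters (Retry, AvailabilityZone, Ram, Core, Disk, ...) in exactly this shape: each filter narrows the list, then the weighers rank what remains.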
		
Setting up the nova component
	Set up the nova controller node
	Database operations
	mysql -uroot -popenstack
	create database nova_api;
	create database nova;
	create database nova_cell0;
	grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'openstack';
	grant all privileges on nova_api.* to 'nova'@'%' identified by 'openstack';
	grant all privileges on nova.* to 'nova'@'localhost' identified by 'openstack';
	grant all privileges on nova.* to 'nova'@'%' identified by 'openstack';
	grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'openstack';
	grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'openstack';
	
	Create the nova user and give it the admin role in the service project
	source admin-openrc

	openstack user create --domain default --password-prompt nova	#create the nova user

	openstack role add --project service --user nova admin	#give the nova user the admin role in the service project

	Create the nova service and endpoints
	openstack service create --name nova --description "OpenStack Compute" compute
	
	openstack endpoint create --region RegionOne compute public http://quan:8774/v2.1
	openstack endpoint create --region RegionOne compute internal http://quan:8774/v2.1
	openstack endpoint create --region RegionOne compute admin http://quan:8774/v2.1
	
	Create the placement user and give it the admin role in the service project
	source admin-openrc

	openstack user create --domain default --password-prompt placement	#create the placement user

	openstack role add --project service --user placement admin	#give the placement user the admin role in the service project

	Create the placement service and endpoints
	openstack service create --name placement --description "Placement API" placement
	
	openstack endpoint create --region RegionOne placement public http://quan:8778
	openstack endpoint create --region RegionOne placement internal http://quan:8778
	openstack endpoint create --region RegionOne placement admin http://quan:8778

	
	Deleting an endpoint:
	List the endpoints:
		openstack endpoint list | grep placement
	Delete an endpoint by id:
		openstack endpoint delete <endpoint-id>


	Install the packages and configure
	yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
	
	vi /etc/nova/nova.conf
	[DEFAULT]
	enabled_apis = osapi_compute,metadata
	transport_url = rabbit://openstack:openstack@quan
	my_ip = 172.16.1.221
	use_neutron = True
	firewall_driver = nova.virt.firewall.NoopFirewallDriver
	
	[api_database]
	connection = mysql+pymysql://nova:openstack@quan/nova_api
	
	[database]
	connection = mysql+pymysql://nova:openstack@quan/nova

	[api]
	auth_strategy = keystone
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = nova
	password = openstack
	
	[vnc]
	enabled = true
	vncserver_listen = 172.16.1.221
	vncserver_proxyclient_address = 172.16.1.221
	
	[glance]
	api_servers = http://quan:9292
	
	[oslo_concurrency]
	lock_path = /var/lib/nova/tmp
	
	[placement]
	os_region_name = RegionOne
	project_domain_name = Default
	project_name = service
	auth_type = password
	user_domain_name = Default
	auth_url = http://quan:35357/v3
	username = placement
	password = openstack
	
	
	vim /etc/httpd/conf.d/00-nova-placement-api.conf		#append at the end
	<Directory /usr/bin>
		<IfVersion >= 2.4>
			Require all granted
		</IfVersion>
		<IfVersion < 2.4>
			Order allow,deny
			Allow from all
		</IfVersion>
	</Directory>
	
	Restart the httpd service
	systemctl restart httpd

	Edit this file to work around a bug when initializing the nova_api database schema:
	vi /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py
	add "use_tpool" on line 175

	Initialize the nova_api database schema
	su -s /bin/sh -c "nova-manage api_db sync" nova

	Register the cell0 database
	su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

	Create cell1
	su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

	Initialize the nova database
	su -s /bin/sh -c "nova-manage db sync" nova

	Verify that cell0 and cell1 are registered
	nova-manage cell_v2 list_cells

	Start the services and enable them at boot:
	systemctl enable openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
	systemctl start openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

	Verify
	openstack compute service list
	
	
	Set up the nova compute node
	Install the packages and configure
	yum -y install openstack-nova-compute
	
	vim /etc/nova/nova.conf
	[DEFAULT]
	enabled_apis = osapi_compute,metadata
	transport_url = rabbit://openstack:openstack@quan
	my_ip = 172.16.1.222
	use_neutron = True
	firewall_driver = nova.virt.firewall.NoopFirewallDriver
	
	[api]
	auth_strategy = keystone
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = nova
	password = openstack
	
	[vnc]
	enabled = True
	vncserver_listen = 0.0.0.0
	vncserver_proxyclient_address = 172.16.1.222
	novncproxy_base_url = http://172.16.1.221:6080/vnc_auto.html
	
	[glance]
	api_servers = http://quan:9292
	
	[oslo_concurrency]
	lock_path = /var/lib/nova/tmp
	
	[placement]
	os_region_name = RegionOne
	project_domain_name = Default
	project_name = service
	auth_type = password
	user_domain_name = Default
	auth_url = http://quan:35357/v3
	username = placement
	password = openstack
	
	Check whether the machine supports hardware virtualization
	egrep -c '(vmx|svm)' /proc/cpuinfo
	If this returns 0, edit /etc/nova/nova.conf:
	vi /etc/nova/nova.conf
	[libvirt]
	virt_type = qemu

	Start the services and enable them at boot:
	systemctl enable libvirtd openstack-nova-compute && systemctl start libvirtd openstack-nova-compute
	
	
	Add the compute node to the cell database (run on the controller)
	source admin-openrc
	openstack compute service list --service nova-compute
	su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
	
	To have the controller discover new compute hosts automatically, set:
	vi /etc/nova/nova.conf
	[scheduler]
	discover_hosts_in_cells_interval = 300
	
	Verify
	source admin-openrc
	openstack compute service list
	openstack catalog list
	openstack image list
	nova-status upgrade check
	
	
neutron component
	Neutron is the OpenStack project that provides network services between interface devices, and it is driven by other OpenStack services such as Nova. Neutron gives an OpenStack cloud a more flexible way to partition the physical network, and in a multi-tenant environment it provides each tenant with an isolated network environment, exposing an API to achieve this. A Neutron "network" is an object users can create; mapped onto the physical world, it behaves like a huge switch with an unlimited number of dynamically creatable and destroyable virtual ports.

	The network virtualization capabilities Neutron provides:
	(1) Virtualization from layer 2 to layer 7: L2 (virtual switch), L3 (virtual router and LB), L4-L7 (virtual firewall), etc.
	(2) Network connectivity: layer-2 and layer-3 networks
	(3) Tenant isolation
	(4) Network security
	(5) Network scalability
	(6) A REST API
	(7) Higher-level services, such as LBaaS
	
	Neutron architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Ei6CaKeBs.55JXz9GIW8xuGBeMGe*rVaB*3D3cGQDsY!/b/dFIBAAAAAAAA&bo=vQLoAQAAAAADB3Q!&rf=viewer_4
	Overall, creating a Neutron network goes like this:
	1. The administrator obtains a block of internet-routable IP addresses and creates an external network and subnet
	2. The tenant creates a network and subnet
	3. The tenant creates a router connecting the tenant subnet and the external network
	4. The tenant creates virtual machines

	Concepts in Neutron
	network: an isolated layer-2 broadcast domain. Neutron supports several network types: local, flat, VLAN, VXLAN, and GRE.
		local: a local network is isolated from other networks and nodes. Instances on a local network can only talk to instances on the same network on the same node; local networks are mainly used for single-node testing.
		flat: a flat network carries no VLAN tagging. Instances on a flat network can talk to instances on the same network, including across nodes.
		vlan: a vlan network carries 802.1q tagging. A VLAN is a layer-2 broadcast domain: instances in the same VLAN communicate directly, while different VLANs communicate only through a router. VLAN networks span nodes and are the most widely used network type.
		vxlan: an overlay network based on tunneling. A vxlan network is distinguished from other vxlan networks by a unique segmentation ID (also called the VNI). Packets in a vxlan are encapsulated with the VNI into UDP packets for transport. Because layer-2 frames travel encapsulated over layer 3, vxlan overcomes the limits of VLANs and the physical network infrastructure.
		gre: an overlay network similar to vxlan; the main difference is that it encapsulates in IP packets rather than UDP. Different networks are isolated from each other at layer 2.

		A network must belong to a Project (Tenant), and a Project can create multiple networks: the relationship between Project and network is one-to-many.
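	To make the VXLAN description above concrete, here is a sketch of the 8-byte VXLAN header that carries the 24-bit VNI inside the UDP payload. It is simplified and illustrative: real packets wrap this header (plus the inner Ethernet frame) in outer IP/UDP headers as well.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags word, then VNI in the top 24 bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08000000          # the 'I' bit set: a valid VNI is present
    return struct.pack("!II", flags, vni << 8)   # low byte of 2nd word reserved

def vni_of(header):
    """Recover the segmentation ID from a VXLAN header."""
    _, word = struct.unpack("!II", header)
    return word >> 8
```

	The 24-bit VNI is why VXLAN scales to ~16 million segments, versus the 4094 IDs available to 802.1q VLANs.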
	subnet: a subnet is an IPv4 or IPv6 address block. Instance IPs are allocated from a subnet. Each subnet defines its address range and mask.
		network to subnet is one-to-many: a subnet belongs to exactly one network, while a network can have multiple subnets. Those subnets may be different IP blocks, but they must not overlap.
			Example: a valid configuration
				network A
					subnet A-a: 10.10.1.0/24 {"start":"10.10.1.1","end":"10.10.1.50"}
					subnet A-b: 10.10.2.0/24 {"start":"10.10.2.1","end":"10.10.2.50"}
				An invalid configuration (the subnets overlap)
				network A
					subnet A-a: 10.10.1.0/24 {"start":"10.10.1.1","end":"10.10.1.50"}
					subnet A-b: 10.10.1.0/24 {"start":"10.10.1.51","end":"10.10.1.100"}
			Note: what is checked is not whether the allocated IP ranges overlap, but whether the subnets themselves overlap (10.10.1.0/24)
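	The overlap rule above is exactly what the stdlib ipaddress module computes: even disjoint allocation pools fail if they sit in the same (or a containing) CIDR block. A minimal check:

```python
import ipaddress

def subnets_overlap(cidr_a, cidr_b):
    """True if the two CIDR blocks share any addresses (pool ranges ignored)."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return a.overlaps(b)

# The "invalid configuration" above: same /24 twice, so it overlaps even
# though the pools 1-50 and 51-100 are disjoint.
assert subnets_overlap("10.10.1.0/24", "10.10.1.0/24")
# The valid configuration: two distinct /24 blocks.
assert not subnets_overlap("10.10.1.0/24", "10.10.2.0/24")
```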
	port: a port can be seen as a port on a virtual switch. A port defines a MAC address and an IP address; when an instance's virtual NIC, the VIF (Virtual Interface), is bound to the port, the port assigns its MAC and IP to the VIF. subnet to port is one-to-many: a port belongs to exactly one subnet, and a subnet can have many ports.
			
	
	Plugins and agents in Neutron
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Gm3J*.Vh27nLny6oXfuZlh.yXNYx.YE3I*Mwoea.MH4!/b/dL4AAAAAAAAA&bo=pAKJAQAAAAADBww!&rf=viewer_4
	
Setting up neutron

	linuxbridge + vxlan mode

	Controller node:
	Database operations
	mysql -uroot -popenstack
	create database neutron;
	grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'openstack';
	grant all privileges on neutron.* to 'neutron'@'%' identified by 'openstack';
	
	Create the neutron user and give it the admin role in the service project
	source admin-openrc
	
	openstack user create --domain default --password-prompt neutron
	
	openstack role add --project service --user neutron admin
	
	Create the network service and endpoints
	openstack service create --name neutron --description "Openstack Networking" network
	
	openstack endpoint create --region RegionOne network public http://quan:9696
	openstack endpoint create --region RegionOne network internal http://quan:9696
	openstack endpoint create --region RegionOne network admin http://quan:9696
	
	Install the packages and configure
	yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
	
	vi /etc/neutron/neutron.conf
	[database]
	connection = mysql+pymysql://neutron:openstack@quan/neutron
	
	[DEFAULT]
	core_plugin=ml2
	service_plugins = router
	allow_overlapping_ips = true
	
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	notify_nova_on_port_status_changes = true
	notify_nova_on_port_data_changes = true
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = neutron
	password = openstack
	
	[nova]
	auth_url = http://quan:35357
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = nova
	password = openstack
	
	[oslo_concurrency]
	lock_path = /var/lib/neutron/tmp
	
	
	vi /etc/neutron/plugins/ml2/ml2_conf.ini
	[ml2]
	type_drivers = flat,vlan,vxlan
	
	tenant_network_types = vxlan
	
	mechanism_drivers = linuxbridge,l2population
	
	extension_drivers = port_security
	
	[ml2_type_flat]
	flat_networks = provider
	
	[ml2_type_vxlan]
	vni_ranges = 1:1000
	
	[securitygroup]
	enable_ipset = true
	
	vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
	[linux_bridge]
	physical_interface_mappings = provider:ens34			#the external NIC device
	
	[vxlan]
	enable_vxlan = true
	local_ip = 172.16.1.221
	l2_population = true
	
	[securitygroup]
	enable_security_group = true
	firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
	
	Make sure the kernel supports bridge filtering
	echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
	echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
	sysctl -p  #if this reports "No such file or directory", do the following:
		modinfo br_netfilter  #inspect the kernel module
		modprobe br_netfilter	#load the kernel module
		then run sysctl -p again
	
	vi /etc/neutron/l3_agent.ini
	[DEFAULT]
	interface_driver = linuxbridge
	
	
	vi /etc/neutron/dhcp_agent.ini
	[DEFAULT]
	interface_driver = linuxbridge
	dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
	enable_isolated_metadata = true
	
	
	vi /etc/neutron/metadata_agent.ini
	[DEFAULT]
	nova_metadata_host = 172.16.1.221
	metadata_proxy_shared_secret = openstack
	
	
	vi /etc/nova/nova.conf
	[neutron]
	url = http://quan:9696
	auth_url = http://quan:35357
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = neutron
	password = openstack
	service_metadata_proxy = true
	metadata_proxy_shared_secret = openstack
	
	
	Create the plugin symlink
	ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

	Initialize the neutron database
	su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

	Restart the nova service
	systemctl restart openstack-nova-api

	Start the services and enable them at boot:
	systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
	systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

	systemctl enable neutron-l3-agent && systemctl start neutron-l3-agent
	
	
	Compute node:
	Install the packages and configure
	yum -y install openstack-neutron-linuxbridge ebtables ipset
	
	vi /etc/neutron/neutron.conf
	[DEFAULT]
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = neutron
	password = openstack
	
	[oslo_concurrency]
	lock_path = /var/lib/neutron/tmp
	
	
	vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
	[linux_bridge]
	physical_interface_mappings = provider:ens34
	
	[vxlan]
	enable_vxlan = true
	local_ip = 172.16.1.222
	l2_population = true
	
	[securitygroup]
	enable_security_group = true
	firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
	
	
	vi /etc/nova/nova.conf
	[neutron]
	url = http://quan:9696
	auth_url = http://quan:35357
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = neutron
	password = openstack
	
	Restart the nova-compute service
	systemctl restart openstack-nova-compute
	
	Start the service and enable it at boot:
	systemctl enable neutron-linuxbridge-agent && systemctl start neutron-linuxbridge-agent
	
	Verify (on the controller)
	source admin-openrc
	openstack extension list --network
	openstack network agent list
	

horizon component
	horizon: the UI (Dashboard). The web management portal for the various OpenStack services, simplifying how users operate them.

Setting up horizon
	Install the packages and configure
	yum -y install openstack-dashboard
	
	vim /etc/openstack-dashboard/local_settings
	OPENSTACK_HOST = "quan"
	ALLOWED_HOSTS = ['*']
	
	SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
	
	CACHES = {
		'default':{
			'BACKEND':'django.core.cache.backends.memcached.MemcachedCache',
			'LOCATION':'quan:11211',
		}
	}
	#comment out the other cache backends
	
	OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" %OPENSTACK_HOST
	OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
	
	OPENSTACK_API_VERSIONS = {
		"identity":3,
		"image":2,
		"volume":2,	
	}
	OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
	OPENSTACK_KEYSTONE_DEFAULT_ROLE= 'user'
	
	OPENSTACK_NEUTRON_NETWORK = {
		...
		'enable_quotas':True,
		'enable_distributed_router':True,
		'enable_ha_router':True,
		'enable_lb':True,
		'enable_firewall':True,
	'enable_vpn':False,
		'enable_fip_topology_check':True,
	}
	
	TIME_ZONE = "Asia/Chongqing"
	
	
	vi /etc/httpd/conf.d/openstack-dashboard.conf
	WSGIApplicationGroup %{GLOBAL}
	
	Restart the related services
	systemctl restart httpd.service memcached.service
	
	Access URL: http://172.16.1.221/dashboard/

	To disable the domain field on the login page:
	vi /etc/openstack-dashboard/local_settings
	#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True		#comment out this line
	Restart the related services
	systemctl restart httpd.service memcached.service

	Username: admin  Password: openstack
	
	
Creating a virtual machine instance from the command line
	Create the provider (external) network
	source admin-openrc

	openstack network create --share --external \
	--provider-physical-network provider \
	--provider-network-type flat provider

	Create the external subnet (on the same network as the physical network):
	openstack subnet create --network provider \
	--allocation-pool start=172.16.1.231,end=172.16.1.240 \
	--dns-nameserver 8.8.4.4 --gateway 172.16.1.1 \
	--subnet-range 172.16.1.0/24 provider
	
	Create the private (self-service) network
	source demo-openrc

	openstack network create selfservice		#create the private network

	Create the private network's subnet:
	openstack subnet create --network selfservice \
	--dns-nameserver 8.8.4.4 --gateway 192.168.0.1 \
	--subnet-range 192.168.0.0/24 selfservice

	openstack router create router		#create the virtual router

	openstack router add subnet router selfservice		#attach the subnet to the router

	openstack router set router --external-gateway provider		#set the router's external gateway
	
	
	Verify
	source admin-openrc
	ip netns
	openstack port list --router router

	ping -c 4 <router-gateway-ip>

	Create a flavor (the template a VM boots from: how many CPUs, how much memory)
	openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
	
	List the created flavors
	source demo-openrc
	openstack flavor list
	
	Generate a key pair
	source demo-openrc
	ssh-keygen -q -N ""
	openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
	openstack keypair list
	
	Add security group rules
	openstack security group rule create --proto icmp default  #allow ping
	openstack security group rule create --proto tcp --dst-port 22 default		#allow connections to TCP port 22
	
	Check everything
	source demo-openrc
	openstack flavor list
	openstack image list
	openstack network list
	openstack security group list
	openstack security group rule list
	
	
	Launch an instance
	Create a virtual machine (the image and network may be given by id or by name):
	openstack server create --flavor m1.nano --image cirros \
	--nic net-id=SELFSERVICE_NET_ID --security-group default \
	--key-name mykey selfservice-instance

	List the virtual machines
	openstack server list	#list your virtual machines
	openstack server show <vm-id>	#show a virtual machine's details

	Bind a floating IP through the web UI

	View the virtual machine's console log
	openstack console log show <vm-id>
	
	
cinder component
	cinder: provides a REST API that lets users query and manage volumes, volume snapshots, and volume types;
			provides a scheduler that dispatches volume-creation requests and optimizes the allocation of storage resources;
			supports, through its driver architecture, many back-end storage options, including LVM, NFS, Ceph, and commercial storage products such as EMC and IBM.

	cinder architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/FpuhoZP0gP2rwhfFn*1Q1BXUZlHCtEvh7xmNRgJYqiw!/b/dL8AAAAAAAAA&bo=CQIYAQAAAAARByI!&rf=viewer_4

	cinder components:
		cinder-api: receives API requests and calls cinder-volume to carry them out
		cinder-volume: the service that manages volumes; it works with the volume provider to manage the volume life cycle. Nodes running the cinder-volume service are called storage nodes.
		cinder-scheduler: picks the most suitable storage node on which to create a volume, according to its scheduling algorithm
		volume provider: the storage device that supplies physical space for volumes. cinder-volume supports many volume providers, each coordinating with cinder-volume through its own driver.
		Message Queue: the cinder subservices communicate and cooperate through the message queue. The queue decouples the subservices; this loose structure is an important trait of distributed systems.
		Database: some cinder data must be kept in a database, generally MySQL, installed on the controller node.
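	cinder-scheduler's job above can be sketched the same way as nova's scheduler: drop the backends that cannot fit the volume, then pick the best survivor. Illustrative only; the backend names and the "most free capacity" weigher are made-up stand-ins for cinder's real filter/weigher chain.

```python
def pick_backend(backends, size_gb):
    """Filter backends that can fit the volume, then pick the one with most free space."""
    fits = [b for b in backends if b["free_gb"] >= size_gb]
    if not fits:
        raise RuntimeError("no backend can host the volume")
    return max(fits, key=lambda b: b["free_gb"])

backends = [{"name": "lvm@storage", "free_gb": 180},
            {"name": "nfs@storage", "free_gb": 40}]
```

	For a 50 GB request only the LVM backend qualifies; for a 30 GB request both do, and the weigher prefers the one with more headroom.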
	
Setting up the cinder component
	Controller node
	Database operations
	mysql -uroot -popenstack
	create database cinder;
	grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'openstack';
	grant all privileges on cinder.* to 'cinder'@'%' identified by 'openstack';
	
	Create the cinder user and give it the admin role in the service project
	source admin-openrc
	openstack user create --domain default --password-prompt cinder
	
	openstack role add --project service --user cinder admin
	
	Create the cinder services and endpoints
	openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
	
	openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
	
	openstack endpoint create --region RegionOne volumev2 public http://quan:8776/v2/%\(project_id\)s
	openstack endpoint create --region RegionOne volumev2 internal http://quan:8776/v2/%\(project_id\)s
	openstack endpoint create --region RegionOne volumev2 admin http://quan:8776/v2/%\(project_id\)s
	
	openstack endpoint create --region RegionOne volumev3 public http://quan:8776/v3/%\(project_id\)s
	openstack endpoint create --region RegionOne volumev3 internal http://quan:8776/v3/%\(project_id\)s
	openstack endpoint create --region RegionOne volumev3 admin http://quan:8776/v3/%\(project_id\)s
	
	Install and configure the packages
	yum -y install openstack-cinder
	
	vim /etc/cinder/cinder.conf
	[database]
	connection = mysql+pymysql://cinder:openstack@quan/cinder
	
	[DEFAULT]
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	my_ip = 172.16.1.221
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = cinder
	password = openstack
	
	[oslo_concurrency]
	lock_path = /var/lib/cinder/tmp
	
	Populate the database
	su -s /bin/sh -c "cinder-manage db sync" cinder

	Configure the Compute service to use Cinder
	vi /etc/nova/nova.conf
	[cinder]
	os_region_name = RegionOne
	
	Restart the Compute service
	systemctl restart openstack-nova-api
	
	Start the services and enable them at boot
	systemctl enable openstack-cinder-api openstack-cinder-scheduler && systemctl start openstack-cinder-api openstack-cinder-scheduler

	Verify
	openstack volume service list	#the service is up when its state shows "up"
	
	
	Storage node (needs at least one disk besides the system disk)
	Install and configure the packages
	yum -y install lvm2 device-mapper-persistent-data
	
	systemctl enable lvm2-lvmetad && systemctl start lvm2-lvmetad

	pvcreate /dev/sdb		#create the PV
	vgcreate cinder-volumes /dev/sdb		#create the VG (the name must match volume_group in cinder.conf)
	
	vi /etc/lvm/lvm.conf
	devices {
	        filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
	}
	#"a" accepts a device, "r" rejects it; patterns are tried in order and the first match wins
	
	Run lsblk to check whether the system installation uses LVM; if the sda disk is not an LVM member, the "a/sda/" entry can be omitted
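	The accept/reject behaviour of the filter can be illustrated with a short sketch: patterns are tried in order and the first one that matches decides the outcome. This mimics only the matching rule, not LVM's own code.

```python
import re

# Sketch of LVM's devices filter evaluation: "a" accepts, "r" rejects,
# and the first matching pattern wins. The list below mirrors the
# lvm.conf example: accept sda and sdb, reject everything else.
FILTER = [("a", r"sda"), ("a", r"sdb"), ("r", r".*")]

def lvm_accepts(device):
    for action, pattern in FILTER:
        if re.search(pattern, device):
            return action == "a"
    return True  # LVM accepts a device that no pattern matched

print(lvm_accepts("/dev/sdb"))  # True
print(lvm_accepts("/dev/sdc"))  # False (caught by the catch-all "r/.*/")
```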
	
	yum -y install openstack-cinder targetcli python-keystone
	
	vi /etc/cinder/cinder.conf
	[database]
	connection = mysql+pymysql://cinder:openstack@quan/cinder
	
	[DEFAULT]
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	my_ip = 172.16.1.223
	
	enabled_backends = lvm
	
	glance_api_servers = http://quan:9292
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = cinder
	password = openstack
	
	[lvm]
	volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
	volume_group = cinder-volumes		#the VG name
	iscsi_protocol = iscsi
	iscsi_helper = lioadm
	
	[oslo_concurrency]
	lock_path = /var/lib/cinder/tmp
	
	Start the services and enable them at boot
	systemctl enable openstack-cinder-volume target && systemctl start openstack-cinder-volume target
	
	
	Verify
	source admin-openrc
	openstack volume service list
	
	
Attaching a volume to a VM
	Commands:
		source demo-openrc
		openstack volume create --size 2 volume2	#--size sets the volume size in GB (2 GB here)
		openstack volume list	#status "available" means the volume is ready to use
		
		openstack server add volume selfservice-instance volume2	#attach the volume to the VM
		openstack volume list	#status is now "in-use"
		
		Log in to the VM and run fdisk -l to see the attached disk
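		The status values seen in openstack volume list follow a simple lifecycle, sketched below. (Transient states such as "creating" and "attaching" are omitted; this is an illustration, not Cinder's actual state machine.)

```python
# Sketch of the volume status transitions observed above:
# a new volume is "available", attaching it makes it "in-use",
# detaching returns it to "available".
class Volume:
    def __init__(self, name):
        self.name = name
        self.status = "available"
        self.server = None

    def attach(self, server):
        if self.status != "available":
            raise ValueError("volume is not available")
        self.status = "in-use"
        self.server = server

    def detach(self):
        if self.status != "in-use":
            raise ValueError("volume is not attached")
        self.status = "available"
        self.server = None

v = Volume("volume2")
v.attach("selfservice-instance")
print(v.status)  # in-use
v.detach()
print(v.status)  # available
```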
		
		
		
The Swift component
	Swift: known as object storage; it offers strong scalability, redundancy, and durability, and is used for long-term storage of permanent, static data
	
	
Deploying the Swift component
	Controller node
	Create the swift user and grant it the admin role in the service project
	source admin-openrc
	openstack user create --domain default --password-prompt swift
	
	openstack role add --project service --user swift admin
	
	Create the Swift service and endpoints
	openstack service create --name swift --description "OpenStack Object Storage" object-store
	
	openstack endpoint create --region RegionOne object-store public http://quan:8080/v1/AUTH_%\(project_id\)s
	openstack endpoint create --region RegionOne object-store internal http://quan:8080/v1/AUTH_%\(project_id\)s
	openstack endpoint create --region RegionOne object-store admin http://quan:8080/v1

	Install the packages
	yum -y install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
	
	Download the proxy-server.conf sample and configure it
	curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/queens
	
	vi /etc/swift/proxy-server.conf
	[DEFAULT]
	bind_port = 8080
	swift_dir = /etc/swift
	user = swift
	
	[pipeline:main]
	pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

	[app:proxy-server]
	use = egg:swift#proxy
	
	account_autocreate = True
	
	[filter:keystoneauth]
	use = egg:swift#keystoneauth
	
	operator_roles = admin,user
	
	[filter:authtoken]
	paste.filter_factory = keystonemiddleware.auth_token:filter_factory
	
	www_authenticate_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_id = default
	user_domain_id = default
	project_name = service
	username = swift
	password = openstack
	delay_auth_decision = True
	
	[filter:cache]
	memcache_servers = quan:11211
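	The pipeline line in proxy-server.conf is a chain of WSGI middlewares: a request enters at the left and each middleware hands it on to the next until it reaches the final proxy-server app. A minimal sketch of that wrapping, with a shortened, illustrative pipeline (not Swift's own middleware code):

```python
# Sketch of how a WSGI "pipeline" chains middlewares around the final app.
def make_middleware(name, app):
    def middleware(request):
        request["trace"].append(name)   # record that the request passed through
        return app(request)
    return middleware

def proxy_app(request):
    request["trace"].append("proxy-server")
    return "200 OK"

pipeline = ["healthcheck", "cache", "authtoken", "keystoneauth"]

# Build the chain from right to left so the leftmost name runs first
app = proxy_app
for name in reversed(pipeline):
    app = make_middleware(name, app)

request = {"trace": []}
print(app(request))       # 200 OK
print(request["trace"])   # ['healthcheck', 'cache', 'authtoken', 'keystoneauth', 'proxy-server']
```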
	
	
	Storage nodes (all of them)
	Install the packages
	yum install xfsprogs rsync
	
	Format the disks
	mkfs.xfs /dev/sdb
	mkfs.xfs /dev/sdc
	
	mkdir -p /srv/node/sdb
	mkdir -p /srv/node/sdc
	
	Configure automatic mounting
	vi /etc/fstab
	/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
	/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
	
	mount /srv/node/sdb
	mount /srv/node/sdc
	or
	mount -a
	
	vi /etc/rsyncd.conf
	uid = swift
	gid = swift
	log_file = /var/log/rsyncd.log
	
	pid_file = /var/run/rsyncd.pid
	address = 172.16.1.224      #adjust to each storage node's own address
	
	[account]
	max_connections = 2
	path = /srv/node/
	read only = False
	lock file = /var/lock/account.lock
	
	[container]
	max_connections = 2
	path = /srv/node/
	read only = False
	lock file = /var/lock/container.lock
	
	[object]
	max_connections = 2
	path = /srv/node/
	read only = False
	lock file = /var/lock/object.lock
	
	Start the service and enable it at boot
	systemctl enable rsyncd && systemctl start rsyncd
	
	Install the packages
	yum -y install openstack-swift-account openstack-swift-container openstack-swift-object
	
	Download the sample configuration files and configure them
	curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/queens
	curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/queens
	curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/queens
	
	vi /etc/swift/account-server.conf
	[DEFAULT]
	bind_ip = 172.16.1.224
	bind_port = 6202
	user = swift
	swift_dir = /etc/swift
	devices = /srv/node
	mount_check = True
	
	[pipeline:main]
	pipeline = healthcheck recon account-server
	
	[filter:recon]
	recon_cache_path = /var/cache/swift
	
	
	vi /etc/swift/container-server.conf
	[DEFAULT]
	bind_ip = 172.16.1.224
	bind_port = 6201
	user = swift
	swift_dir = /etc/swift
	devices = /srv/node
	mount_check = True
	
	[filter:recon]
	recon_cache_path = /var/cache/swift
	
	
	vi /etc/swift/object-server.conf
	[DEFAULT]
	bind_ip = 172.16.1.224
	bind_port = 6200
	user = swift
	swift_dir = /etc/swift
	devices = /srv/node
	mount_check = True
	
	[pipeline:main]
	pipeline = healthcheck recon object-server
	
	[filter:recon]
	recon_cache_path = /var/cache/swift
	recon_lock_path = /var/lock
	
	Fix file ownership and permissions
	chown -R swift:swift /srv/node
	mkdir -p /var/cache/swift
	chown -R root:swift /var/cache/swift
	chmod -R 755 /var/cache/swift
	
	End of the storage-node steps; everything above must be done on every object storage node
	Controller node steps
	cd /etc/swift
	swift-ring-builder account.builder create 10 3 1	#10 = part power (2^10 partitions), 3 = replicas, 1 = min_part_hours
	
	Add the first storage node
	swift-ring-builder account.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6202 --device sdb --weight 100
	
	swift-ring-builder account.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6202 --device sdc --weight 100
	
	Add the second storage node
	swift-ring-builder account.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6202 --device sdb --weight 100
	
	swift-ring-builder account.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6202 --device sdc --weight 100
	
	swift-ring-builder account.builder
	swift-ring-builder account.builder rebalance
	
	
	
	swift-ring-builder container.builder create 10 3 1
	
	Add the first storage node
	swift-ring-builder container.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6201 --device sdb --weight 100
	
	swift-ring-builder container.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6201 --device sdc --weight 100
	
	Add the second storage node
	swift-ring-builder container.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6201 --device sdb --weight 100
	
	swift-ring-builder container.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6201 --device sdc --weight 100
	
	swift-ring-builder container.builder
	swift-ring-builder container.builder rebalance
	
	
	swift-ring-builder object.builder create 10 3 1
	
	Add the first storage node
	swift-ring-builder object.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6200 --device sdb --weight 100
	
	swift-ring-builder object.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6200 --device sdc --weight 100
	
	Add the second storage node
	swift-ring-builder object.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6200 --device sdb --weight 100
	
	swift-ring-builder object.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6200 --device sdc --weight 100
	
	swift-ring-builder object.builder
	swift-ring-builder object.builder rebalance
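	What create 10 3 1 set up can be made concrete: 10 is the part power, so each ring has 2^10 = 1024 partitions, and an object is mapped to a partition by taking the top 10 bits of the MD5 of its hashed path. A rough sketch follows; the hash prefix/suffix values are placeholders, and this simplifies Swift's actual implementation.

```python
import hashlib

# Sketch of Swift's object-to-partition mapping: MD5 of the
# prefixed/suffixed object path, then the top `part_power` bits select
# the partition. Prefix/suffix here are illustrative, not real secrets.
PART_POWER = 10          # from `create 10 3 1`: 2**10 = 1024 partitions
HASH_PREFIX = b"prefix"
HASH_SUFFIX = b"suffix"

def get_partition(account, container, obj):
    path = "/%s/%s/%s" % (account, container, obj)
    digest = hashlib.md5(HASH_PREFIX + path.encode() + HASH_SUFFIX).digest()
    # take the top PART_POWER bits of the 128-bit digest
    return int.from_bytes(digest, "big") >> (128 - PART_POWER)

part = get_partition("AUTH_demo", "container1", "FILE")
print(part)  # a deterministic number in [0, 1023]
```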
	
	Copy the generated ring files to the object storage nodes
	scp account.ring.gz container.ring.gz object.ring.gz object01:/etc/swift/
	scp account.ring.gz container.ring.gz object.ring.gz object02:/etc/swift/
	
	Fetch the swift.conf sample
	curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/queens
	
	vi /etc/swift/swift.conf
	[swift-hash]
	swift_hash_path_suffix = HASH_PATH_SUFFIX	#replace with your own unique value and keep it secret
	swift_hash_path_prefix = HASH_PATH_PREFIX	#replace with your own unique value and keep it secret
	
	[storage-policy:0]
	name = Policy-0
	default = yes
	
	
	Distribute swift.conf to the object storage nodes
	scp /etc/swift/swift.conf object01:/etc/swift/
	scp /etc/swift/swift.conf object02:/etc/swift/
	
	Run on the controller node and on all object storage nodes
	chown -R root:swift /etc/swift
	
	Controller node
	systemctl enable openstack-swift-proxy memcached && systemctl start openstack-swift-proxy memcached
	
	Object storage nodes (all)
	systemctl enable openstack-swift-account openstack-swift-account-auditor openstack-swift-account-reaper openstack-swift-account-replicator
	systemctl start openstack-swift-account openstack-swift-account-auditor openstack-swift-account-reaper openstack-swift-account-replicator
	systemctl enable openstack-swift-container openstack-swift-container-auditor openstack-swift-container-replicator openstack-swift-container-updater
	systemctl start openstack-swift-container openstack-swift-container-auditor openstack-swift-container-replicator openstack-swift-container-updater
	systemctl enable openstack-swift-object openstack-swift-object-auditor openstack-swift-object-replicator openstack-swift-object-updater
	systemctl start openstack-swift-object openstack-swift-object-auditor openstack-swift-object-replicator openstack-swift-object-updater
	
	Verify (controller node)
	Note: check /var/log/audit/audit.log first; if SELinux denials are blocking the swift processes, fix the file context:
	chcon -R system_u:object_r:swift_data_t:s0 /srv/node
	
	source demo-openrc
	swift stat  #show the swift status
	
	openstack container create container1
	openstack object create container1 FILE	#upload a file into the container
	openstack container list	#list all containers
	openstack object list container1	#list the files in container1
	openstack object save container1 FILE	#download a file from the container
	
	

  

posted @ 2019-03-11 10:23  全心全意_运维