
Preface: when you have completely and correctly configured the entire OpenStack environment,

this is what you will be able to see and experience!!!

Let's take a quick look at the results first. May you go far, and go well, on this road.

1. Download the OpenStack Icehouse (RDO) yum repository RPM:

https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
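For reference, a minimal sketch, assuming the RPM URL above is still valid:

  wget https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
  rpm -ivh rdo-release-icehouse-3.noarch.rpm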

2. After installing this RPM on your system, three new repo files appear under /etc/yum.repos.d/.

Foreman -- you should know it; it is very powerful!

NTP (Network Time Protocol) keeps the clocks of all nodes synchronized.

Random passwords: generate them with openssl / certutil.
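For example, a quick way to generate a random password with openssl (just a sketch; any equivalent tool works):

  openssl rand -hex 10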

3. MySQL database configuration

yum -y install mysql mysql-server MySQL-python (frontend/controller node)

yum -y install MySQL-python (backend nodes)

Database configuration:
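A minimal sketch of the usual Icehouse database setup on the frontend (the bind-address value is an assumption -- use your controller's management IP):

  # /etc/my.cnf, [mysqld] section:
  bind-address = 192.168.1.10
  default-storage-engine = innodb
  collation-server = utf8_general_ci
  character-set-server = utf8

  service mysqld start
  chkconfig mysqld on
  mysql_secure_installation   # set the root password, drop the test databases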

4. Install the yum priorities plugin:

yum install yum-plugin-priorities

5. Install the EPEL repository:

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
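Then install it (a sketch, assuming the download above succeeded):

  rpm -ivh epel-release-6-8.noarch.rpm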

6. yum -y install openstack-utils

7. yum -y install openstack-selinux

8. yum -y upgrade && reboot

9. Configure the OpenStack messaging service.

[Notes by Ruiy: OpenStack uses a message broker to coordinate operations and status information among services.]

  yum -y install qpid-cpp-server

  

{Ruiy notes: to simplify this installation/test environment, disable authentication.}

Edit the /etc/qpidd.conf file and change the following key:

    auth = no
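A sketch of the whole step (the sed expression assumes an existing auth= line; append the line instead if it is absent), then start the broker:

  sed -i 's/^auth=.*/auth=no/' /etc/qpidd.conf
  service qpidd start
  chkconfig qpidd on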

[A bit of fun -- a cloud interlude!!]

Hypervisors (KVM, Xen, VMware, POWER) and OS (90% Linux or a derivative).

OpenNebula separates the cloud roles: the Frontend (runs the OpenNebula services -- the GUI and cloud/virtualization management) and the Nodes (high-performance physical machines, each running one hypervisor; KVM is the most mainstream, and the simplest to configure and maintain).

To check whether the physical machine that will run the hypervisor supports virtualization extensions, run:

grep -E "svm|vmx" /proc/cpuinfo;

Bridge configuration:
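A sketch of a typical RHEL/CentOS bridge definition for an OpenNebula node (br0, eth0, and the addresses are assumptions -- adapt them to your NIC and subnet):

  # /etc/sysconfig/network-scripts/ifcfg-br0
  DEVICE=br0
  TYPE=Bridge
  BOOTPROTO=static
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth0
  DEVICE=eth0
  BRIDGE=br0
  ONBOOT=yes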

Congratulations, now you are ready to install OpenStack services!

Now, back to business -- let's continue!!

<1. Install the Identity Service --> keystone>

The steps that demand careful attention are defining users, tenants, and roles,

and defining services and API endpoints. <As everyone knows, OpenStack was designed from the start for public clouds, so tenant and user security is of the utmost importance!>

Identity Service functions:

user management: tracks users and their permissions;

service catalog: provides a catalog of available services with their API endpoints;

endpoint: a network-accessible address, usually described by a URL.

yum install openstack-keystone python-keystoneclient

Configure keystone to use MySQL to store its data.

Be careful here -- this step is easy to get wrong:

openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:321@ruiy.openstack.cc/keystone

The format is: connection = mysql://MYSQL_USERNAME:MYSQL_PASSWORD@MYSQL_HOST_FQDN/DB_NAME, where DB_NAME is the database that stores the keystone service's data.

The command that syncs the keystone schema into the MySQL database is shown below -- it took quite some fiddling to get right!
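For reference, the standard Icehouse sync command (run it as the keystone user so the log files stay writable):

  su -s /bin/sh -c "keystone-manage db_sync" keystone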

http://wiki.incloudus.com/display/OP/Nova

For the longest time the openstack-keystone service refused to start -- it turned out the permissions on the /var/log/keystone log directory were wrong. What a hassle! Could this be a keystone bug?!
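The fix that worked here was simply resetting ownership (assuming the stock keystone user and group):

  chown -R keystone:keystone /var/log/keystone
  service openstack-keystone restart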

The rant above is just me venting -- never mind it. Let's keep going.

3. Define an authorization token to use as a shared secret between the Identity Service and the other OpenStack services. Use openssl to generate a random token and store it in the configuration file:

ADMIN_TOKEN=$(openssl rand -hex 10)   # yields something like a40b81a0c7373a4422eb
echo $ADMIN_TOKEN
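Then store it in keystone.conf (the standard Icehouse step):

  openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN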

4. Configure keystone to use PKI tokens; create the signing keys and certificates and restrict access to the generated data:
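The standard Icehouse commands for this step:

  keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
  chown -R keystone:keystone /etc/keystone/ssl
  chmod -R o-rwx /etc/keystone/ssl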

5. Performance housekeeping: purge expired tokens regularly.
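The usual approach is an hourly cron job for the keystone user (this is the stock Icehouse crontab entry):

  (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
  echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone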

6. Define users, tenants, and roles (a command sketch follows this list):

Set the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables;

create the admin user;

create the roles and tenants;

inspect the information keystone stores in its database;

link the user, role, and tenant together;

create a normal user;

link the demo user and demo tenant to the _member_ role;

create the service tenant.
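A command sketch for the list above (ADMIN_PASS, DEMO_PASS, and the e-mail addresses are placeholders; the endpoint assumes the ruiy.openstack.cc host used elsewhere in this post):

  export OS_SERVICE_TOKEN=$ADMIN_TOKEN
  export OS_SERVICE_ENDPOINT=http://ruiy.openstack.cc:35357/v2.0

  keystone user-create --name=admin --pass=ADMIN_PASS --email=admin@example.com
  keystone role-create --name=admin
  keystone tenant-create --name=admin --description="Admin Tenant"
  keystone user-role-add --user=admin --tenant=admin --role=admin

  keystone user-create --name=demo --pass=DEMO_PASS --email=demo@example.com
  keystone tenant-create --name=demo --description="Demo Tenant"
  keystone user-role-add --user=demo --role=_member_ --tenant=demo

  keystone tenant-create --name=service --description="Service Tenant"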

7. Define services and API endpoints (an example for keystone itself follows):

keystone service-create describes the service;

keystone endpoint-create associates API endpoints with the service.
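For the Identity Service itself that looks like this (host name assumed as above):

  keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
  keystone endpoint-create \
    --service-id=$(keystone service-list | awk '/ identity / {print $2}') \
    --publicurl=http://ruiy.openstack.cc:5000/v2.0 \
    --internalurl=http://ruiy.openstack.cc:5000/v2.0 \
    --adminurl=http://ruiy.openstack.cc:35357/v2.0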

 

The environment variables here are set temporarily -- my environment was not fully configured before the machine rebooted. Later we will move these variables into /etc/profile, or maintain them in a shell script.

Before the API endpoint is inserted:

After the API endpoint is inserted:

Specify an API endpoint for the Identity Service by using the returned service ID. When you specify an endpoint, you provide URLs for the public API, internal API, and admin API. In this guide the same host name is used for all three. Note that the Identity Service uses a different port for the admin API.

8. Verify the Identity Service installation.

To verify that the Identity Service is installed and configured correctly, clear the values of the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables.

These variables, which were used to bootstrap the administrative user and register the Identity Service, are no longer needed;

you can now use regular user-name-based authentication.

Request an authentication token by using the admin user and the password you chose for that user, scoped to the admin tenant:

keystone --os-username=admin --os-password=ADMIN_PASS --os-tenant-name=admin --os-auth-url=http://ruiy.openstack.cc:35357/v2.0 token-get

Set the --os-* variables in your environment to simplify command-line usage: set up an admin-openrc.sh file with the admin credentials and the admin endpoint.
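A minimal admin-openrc.sh sketch (ADMIN_PASS is a placeholder):

  export OS_USERNAME=admin
  export OS_PASSWORD=ADMIN_PASS
  export OS_TENANT_NAME=admin
  export OS_AUTH_URL=http://ruiy.openstack.cc:35357/v2.0

Source it, and the token request below needs no extra arguments: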

keystone token-get


<2. OpenStack client install and configure>

Use the OpenStack command-line clients to run simple commands that make API calls.

An overview of the OpenStack client components -- note, these are the client components, not the OpenStack service components!

install the openstack command-line clients;

prerequisite software and python package for each openstack client;


Prerequisite software:

Python 2.6 or later

setuptools

pip

Given the above, we can install all of the OpenStack clients with pip.

pip installs packages from PyPI (the Python Package Index)

and also enables you to update or remove a package:

pip install python-PROJECTclient

To upgrade or remove OpenStack client components with pip:
  1. upgrade: pip install --upgrade python-PROJECTclient
  2. remove:  pip uninstall python-PROJECTclient

set environment variables using the openstack RC file;

Override environment variable values

keystone --os-password PASSWORD service-list;

When a command runs, the --os-* options supplied on the command line take the highest precedence; anything required but not supplied there is looked up in the RC file -- found, great; not found, tears!

admin-openrc.sh for the administrative user

demo-openrc.sh for the normal user;
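The demo file is the same shape, just with the normal user's credentials and the public port (DEMO_PASS is a placeholder):

  export OS_USERNAME=demo
  export OS_PASSWORD=DEMO_PASS
  export OS_TENANT_NAME=demo
  export OS_AUTH_URL=http://ruiy.openstack.cc:5000/v2.0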

<3. OpenStack Image Service --> glance installation>

The OpenStack Image Service enables users to discover, register, and retrieve virtual machine images. The Image Service offers a REST API that enables you to query virtual machine image metadata and retrieve an actual image.

In our demo environment here, we configure the Image Service to use the file backend. This means that images uploaded to the Image Service are stored in a directory on the same system that hosts the service; by default, that directory is /var/lib/glance/images/.

Now it is time for us to play with the OpenStack Image Service.

glance-api: accepts Image API calls for image discovery, retrieval, and storage;

glance-registry: stores, processes, and retrieves metadata about images (metadata includes items such as size and type).

By now you have probably noticed a naming pattern in the OpenStack packages (core service components versus client components):

core service packages are named openstack-keystone (the identity server), openstack-glance (the image server), and so on;

client packages are named python-glanceclient, python-keystoneclient, and so on.

Database: stores image metadata. You can choose your database according to your preference; most deployments use MySQL or SQLite.

Storage repository for image files: the Image Service supports a variety of repositories, including normal file systems, Object Storage, RADOS block devices, HTTP, and Amazon S3. Some repository types support only read-only usage.

Use snapshots for backups and as templates to launch new servers.

Configure the Image Service to use the message broker -- you do remember which message broker we chose earlier, right?
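It was qpid -- so, as a sketch (host name assumed as before):

  openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend qpid
  openstack-config --set /etc/glance/glance-api.conf DEFAULT qpid_hostname ruiy.openstack.cc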

create a glance user that the image service can use to authenticate with the identity service;
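A sketch (GLANCE_PASS and the e-mail are placeholders):

  keystone user-create --name=glance --pass=GLANCE_PASS --email=glance@example.com
  keystone user-role-add --user=glance --tenant=service --role=admin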

 

Register the Image Service with the Identity Service and create the endpoint:
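A sketch, with the host name assumed as before:

  keystone service-create --name=glance --type=image --description="OpenStack Image Service"
  keystone endpoint-create \
    --service-id=$(keystone service-list | awk '/ image / {print $2}') \
    --publicurl=http://ruiy.openstack.cc:9292 \
    --internalurl=http://ruiy.openstack.cc:9292 \
    --adminurl=http://ruiy.openstack.cc:9292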

Verify the Image Service installation:

1. Download a test image:

wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

2. Upload the image to the Image Service; the syntax is:

glance image-create --name=IMAGE_LABEL --disk-format=DISK_FORMAT --container-format=CONTAINER_FORMAT --is-public=ACCESS_VALUE < IMAGE_FILE

Notes on the parameters:

Valid disk formats: qcow2, raw, vhd, vmdk, vdi, iso, aki, ari, ami.

To check an image's disk format, use the file command, e.g. [root@ruiy ~]# file cirros-0.3.2-x86_64-disk.img

CONTAINER_FORMAT: the valid container formats are bare, ovf, aki, ari, ami.

Specify bare to indicate that the image file is not in a file format that contains metadata about the virtual machine. Although this field is currently required, it is not actually used by any of the OpenStack services and has no effect on system behavior. Because the value is not used anywhere, it is safe to always specify bare as the container format.

Heh -- once you have read the paragraph above, you should understand why Docker is currently so very, very hot!!!

ACCESS_VALUE:

  true  -- all users can view and use the image;

  false -- only administrators can view and use the image.

IMAGE_FILE: the local image file to upload.

In practice:

Note: alternatively, we can point directly at an image on the public network by using --copy-from:

glance image-create --name="ruiy-cirros-0.3.2-x86_64" --disk-format=qcow2 --container-format=bare --is-public=true --copy-from http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

<4. Configure the Compute service --> controller node>

The Compute service is a cloud computing fabric controller, the main part of an IaaS system.

The Compute service is made up of the following functional areas and their underlying components:

001, API

nova-api service: accepts and responds to end-user Compute API calls. It supports the OpenStack Compute API, the Amazon EC2 API,

and a special Admin API for privileged users to perform administrative actions. It also initiates most orchestration activities, such as running an instance, and enforces some policies.

nova-api-metadata service: accepts metadata requests from instances. The nova-api-metadata service is generally used only when you run in multi-host mode with nova-network installations.

...and a whole pile more that I will not enumerate here.

Configure MySQL as the store for nova's data:
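A sketch (NOVA_DBPASS is a placeholder; host name assumed as before):

  openstack-config --set /etc/nova/nova.conf database connection mysql://nova:NOVA_DBPASS@ruiy.openstack.cc/nova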

Create a nova user that Compute uses to authenticate with the Identity Service; use the service tenant and give the user the admin role.

Configure Compute to use these credentials with the Identity Service running on ruiy.openstack.cc.

Register Compute with the Identity Service so that the other OpenStack services can locate it; register the service and specify the endpoint:
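A sketch of all three steps (NOVA_PASS and the e-mail are placeholders; host name assumed):

  keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
  keystone user-role-add --user=nova --tenant=service --role=admin

  keystone service-create --name=nova --type=compute --description="OpenStack Compute"
  keystone endpoint-create \
    --service-id=$(keystone service-list | awk '/ compute / {print $2}') \
    --publicurl=http://ruiy.openstack.cc:8774/v2/%\(tenant_id\)s \
    --internalurl=http://ruiy.openstack.cc:8774/v2/%\(tenant_id\)s \
    --adminurl=http://ruiy.openstack.cc:8774/v2/%\(tenant_id\)s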

 

The services under init.d, and the related executables, after the keystone, image, and compute services are complete:

<Configure the Compute service --> compute nodes>

The compute node receives requests from the controller node and hosts virtual machine instances.

The Compute service relies on a hypervisor to run virtual machine instances. OpenStack can use various hypervisors, but this guide uses KVM.

Actually I have no screenshot like the ones above here, and for good reason: later we will deploy the entire OpenStack environment offline. I have worked out the packages each component depends on, and we will fetch them from the repo when the time comes -- or you can simply download every rpm in your preferred repo and run yum -y localinstall *.rpm on the local machine.

Determine whether your system's processor and/or hypervisor supports hardware acceleration for virtual machines.

Run the command:

egrep -c '(vmx|svm)' /proc/cpuinfo

If it returns zero, your hardware does not support acceleration, so configure Compute to fall back to plain QEMU -- run the command:
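This is the standard Icehouse fallback -- tell libvirt to use QEMU instead of KVM:

  openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu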

 <四,Add a networking service>

1. OpenStack Networking (neutron)

2. Legacy networking (nova-network)

OpenStack Networking (neutron) manages all of the networking facets for the Virtual Networking Infrastructure (VNI) and the access-layer aspects of the Physical Networking Infrastructure (PNI) in your OpenStack environment. OpenStack Networking allows tenants to create advanced virtual network topologies including services such as firewalls, load balancers, and virtual private networks (VPNs).

modular layer 2(ML2) plug-in

configure controller node

prerequisites

Before you configure OpenStack Networking (neutron), you must create a database and Identity Service credentials, including a user and service.

create identity service credentials for Networking

create the neutron user:

Link the neutron user to the service tenant and the admin role:
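A sketch (NEUTRON_PASS and the e-mail are placeholders):

  keystone user-create --name=neutron --pass=NEUTRON_PASS --email=neutron@example.com
  keystone user-role-add --user=neutron --tenant=service --role=admin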

 

Create the neutron service and endpoint:
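A sketch, host name assumed as before:

  keystone service-create --name=neutron --type=network --description="OpenStack Networking"
  keystone endpoint-create \
    --service-id=$(keystone service-list | awk '/ network / {print $2}') \
    --publicurl=http://ruiy.openstack.cc:9696 \
    --internalurl=http://ruiy.openstack.cc:9696 \
    --adminurl=http://ruiy.openstack.cc:9696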

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient -y

To configure the Networking server component:

The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in (a configuration sketch follows the list below).

configure networking to use the identity service for authentication

configure networking to use the message broker:

configure Networking to notify Compute about network topology changes:

configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services.
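A configuration sketch for the items above (NEUTRON_DBPASS and NEUTRON_PASS are placeholders, the host name is assumed, and the compute-notifier keys are omitted for brevity):

  openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:NEUTRON_DBPASS@ruiy.openstack.cc/neutron
  openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
  openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
  openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
  openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password NEUTRON_PASS
  openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
  openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname ruiy.openstack.cc
  openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
  openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router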

 

Note: we recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.

To configure the Modular Layer 2 (ML2) plug-in:

The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service, because it does not handle instance network traffic.

To configure Compute to use Networking:

By default, most distributions configure Compute to use legacy networking; you must reconfigure Compute to manage networks through Networking.

Note: by default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
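A sketch of the relevant nova.conf keys (the values follow the stock Icehouse guide; the password and host name are placeholders/assumptions):

  openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
  openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://ruiy.openstack.cc:9696
  openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
  openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
  openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
  openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password NEUTRON_PASS
  openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://ruiy.openstack.cc:35357/v2.0
  openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
  openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
  openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron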

To finalize the installation:

The Networking service initialization scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing to the configuration file associated with your chosen plug-in. Using ML2, for example, the symbolic link must point to /etc/neutron/plugins/ml2/ml2_conf.ini.

If this symbolic link does not exist, create it using the following command:
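For ML2 that is:

  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini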

<6. Configure the network node>

Prerequisites:

Before you configure OpenStack Networking, you must enable certain kernel networking functions.

Before configuration:

After configuration:
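The functions in question are IP forwarding and relaxed reverse-path filtering; the stock Icehouse settings are:

  # append to /etc/sysctl.conf, then load with: sysctl -p
  net.ipv4.ip_forward=1
  net.ipv4.conf.all.rp_filter=0
  net.ipv4.conf.default.rp_filter=0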

to configure the Networking common components

The Networking common component configuration includes the authentication mechanism, message broker, and plug-in:

1,configure Networking to use the identity service for authentication

2,configure Networking to use the message broker;

3,configure Networking to use the Modular Layer 2(ML2) plug-in and associated services;

Note,We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting;

To configure the Layer-3 (L3) agent:

Note: we recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.

To configure the DHCP agent

To configure the metadata agent

The metadata agent provides configuration information, such as credentials, for remote access to instances.

Note: we recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

Note: perform the next two steps on the controller node.

On the controller node, configure Compute to use the metadata service:
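A sketch (METADATA_SECRET is a placeholder shared with the metadata agent):

  openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
  openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET
  service openstack-nova-api restart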

to configure the Modular Layer 2(ML2) plug-in

The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.

To configure the Open vSwitch(OVS) service

The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS; the external bridge br-ex handles external instance network traffic. The external bridge requires a port on the physical external network interface to provide instances with external network access; in essence, this port bridges the virtual and physical external networks in your environment.

Start the OVS (Open vSwitch) service and configure it to start when the system boots;

add an interface to the external bridge that connects to the physical external network interface:
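A sketch (INTERFACE_NAME stands for the physical NIC attached to the external network):

  service openvswitch start
  chkconfig openvswitch on
  ovs-vsctl add-br br-ex
  ovs-vsctl add-port br-ex INTERFACE_NAME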

Note: depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.

To temporarily disable GRO on the external network interface while testing your environment:
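For example:

  ethtool -K INTERFACE_NAME gro off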

to finalize the installation

When configuring OpenStack Networking (neutron), all three node roles need to be configured -- in my setup the controller, compute, and neutron roles all live on a single server!!

To install the Networking components

To configure the Networking common components

The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.

instance_tunnels_interface_ip_address;

GRE (Generic Routing Encapsulation)

GRO (Generic Receive Offload)

Create initial networks.

Before launching your first instance, you must create the necessary virtual network infrastructure.

External network:

The external network typically provides internet access for your instances. By default, this network only allows internet access from instances using Network Address Translation (NAT). You can enable internet access to individual instances using a floating IP address and suitable security group rules.

The admin tenant owns this network because it provides external network access for multiple tenants; you must also enable sharing to allow access by those tenants.

Note: perform these commands on the controller node.

To create the external network,

you must first have the admin tenant credentials.

(Port 5672 is qpid, the message broker.)

To create a subnet on the external network,

the syntax is:

neutron subnet-create ext-net --name ext-subnet --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
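A concrete sketch using the documentation address range 203.0.113.0/24 (substitute your real provider network):

  neutron net-create ext-net --shared --router:external=True
  neutron subnet-create ext-net --name ext-subnet \
    --allocation-pool start=203.0.113.101,end=203.0.113.200 \
    --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24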

 

Tenant network:

The tenant network provides internal network access for instances. The architecture isolates this type of network from other tenants. The demo tenant owns this network because it only provides network access for the instances within it.

1. Source the demo tenant credentials.

Like the external network, your tenant network also requires a subnet attached to it. You can specify any valid subnet because the architecture isolates tenant networks.

To create a subnet on the tenant network,

the syntax is:

neutron subnet-create demo-net-ruiy --name demo-subnet-ruiy --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
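A concrete sketch (the 192.168.1.0/24 range is an assumption -- any valid subnet works, since tenant networks are isolated):

  neutron net-create demo-net-ruiy
  neutron subnet-create demo-net-ruiy --name demo-subnet-ruiy --gateway 192.168.1.1 192.168.1.0/24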

Legacy networking (nova-network)

Configure the controller node:

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API

openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova

Restart the Compute services:

service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

[Note: our current environment uses neutron; here we only take a quick look at how legacy networking would be configured.]

Configure the compute node:

This section covers deployment of a simple flat network that provides IP addresses to your instances via DHCP. If your environment includes multiple compute nodes,

the multi-host feature provides redundancy by spreading network functions across the compute nodes.

To install legacy networking components

 yum -y install openstack-nova-network openstack-nova-api;

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API

openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova

openstack-config --set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager

openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.libvirt.firewall.IptablesFirewallDriver

openstack-config --set /etc/nova/nova.conf DEFAULT network_size 254

openstack-config --set /etc/nova/nova.conf DEFAULT allow_same_net_traffic False

openstack-config --set /etc/nova/nova.conf DEFAULT multi_host True

openstack-config --set /etc/nova/nova.conf DEFAULT send_arp_for_ha True

openstack-config --set /etc/nova/nova.conf DEFAULT share_dhcp_address True

openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release True

openstack-config --set /etc/nova/nova.conf DEFAULT flat_network_bridge br100

openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface INTERFACE_NAME

openstack-config --set /etc/nova/nova.conf DEFAULT public_interface INTERFACE_NAME

Create the initial network.

Before launching your first instance, you must create the necessary virtual network infrastructure to which the instance will connect. This network typically provides internet access from instances. You can enable internet access to individual instances using a floating IP address and suitable security group rules. The admin tenant owns this network because it provides external network access for multiple tenants.

nova network-create demo-net --bridge br100 --multi-host T --fixed-range-v4 NETWORK_CIDR

 
