devstack installation and configuration (deploying a development/test environment)
Without further ado, here are the steps:
1. Install devstack
Environment: a physical server is recommended.
devstack must not be run directly as root, so create a new user, stack:
adduser -m stack
passwd stack
Clone the devstack source from GitHub, then run the stack-user setup script as root:
su - stack
git clone https://github.com/openstack-dev/devstack.git
su - root
/home/stack/devstack/tools/create-stack-user.sh
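The stack user needs passwordless sudo for stack.sh to work; the create-stack-user.sh script above takes care of this. For reference only, the resulting sudoers entry looks like this:

```
stack ALL=(ALL) NOPASSWD: ALL
```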
Create a local.conf configuration file in the /home/stack/devstack directory:
[root@devstack-liberty devstack]# cat local.conf
[[local|localrc]]
# Define images to be automatically downloaded during the DevStack build process.
IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
GIT_BASE=https://github.com

# Credentials
DATABASE_PASSWORD=openstack
ADMIN_PASSWORD=openstack
SERVICE_PASSWORD=openstack
SERVICE_TOKEN=openstack
RABBIT_PASSWORD=openstack

#FLAT_INTERFACE=em1
HOST_IP=192.168.3.191
SERVICE_HOST=192.168.3.191
MYSQL_HOST=192.168.3.191
RABBIT_HOST=192.168.3.191
GLANCE_HOSTPORT=192.168.3.191:9292

## Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="192.168.3.0/24"
FIXED_RANGE="10.0.0.0/24"
Q_FLOATING_ALLOCATION_POOL=start=192.168.3.103,end=192.168.3.110
PUBLIC_NETWORK_GATEWAY="192.168.3.1"
Q_L3_ENABLED=True
PUBLIC_INTERFACE=em1
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public:br-ex

# Work offline
#OFFLINE=True
# Reclone each time
RECLONE=no

# Logging
# -------
# By default ``stack.sh`` output only goes to the terminal where it runs. It can
# be configured to additionally log to a file by setting ``LOGFILE`` to the full
# path of the destination log file. A timestamp will be appended to the given name.
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
# Old log files are cleaned up after the number of days set by ``LOGDAYS``.
LOGDAYS=1

# Database Backend MySQL
enable_service mysql
# RPC Backend RabbitMQ
enable_service rabbit

# Enable Keystone - OpenStack Identity Service
enable_service key
# Horizon - OpenStack Dashboard Service
enable_service horizon
# Enable Swift - Object Storage Service without replication.
enable_service s-proxy s-object s-container s-account
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
# Enable Glance - OpenStack Image service
enable_service g-api g-reg
# Enable Cinder - Block Storage service for OpenStack
#VOLUME_GROUP="cinder-volumes"
enable_service cinder c-api c-vol c-sch c-bak
# Enable Heat (orchestration) Service
enable_service heat h-api h-api-cfn h-api-cw h-eng
# Enable Trove (database) Service
enable_service trove tr-api tr-tmgr tr-cond
# Enable Sahara (data_processing) Service
enable_service sahara
# Enable Tempest - The OpenStack Integration Test Suite
enable_service tempest

# Enabling Neutron (network) Service
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service neutron
## Neutron - Load Balancing
enable_service q-lbaas
## Neutron - Firewall as a Service
enable_service q-fwaas
## Neutron - VPN as a Service
enable_service q-vpn

# VLAN configuration.
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
# GRE tunnel configuration
#Q_PLUGIN=ml2
#ENABLE_TENANT_TUNNELS=True
# VXLAN tunnel configuration
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan

# Enable Ceilometer - Metering Service (metering + alarming)
enable_service ceilometer-acompute ceilometer-acentral ceilometer-collector ceilometer-api
enable_service ceilometer-alarm-notify ceilometer-alarm-eval
enable_service ceilometer-anotification

## Enable NoVNC
enable_service n-novnc

# Branches
KEYSTONE_BRANCH=stable/mitaka
NOVA_BRANCH=stable/mitaka
NEUTRON_BRANCH=stable/mitaka
SWIFT_BRANCH=stable/mitaka
GLANCE_BRANCH=stable/mitaka
CINDER_BRANCH=stable/mitaka
HEAT_BRANCH=stable/mitaka
TROVE_BRANCH=stable/mitaka
HORIZON_BRANCH=stable/mitaka
SAHARA_BRANCH=stable/mitaka
CEILOMETER_BRANCH=stable/mitaka
TACKER_BRANCH=stable/mitaka

# Select Keystone's token format
# Choose from 'UUID', 'PKI', or 'PKIZ'
KEYSTONE_TOKEN_FORMAT=${KEYSTONE_TOKEN_FORMAT:-UUID}
KEYSTONE_TOKEN_FORMAT=$(echo ${KEYSTONE_TOKEN_FORMAT} | tr '[:upper:]' '[:lower:]')

# Extra plugins (these must stay in the localrc section, before any
# [[post-config]] block)
enable_plugin networking-sfc https://github.com/openstack/networking-sfc.git master
enable_plugin tacker https://git.openstack.org/openstack/tacker stable/mitaka

[[post-config|$NOVA_CONF]]
[DEFAULT]
# Ceilometer notification driver
instance_usage_audit=True
instance_usage_audit_period=hour
notify_on_state_change=vm_and_task_state
notification_driver=nova.openstack.common.notifier.rpc_notifier
notification_driver=ceilometer.compute.nova_notifier
Run stack.sh to install OpenStack:
cd /home/stack/devstack
./stack.sh
The whole process takes roughly an hour, depending on local network speed.
2. Set up the development environment (local debugging)
Download and install Eclipse:
https://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/luna/SR1/eclipse-java-luna-SR1-linux-gtk-x86_64.tar.gz
Install the PyDev and EGit plugins; if you need to work on the front end, the Aptana plugin is also an option:
http://pydev.org/updates
http://download.eclipse.org/egit/updates
Restart Eclipse after the installation.
3. Import the source code
After importing the source code, set it as a PyDev project:
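The same thing can be done by hand: a PyDev project is simply an Eclipse project whose .project file carries the PyDev nature. The fragment below shows the relevant part for reference; the nature id is what the "Set as PyDev project" action adds:

```xml
<natures>
    <nature>org.python.pydev.pythonNature</nature>
</natures>
```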
4. Remote debugging
Steps:
Configuration on the Eclipse side
1) Add /opt/eclipse/plugins/org.python.pydev_3.8.0.201409251235/pysrc to the Python path of the project being debugged (optional).
2) Start the PyDev debug server.
Configuration on the remote Keystone server
3) Copy the pysrc directory of the Eclipse PyDev plugin to the Keystone machine and add it to the Python path:
export PYTHONPATH=$PYTHONPATH:/opt/eclipse/plugins/org.python.pydev_3.8.0.201409251235/pysrc
4) Add the following code wherever you want a breakpoint (since PYTHONPATH already points at the pysrc directory itself, import pydevd directly):
import pydevd; pydevd.settrace('10.20.0.210', stdoutToServer=True, stderrToServer=True)
5) Single-step through the code.
Advantage: this supports debugging live, running services.
Steps for this environment:
On the code side, configure the Eclipse pysrc path:
export PYTHONPATH=$PYTHONPATH:/root/eclipse/plugins/org.python.pydev_5.1.2.201606231256/pysrc
Start the PyDev debug server in the Eclipse client;
it listens on port 5678.
Where you want to debug, add the breakpoint code on the code side:
import pydevd; pydevd.settrace('192.168.3.191', stdoutToServer=True, stderrToServer=True)
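Before sprinkling settrace() calls into service code, it is worth checking that the debug server is actually reachable from the service host. A minimal sketch using bash's built-in /dev/tcp pseudo-device (the host 127.0.0.1 below is a placeholder; in this article's setup you would use 192.168.3.191, and 5678 is pydevd's default port):

```shell
#!/bin/bash
# Return success if a TCP connection to host:port can be opened.
# Uses bash's /dev/tcp pseudo-device, so no extra tools are needed;
# the subshell closes the descriptor again on exit.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Replace 127.0.0.1 with the Eclipse host running the PyDev debug server.
if port_open 127.0.0.1 5678; then
    echo "pydevd server reachable"
else
    echo "pydevd server not reachable - start the PyDev debug server first"
fi
```

If the check fails, settrace() would block or error out inside the service, so run it first from the machine where the breakpoint code will execute.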
5. The devstack server has been rebooted and the OpenStack services do not come back up on their own. What now?
The only option is to restart them with a script:
[root@devstack-liberty stack]# cat start_devstack_service.sh
#!/bin/bash

# start glance service
/usr/bin/python /usr/bin/glance-registry --config-file=/etc/glance/glance-registry.conf &> glance-registry.log &
/usr/bin/python /usr/bin/glance-api --config-file=/etc/glance/glance-api.conf &> glance-api.log &

# start nova service
/usr/bin/python /usr/bin/nova-api &> /var/log/nova/api.log &
/usr/bin/python /usr/bin/nova-conductor --config-file /etc/nova/nova.conf &> /var/log/nova/conductor.log &
/usr/bin/python /usr/bin/nova-scheduler --config-file /etc/nova/nova.conf &> /var/log/nova/scheduler.log &
/usr/bin/python /usr/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web /opt/stack/noVNC &> /var/log/nova/noVNC.log &
/usr/bin/python /usr/bin/nova-consoleauth --config-file /etc/nova/nova.conf &> /var/log/nova/consoleauth.log &
/usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf &> /var/log/nova/compute.log &

# start swift service
/usr/bin/python /bin/swift-container-updater /etc/swift/container-server/1.conf &
/usr/bin/python /bin/swift-account-auditor /etc/swift/account-server/1.conf &
/usr/bin/python /bin/swift-object-replicator /etc/swift/object-server/1.conf &
/usr/bin/python /bin/swift-container-sync /etc/swift/container-server/1.conf &
/usr/bin/python /bin/swift-container-replicator /etc/swift/container-server/1.conf &
/usr/bin/python /bin/swift-object-auditor /etc/swift/object-server/1.conf &
/usr/bin/python /bin/swift-container-auditor /etc/swift/container-server/1.conf &
/usr/bin/python /bin/swift-object-reconstructor /etc/swift/object-server/1.conf &
/usr/bin/python /bin/swift-account-reaper /etc/swift/account-server/1.conf &
/usr/bin/python /bin/swift-account-replicator /etc/swift/account-server/1.conf &
/usr/bin/python /bin/swift-object-updater /etc/swift/object-server/1.conf &
python /opt/stack/swift/bin/swift-proxy-server /etc/swift/proxy-server.conf -v &
python /opt/stack/swift/bin/swift-object-server /etc/swift/object-server/1.conf -v &
python /opt/stack/swift/bin/swift-container-server /etc/swift/container-server/1.conf -v &
python /opt/stack/swift/bin/swift-account-server /etc/swift/account-server/1.conf -v &

# start cinder service
/usr/bin/python /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf &> cinder-api.log &
/usr/bin/python /usr/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf &> cinder-scheduler.log &
/usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf &> cinder-volume.log &
/usr/bin/python /usr/bin/cinder-backup --config-file /etc/cinder/cinder.conf &

# start keystone service
/usr/bin/python /opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d --debug &> keystone-all.log &

# start neutron service
/usr/bin/python /usr/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini &
/usr/bin/python /usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini &
sudo /usr/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf &
/usr/bin/python /usr/bin/neutron-dhcp-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/dhcp_agent.ini &
sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ovsdb-client monitor Interface name,ofport,external_ids --format=json &
/usr/bin/python /usr/bin/neutron-l3-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/l3_agent.ini --config-file /etc/neutron/fwaas_driver.ini &
/usr/bin/python /usr/bin/neutron-metadata-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/metadata_agent.ini &
/usr/bin/python /usr/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini &
/usr/bin/python /usr/bin/neutron-metering-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/services/metering/metering_agent.ini &

# The following are started by neutron itself per network/router; kept here
# for reference only:
# dnsmasq --no-hosts --no-resolv --strict-order --except-interface=lo --pid-file=/opt/stack/data/neutron/dhcp/48e39114-5b80-4272-9374-bcce8fb0b83c/pid --dhcp-hostsfile=/opt/stack/data/neutron/dhcp/48e39114-5b80-4272-9374-bcce8fb0b83c/host --addn-hosts=/opt/stack/data/neutron/dhcp/48e39114-5b80-4272-9374-bcce8fb0b83c/addn_hosts --dhcp-optsfile=/opt/stack/data/neutron/dhcp/48e39114-5b80-4272-9374-bcce8fb0b83c/opts --dhcp-leasefile=/opt/stack/data/neutron/dhcp/48e39114-5b80-4272-9374-bcce8fb0b83c/leases --dhcp-match=set:ipxe,175 --bind-interfaces --interface=tap4a58e372-83 --dhcp-range=set:tag0,10.0.0.0,static,86400s --dhcp-option-force=option:mtu,1450 --dhcp-lease-max=256 --conf-file= --domain=openstacklocal
# /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_file=/opt/stack/data/neutron/external/pids/25524f59-d299-4d24-bba8-2f1318bfdafa.pid --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy --router_id=25524f59-d299-4d24-bba8-2f1318bfdafa --state_path=/opt/stack/data/neutron --metadata_port=9697 --metadata_proxy_user=1000 --metadata_proxy_group=1000 --debug --verbose
# radvd -C /opt/stack/data/neutron/ra/25524f59-d299-4d24-bba8-2f1318bfdafa.radvd.conf -p /opt/stack/data/neutron/external/pids/25524f59-d299-4d24-bba8-2f1318bfdafa.pid.radvd -m syslog

# start heat service
/usr/bin/python /usr/bin/heat-engine --config-file=/etc/heat/heat.conf &
/usr/bin/python /usr/bin/heat-api --config-file=/etc/heat/heat.conf &
/usr/bin/python /usr/bin/heat-api-cfn --config-file=/etc/heat/heat.conf &
/usr/bin/python /usr/bin/heat-api-cloudwatch --config-file=/etc/heat/heat.conf &
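A one-shot script like the above is fragile: a daemon launched without a trailing & blocks everything after it, and a daemon that dies immediately goes unnoticed. One way to harden it is a small launcher helper along these lines (start_svc and the /tmp log directory are names invented for this sketch):

```shell
#!/bin/bash
# start_svc NAME CMD...
# Launch CMD in the background, redirect its output to /tmp/NAME.log,
# and report whether the process is still alive shortly after launch.
start_svc() {
    local name="$1"; shift
    "$@" &> "/tmp/${name}.log" &
    local pid=$!
    sleep 0.2
    if kill -0 "$pid" 2>/dev/null; then
        echo "${name} started (pid ${pid})"
    else
        echo "${name} FAILED - see /tmp/${name}.log"
    fi
}

# Demonstration with a harmless placeholder command instead of a real
# OpenStack daemon; in the script above you would write e.g.
#   start_svc glance-api /usr/bin/python /usr/bin/glance-api --config-file=/etc/glance/glance-api.conf
start_svc demo sleep 5
```

Each service line then becomes one start_svc call, and a glance at the script's output shows which daemons failed to come up.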