OpenStack (Kilo) Installation Series: neutron (Part 9)
Controller node
Before you configure the OpenStack Networking (neutron) service, you must create a database, service credentials, and API endpoint.
I. Create the neutron database and grant privileges
1. Log in to the database:
mysql -u root -p
2. Create the database and grant privileges:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Replace NEUTRON_DBPASS with a suitable password.
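If you rebuild nodes often, the same statements can be kept in a small SQL script and piped into mysql non-interactively. This is only a sketch; NEUTRON_DBPASS stays a placeholder, and /tmp/neutron_db.sql is an arbitrary path chosen here:

```shell
# Write the DDL above to a throwaway script (NEUTRON_DBPASS is a placeholder).
cat > /tmp/neutron_db.sql <<'EOF'
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
EOF
grep -c GRANT /tmp/neutron_db.sql   # 2
# On the controller, apply it with: mysql -u root -p < /tmp/neutron_db.sql
```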
Source the admin credentials to gain access to admin-only CLI commands:
source admin-openrc.sh
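For reference, a minimal admin-openrc.sh for Kilo might look like the following. Every value is a placeholder assumed for illustration and must match your own Identity service setup:

```shell
# Hypothetical admin-openrc.sh contents (ADMIN_PASS and the auth URL are placeholders).
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
```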
3. To create the service credentials, complete these steps:
Create the neutron user:
openstack user create --password-prompt neutron
Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
Create the neutron service entity:
openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the Networking service API endpoint:
openstack endpoint create \
  --publicurl http://controller:9696 \
  --internalurl http://controller:9696 \
  --adminurl http://controller:9696 \
  --region RegionOne \
  network
To install the Networking components
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which
To configure the Networking server component
The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-in.
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, configure database access:
[database]
...
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
Note: Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2

[nova]
...
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
[DEFAULT]
...
verbose = True
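Hand-editing works, but the same change can be made non-interactively with sed, which is convenient when scripting the install. The sketch below runs against a scratch copy so it is safe to try anywhere; on the controller you would point it at /etc/neutron/neutron.conf instead:

```shell
# Demonstrate the edit on a scratch file rather than the real neutron.conf.
cfg=$(mktemp)
printf '[DEFAULT]\nverbose = False\n' > "$cfg"
# Flip the existing option in place; only lines already starting
# with "verbose = " are rewritten.
sed -i 's/^verbose = .*/verbose = True/' "$cfg"
grep '^verbose' "$cfg"   # verbose = True
```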
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS components because it does not handle instance network traffic.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
In the [ml2] section, enable the flat, VLAN, generic routing encapsulation (GRE), and virtual extensible LAN (VXLAN) network type drivers, GRE tenant networks, and the OVS mechanism driver:
[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch
Note: Once you configure the ML2 plug-in, changing values in the type_drivers option can lead to database inconsistency.
In the [ml2_type_gre] section, configure the tunnel identifier (id) range:
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
In the [securitygroup] section, enable security groups, enable ipset, and configure the OVS iptables firewall driver:
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
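After editing, it helps to confirm exactly which options are active, since the stock ini files are mostly comments. A grep that drops comments and blank lines shows the effective configuration at a glance. The sketch below demonstrates this on a scratch copy of the three sections above; on the controller, run the grep against /etc/neutron/plugins/ml2/ml2_conf.ini itself:

```shell
# Build a scratch copy of the ML2 settings configured above.
ml2=$(mktemp)
cat > "$ml2" <<'EOF'
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
EOF
# Show only the lines that will actually take effect (no comments, no blanks).
grep -Ev '^[[:space:]]*(#|$)' "$ml2"
```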
To configure Compute to use Networking
By default, distribution packages configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Edit the /etc/nova/nova.conf file on the controller node and complete the following actions:
In the [DEFAULT] section, configure the APIs and drivers:
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the [neutron] section, configure access parameters:
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
To finalize installation
1. The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
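Re-running ln -s against an existing link fails, so a guarded form is convenient if you script this step. The sketch below demonstrates the idea in a scratch directory so it can run anywhere; on the controller, substitute the real /etc/neutron paths:

```shell
# Scratch directory standing in for /etc/neutron.
root=$(mktemp -d)
mkdir -p "$root/plugins/ml2"
touch "$root/plugins/ml2/ml2_conf.ini"
# Create plugin.ini only when it does not already exist (idempotent).
[ -e "$root/plugin.ini" ] || ln -s "$root/plugins/ml2/ml2_conf.ini" "$root/plugin.ini"
readlink "$root/plugin.ini"
```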
2. Populate the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note: Database population occurs later for Networking because the script requires complete server and plug-in configuration files.
3. Restart the Compute services:
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
openstack-nova-conductor.service
4. Start the Networking service and configure it to start when the system boots:
systemctl enable neutron-server.service
systemctl start neutron-server.service
Verify operation
Note: Perform these commands on the controller node.
1. Source the admin credentials to gain access to admin-only CLI commands:
source admin-openrc.sh
2. List loaded extensions to verify successful launch of the neutron-server process:
neutron ext-list