Deploying Neutron
I: Understanding Neutron basics
Neutron's main job is to provide networking for cloud instances: it creates and manages virtual network devices.
1: Basic concepts
Bridge: similar to a switch, it connects different network devices. Bridges fall into internal and external bridges; the difference is whether they can reach the external network.
Network: comparable to VLAN technology, it partitions hosts into segments that may or may not be able to communicate with each other; in addition, one segment can be given external access while another is not. A network corresponds to a network segment.
Subnet: an IPv4 or IPv6 address range. Cloud instances in a subnet draw their addresses from that range. A subnet must be attached to a network; the relationship between network and subnet is one-to-many: a subnet belongs to exactly one network, but a network can hold several subnets.
Port: can be viewed as a port on a virtual switch. A port carries a MAC address and an IP address; when a virtual NIC is bound to a port, the port hands its MAC and IP to the NIC. The relationship between subnet and port is one-to-many: a port must belong to some subnet, and a subnet can hold many ports.
Subnets and ports are thus both attached to some network.
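The one-to-many relationships above (network → subnets, subnet → ports) can be sketched in a few lines of Python. This is an illustrative model only; the class and attribute names are invented here and do not correspond to Neutron's real data model.

```python
import ipaddress

# Toy model of the concepts above; names are invented, not Neutron's schema.
class Network:
    def __init__(self, name):
        self.name = name
        self.subnets = []            # one network -> many subnets

class Subnet:
    def __init__(self, network, cidr):
        self.network = network       # each subnet belongs to exactly one network
        self.pool = ipaddress.ip_network(cidr).hosts()
        self.ports = []              # one subnet -> many ports
        network.subnets.append(self)

class Port:
    def __init__(self, subnet, mac):
        self.subnet = subnet         # each port belongs to exactly one subnet
        self.mac = mac
        self.ip = next(subnet.pool)  # the IP handed to the bound virtual NIC
        subnet.ports.append(self)

net = Network("provider")
sub = Subnet(net, "192.168.20.0/24")
p1 = Port(sub, "fa:16:3e:00:00:01")
p2 = Port(sub, "fa:16:3e:00:00:02")
print(p1.ip, p2.ip)                  # addresses come out of the subnet's range
print(len(net.subnets), len(sub.ports))
```

Binding a NIC then amounts to reading the port's `mac` and `ip`, exactly as the prose above describes.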
2: Neutron's component architecture
1: Neutron modules
neutron-server: the service module of Neutron; it exposes the API and receives external requests.
neutron-plugin: a plugin implements one specific feature; vendors can develop their own plugins and drop them into the Neutron service.
neutron-agent: an agent is the plugin's counterpart on a physical device. A plugin must go through an agent to implement a concrete feature: the agent receives the plugin's parameters and carries out operations such as creating bridges or subnets, and reports any problems back to neutron-plugin.
2: Neutron's layered network model
1) Core plugin
This is the Layer-2 module (ML2), which manages OSI Layer-2 network connectivity.
2) Service plugins
All plugins other than the core plugin are service plugins; they provide the Layer-3 through Layer-7 network services.
3: Neutron's basic workflow
After neutron-server receives a request, it hands it to the matching neutron-plugin, which implements the concrete feature through a neutron-agent.
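The server → plugin → agent hand-off can be sketched as plain Python; the classes and method names below are toys invented for illustration, not Neutron's actual RPC machinery.

```python
# Toy illustration of the dispatch chain; all names are made up.
class Agent:
    def handle(self, action, params):
        # the agent is the only layer that touches (virtual) devices
        return f"agent executed {action} with {params}"

class Plugin:
    def __init__(self, agent):
        self.agent = agent
    def create(self, action, params):
        # a plugin does not configure devices itself; it delegates to its agent
        return self.agent.handle(action, params)

class Server:
    def __init__(self, plugin):
        self.plugin = plugin
    def api_request(self, action, params):
        # neutron-server only receives the API call and forwards it
        return self.plugin.create(action, params)

server = Server(Plugin(Agent()))
print(server.api_request("create_bridge", {"name": "br-ex"}))
```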
4: Network modes supported by Neutron
1: flat mode
2: vlan mode
3: vxlan and gre modes
II: Deploying Neutron
1: Preparing the network environment
1) Put the external NIC into promiscuous mode
Why promiscuous mode: the NIC must accept frames whose destination MAC is not its own, so that traffic for bridged instances can pass through it.
# Put ens33 into promiscuous mode
[root@controller ~]# ifconfig ens33 promisc
# Check the NIC
[root@controller ~]# ifconfig ens33
ens33: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        inet 192.168.20.10  netmask 255.255.255.0  broadcast 192.168.20.255
        inet6 fe80::c1ad:a3ec:efe3:297  prefixlen 64  scopeid 0x20<link>
PROMISC now shows up in the flags.
Make promiscuous mode persist across reboots:
vim /etc/profile
# Append this line
ifconfig ens33 promisc
2) Load the bridge netfilter module
[root@controller ~]# vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Load the module
[root@controller ~]# modprobe br_netfilter
# Check that the settings took effect
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
2: Installing and configuring the Neutron service on the controller node
1) Install the Neutron packages
[root@controller ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge
2) Create the neutron database and grant privileges
MariaDB [(none)]> create database neutron;
# Grant privileges
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '000000';
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by '000000';
3) Edit the Neutron service configuration files
1: Configure the Neutron component
Configuration file: /etc/neutron/neutron.conf. Make a backup first, then strip blank and comment lines.
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://rabbitmq:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:000000@controller/neutron

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = project
username = nova
password = 000000
region_name = RegionOne
server_proxyclient_address = 192.168.10.10
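One hedged way to sanity-check the edited file is to re-parse it with Python's configparser. The snippet reads an inline excerpt of the options above rather than the real /etc/neutron/neutron.conf, so it runs anywhere; on the controller you would point `cfg.read()` at the real path instead.

```python
import configparser

# Inline excerpt standing in for /etc/neutron/neutron.conf
sample = """
[DEFAULT]
core_plugin = ml2
service_plugins = router
transport_url = rabbit://rabbitmq:000000@controller
auth_strategy = keystone

[database]
connection = mysql+pymysql://neutron:000000@controller/neutron
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Options this walkthrough sets; any missing one is reported.
required = {"DEFAULT": ["core_plugin", "transport_url", "auth_strategy"],
            "database": ["connection"]}
missing = [f"{sec}/{opt}" for sec, opts in required.items()
           for opt in opts if not cfg.has_option(sec, opt)]
print("missing:", missing or "none")
```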
2: Edit the Layer-2 plugin (ml2 plugin) configuration file
Configuration file: /etc/neutron/plugins/ml2/ml2_conf.ini. Make a backup, then strip blank and comment lines.
[ml2]
type_drivers = flat,local,vlan,gre,vxlan,geneve
tenant_network_types = local,flat
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true
Enable the ml2 plugin by creating a symbolic link, since only plugins referenced under /etc/neutron take effect:
[root@controller ml2]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller neutron]# ls
conf.d          dhcp_agent.ini  l3_agent.ini        neutron.bak   plugin.ini  rootwrap.conf
dhcp_agent.bak  kill_scripts    metadata_agent.ini  neutron.conf  plugins
With plugin.ini in place, the ml2 plugin takes effect.
3: Edit the bridge agent (linuxbridge_agent) configuration file
Configuration file: /etc/neutron/plugins/ml2/linuxbridge_agent.ini. Back it up and strip comment and blank lines as before. Note that the physical network label must match the flat_networks value (provider) set above.
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
4: Edit the DHCP agent (dhcp-agent) configuration file
Configuration file: /etc/neutron/dhcp_agent.ini. Back it up and strip blank and comment lines as before.
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
5: Edit the metadata agent (metadata-agent) configuration file
Configuration file: /etc/neutron/metadata_agent.ini (no backup needed here).
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
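METADATA_SECRET here must be identical to the metadata_proxy_shared_secret written into nova.conf in the next step; mismatched secrets break the metadata proxy. Below is a small check, shown against inline copies of the two files rather than the real paths, so it runs anywhere:

```python
import configparser

# Inline stand-ins for /etc/neutron/metadata_agent.ini and /etc/nova/nova.conf
neutron_meta = """
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
"""
nova_conf = """
[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
"""

a = configparser.ConfigParser(); a.read_string(neutron_meta)
b = configparser.ConfigParser(); b.read_string(nova_conf)

# The two secrets must agree or metadata requests will be rejected.
match = (a.get("DEFAULT", "metadata_proxy_shared_secret")
         == b.get("neutron", "metadata_proxy_shared_secret"))
print("secrets match:", match)
```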
6: Edit the Nova configuration file
Configuration file: /etc/nova/nova.conf
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = project
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
4) Initialize the database
Synchronizing the database fills the neutron database with the table definitions shipped in the installed packages.
[root@controller neutron]# su neutron -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"
3: Neutron component initialization
1: Create the neutron user and assign it a role
[root@controller /]# openstack user create --domain default --password 000000 neutron
[root@controller /]# openstack role add --project project --user neutron admin
2: Create the Neutron service and its service endpoints
[root@controller /]# openstack service create --name neutron network
# Create the three endpoints
[root@controller /]# openstack endpoint create --region RegionOne neutron public http://controller:9696
[root@controller /]# openstack endpoint create --region RegionOne neutron admin http://controller:9696
[root@controller /]# openstack endpoint create --region RegionOne neutron internal http://controller:9696
3: Start the Neutron services on the controller node
1: Restart the Nova service
[root@controller /]# systemctl restart openstack-nova-api
2: Start and enable the Neutron services
[root@controller /]# systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
[root@controller /]# systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
4: Verifying the Neutron services on the controller node
1: Check the listening port
[root@controller /]# netstat -pant | grep 9696
tcp        0      0 0.0.0.0:9696        0.0.0.0:*         LISTEN      1184/server.log
2: Check the service endpoint
[root@controller /]# curl http://controller:9696
{"versions": [{"status": "CURRENT", "id": "v2.0", "links": [{"href": "http://controller:9696/v2.0/", "rel": "self"}]}]}
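The version document returned by the endpoint can also be checked programmatically. The snippet parses the response body shown above, copied inline so it runs without the cloud:

```python
import json

# Response body from `curl http://controller:9696`, copied inline
body = ('{"versions": [{"status": "CURRENT", "id": "v2.0", "links": '
        '[{"href": "http://controller:9696/v2.0/", "rel": "self"}]}]}')

versions = json.loads(body)["versions"]
# A healthy neutron-server advertises v2.0 as the CURRENT API version.
current = [v["id"] for v in versions if v["status"] == "CURRENT"]
print(current)  # -> ['v2.0']
```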
3: Check the service status
[root@controller /]# systemctl status neutron-server
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2023-11-26 20:28:49 CST; 38min ago
 Main PID: 1184 (/usr/bin/python)
    Tasks: 5
   CGroup: /system.slice/neutron-server.service
           ├─1184 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dis...
           ├─2347 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dis...
           ├─2372 neutron-server: rpc worker (/usr/bin/python2 /usr/bin/neutron-server --config-file /u...
           ├─2382 neutron-server: rpc worker (/usr/bin/python2 /usr/bin/neutron-server --config-file /u...
           └─2386 neutron-server: periodic worker (/usr/bin/python2 /usr/bin/neutron-server --config-fi...

Nov 26 20:28:31 controller systemd[1]: Starting OpenStack Neutron Server...
Nov 26 20:28:42 controller neutron-server[1184]: /usr/lib/python2.7/site-packages/paste/deploy/loadw...ly.
Nov 26 20:28:42 controller neutron-server[1184]: return pkg_resources.EntryPoint.parse("x=" + s).loa...se)
Nov 26 20:28:49 controller systemd[1]: Started OpenStack Neutron Server.
Hint: Some lines were ellipsized, use -l to show in full.
5: Installing the Neutron service on the compute node
1: Install the Neutron packages
[root@compute mnt]# yum -y install openstack-neutron-linuxbridge
# The package creates a neutron user and group:
[root@compute mnt]# cat /etc/passwd | grep neutron
neutron:x:985:979:OpenStack Neutron Daemons:/var/lib/neutron:/sbin/nologin
[root@compute mnt]# cat /etc/group | grep neutron
neutron:x:979:
2: Edit the Neutron configuration files
1: Edit the Neutron configuration file
Back it up, then strip blank and comment lines:
[root@compute neutron]# cp /etc/neutron/neutron.conf ./neutron.bak
[root@compute neutron]# grep -Ev '^$|#' /etc/neutron/neutron.bak > ./neutron.conf
[DEFAULT]
transport_url = rabbit://rabbitmq:000000@controller:5672
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = project
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
2: Edit the bridge agent configuration file
Back it up and strip blank and comment lines:
[root@compute ml2]# cp linuxbridge_agent.ini ./linuxbridge_agent.bak
[root@compute ml2]# grep -Ev '^$|#' ./linuxbridge_agent.bak > ./linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
3: Edit the Nova configuration file
# Add these two lines in [DEFAULT]
vif_plugging_is_fatal = false
vif_plugging_timeout = 0

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = project
username = neutron
password = 000000
3: Start the Neutron services on the compute node
1: Restart the Nova service on the compute node
[root@compute /]# systemctl start openstack-nova-compute
2: Enable the compute node's bridge agent at boot
[root@compute /]# systemctl enable neutron-linuxbridge-agent
3: Start the bridge agent
[root@compute /]# systemctl start neutron-linuxbridge-agent
III: Verifying the Neutron service
1: List the network agents
[root@controller /]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 01fc3171-bfb0-40dc-83f7-4c0e619c8f14 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 481184fd-4180-4263-a8d0-d10eff91d61d | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| b6280286-36f7-49b6-9a21-02b48fa67e44 | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| f943e8fa-0801-4945-ae09-5bdd110b2166 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
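The openstack client also accepts -f json, which makes this aliveness check scriptable. The snippet below filters a reconstructed sample of `openstack network agent list -f json` output; the field names and boolean Alive values are assumptions about the JSON form, not captured from a real deployment.

```python
import json

# Reconstructed sample of `openstack network agent list -f json` output;
# field names and value types are assumptions for illustration.
sample = json.loads("""
[
  {"Agent Type": "Linux bridge agent", "Host": "controller", "Alive": true},
  {"Agent Type": "DHCP agent",         "Host": "controller", "Alive": true},
  {"Agent Type": "Linux bridge agent", "Host": "compute",    "Alive": true},
  {"Agent Type": "Metadata agent",     "Host": "controller", "Alive": true}
]
""")

# Any agent that is not alive points at a config or connectivity problem.
dead = [a for a in sample if not a["Alive"]]
print("dead agents:", dead or "none")
```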
2: Check with the neutron-status tool
[root@controller /]# neutron-status upgrade check
+---------------------------------------------------------------------+
| Upgrade Check Results                                               |
+---------------------------------------------------------------------+
| Check: Gateway external network                                     |
| Result: Success                                                     |
| Details: L3 agents can use multiple networks as external gateways.  |
+---------------------------------------------------------------------+
| Check: External network bridge                                      |
| Result: Success                                                     |
| Details: L3 agents are using integration bridge to connect external |
|          gateways                                                   |
+---------------------------------------------------------------------+
| Check: Worker counts configured                                     |
| Result: Warning                                                     |
| Details: The default number of workers has changed. Please see      |
|          release notes for the new values, but it is strongly       |
|          encouraged for deployers to manually set the values for    |
|          api_workers and rpc_workers.                               |
+---------------------------------------------------------------------+