OpenStack Neutron: network types supported by the openvswitch driver (local, flat, vlan, vxlan); first contact with OVS
OpenStack Neutron: network type diagrams (local, flat, vlan, vxlan); overview of the networking components (dnsmasq, floating IP, security groups, FWaaS, LBaaS)
The OVS lab environment is identical to the linux bridge one; the only difference is that mechanism_drivers is switched to openvswitch.

Enabling mechanism_drivers = openvswitch; underlying network changes after initialization
=================================================================================================
Edit the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini (on both nodes):
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep mechanism_drivers
mechanism_drivers = openvswitch
#the agents must be restarted after changing the configuration
--------------------------------------
stack@controller:~$ neutron agent-list    #neutron-openvswitch-agent is now running on both nodes
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 01aeedef-841d-4280-8638-60d7e08ba84b | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| 1d46ef5b-1531-4905-9bc5-0c4aba0e5e9c | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| a52eebd9-833a-4777-82da-9e7edb035b5a | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| a63d3de3-4126-49ac-af50-c1e71c49f65a | Open vSwitch agent | computer   |                   | :-)   | True           | neutron-openvswitch-agent |
| ffa63b50-0bb8-40cc-a3e0-ad9df1888d6a | Open vSwitch agent | controller |                   | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
---------------------------------------------------------------------------------------------------
Underlying network changes after initialization
Freshly initialized system: 4 openvswitch devices plus 1 openvswitch_slave are visible.
root@controller:~# ip -d a
7: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 6a:24:c2:2f:1e:54 brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
8: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 06:a6:5d:a0:41:4f brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ca:ea:42:32:29:4a brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
    inet 172.24.4.1/24 scope global br-ex
       valid_lft forever preferred_lft forever
    inet6 2001:db8::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::c8ea:42ff:fe32:294a/64 scope link
       valid_lft forever preferred_lft forever
10: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:49:6e:2d:23:4c brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
15: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether de:ea:7c:b8:57:e4 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 0 srcport 0 0 dstport 4789 nolearning ageing 300 udpcsum udp6zerocsumrx openvswitch_slave
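The note above says the agents have to be restarted after the ML2 change. A minimal sketch of what that looks like, assuming a packaged Ubuntu deployment with the standard service names (a devstack install runs the agents as devstack@ systemd units such as devstack@q-agt.service instead, so the names below may differ in your environment):

#on both nodes
systemctl restart neutron-openvswitch-agent
#on the controller node
systemctl restart neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent
#verify the agents came back alive
openstack network agent list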

The three bridges br-ex, br-int and br-tun; device naming conventions in an Open vSwitch environment
=====================================================================================================================
The three bridges br-ex, br-int and br-tun:
br-ex   the bridge that connects to the external network.
br-int  the integration bridge; the virtual NICs of all instances and the other virtual network devices attach to it.
br-tun  the tunnel bridge; tunnel-based VxLAN and GRE networks communicate through it.
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun                           #br-tun: tunnel bridge, used by tunnel-based VxLAN and GRE networks
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-c0a80148"
            Interface "vxlan-c0a80148"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="192.168.1.71", out_key=flow, remote_ip="192.168.1.72"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex                            #br-ex: bridge connecting to the external network
        Controller "tcp:127.0.0.1:6633"     #the compute node also has a Bridge br-ex here, which differs from the tutorial
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int                           #br-int: integration bridge, all instance vNICs and other virtual network devices attach here
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.8.4"
----------------------------------------------------------------------------------------------------------------------
In an Open vSwitch environment, a packet travelling from an instance to the physical NIC passes through roughly these device types:
tap interface             named tapXXXX
linux bridge              named qbrXXXX
veth pair                 named qvbXXXX, qvoXXXX
OVS integration bridge    named br-int
OVS patch ports           named int-br-ethX and phy-br-ethX (X is the interface index)
OVS provider bridge       named br-ethX (X is the interface index)
physical interface        named ethX (X is the interface index)
OVS tunnel bridge         named br-tun
The OVS provider bridge is used by flat and vlan networks; the OVS tunnel bridge is used by vxlan and gre networks.
Open vSwitch supports all five network types: local, flat, vlan, vxlan and gre. vxlan and gre are very similar.
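To map the naming rules above onto a live system, a few read-only inspection commands are enough. A minimal sketch (nothing is modified; the names shown follow the conventions listed above):

ovs-vsctl list-br                 #list all OVS bridges (br-int, br-ex, br-tun, br-ethX ...)
ovs-vsctl list-ports br-int       #ports attached to the integration bridge (qvoXXXX, patch-tun, int-br-ethX ...)
ovs-vsctl port-to-br patch-tun    #which bridge a given port belongs to
ip -d link show type veth         #the qvb/qvo veth pairs visible in the root namespace
brctl show                        #the qbrXXXX linux bridges sitting between each instance tap and br-int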

Why OVS needs a linux bridge between itself and the instance; the OVS network isolation mechanism; veth pair vs. patch port
======================================================================================================
Why OVS has to reach the instance through a linux bridge:
Open vSwitch currently cannot apply iptables rules on a veth pair device that is attached to it directly.
Without that, the Security Group feature cannot be implemented, so an extra linux bridge has to be inserted just to carry the iptables rules.
The consequence is a more complex topology: the path gains one linux bridge and one veth pair.
--------------------------------------------------------------------------------------------
Connectivity analysis between local networks (the OVS isolation mechanism):
Different local networks use different tags and are thereby isolated from each other; in effect this is VLAN technology.
Every Open vSwitch bridge can be regarded as a real switch with VLAN support, and the tag here is the VLAN ID.
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qvo59492399-57"
            tag: 1                          #local network 1 uses tag 1
            Interface "qvo59492399-57"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvocc644952-a8"
            tag: 1                          #local network 1 uses tag 1
            Interface "qvocc644952-a8"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap42a1d679-f0"
            tag: 1                          #local network 1 uses tag 1
            Interface "tap42a1d679-f0"
                type: internal
        Port "qvo478fc984-b8"
            tag: 3                          #local network 3 uses tag 3
            Interface "qvo478fc984-b8"
        Port "tap0a53a2c0-98"
            tag: 3                          #local network 3 uses tag 3
            Interface "tap0a53a2c0-98"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.8.4"
------------------------------------------------------------------------------
veth pair vs. patch port:
A patch port is a port type specific to OVS bridges and can only be used inside OVS. When two OVS bridges need to be connected, patch ports are preferred because they perform better.
1. Connecting two OVS bridges: prefer a patch port. A veth pair also works technically, but performs worse.
2. Connecting an OVS bridge and a linux bridge: only a veth pair can be used.
3. Connecting two linux bridges: only a veth pair can be used.
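The difference between the two connection types is easy to reproduce by hand. A minimal sketch; the bridge names test-br0/test-br1/test-lbr and the port names are made up purely for illustration and have nothing to do with the Neutron-managed bridges:

#patch port: exists only inside OVS, created as a pair of peered ports
ovs-vsctl add-br test-br0
ovs-vsctl add-br test-br1
ovs-vsctl add-port test-br0 patch0 -- set interface patch0 type=patch options:peer=patch1
ovs-vsctl add-port test-br1 patch1 -- set interface patch1 type=patch options:peer=patch0

#veth pair: a generic kernel device pair, the only way to connect an OVS bridge to a linux bridge
ip link add veth-ovs type veth peer name veth-lbr
ovs-vsctl add-port test-br0 veth-ovs
ip link add name test-lbr type bridge
ip link set veth-lbr master test-lbr
ip link set veth-ovs up
ip link set veth-lbr up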

OVS local network overview; enabling the local network type; underlying network changes after creating an OVS local network and instances
===============================================================================
Like a linux bridge local network, an OVS local network exists only inside a single host.
-------------------------------------------------------------------------------
Enabling the local network type
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep 'tenant_network_types\|type_drivers\|mechanism_drivers' -B2
[ml2]
tenant_network_types = local                    #default network type for tenant (non-admin) networks
mechanism_drivers = openvswitch                 #the driver is openvswitch
type_drivers = local,flat,vlan,gre,vxlan        #all loaded type drivers; because local is listed, the admin user can create local networks
---------------------------------------------------
Underlying changes after creating an OVS local network
#watch the OVS ports: a new port tap42a1d679-f0 appears
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap42a1d679-f0"               #new port, connected to the dhcp server
            tag: 1
            Interface "tap42a1d679-f0"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.8.4"
#the dhcp server namespace interface tap42a1d679-f0; this interface is attached to OVS directly
root@controller:~# ip netns exec qdhcp-d3e24469-5214-4eea-a41c-8d76c6d0735f ip -d a
16: tap42a1d679-f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:e4:82:1c brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
    inet 100.0.0.2/24 brd 100.0.0.255 scope global tap42a1d679-f0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fee4:821c/64 scope link
       valid_lft forever preferred_lft forever
#in the root namespace, ip -d a shows no new interfaces
----------------------------------------------------------------------------
#Underlying network changes after creating an instance:
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qvo59492399-57"               #new port, added after the instance was created
            tag: 1
            Interface "qvo59492399-57"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap42a1d679-f0"
            tag: 1
            Interface "tap42a1d679-f0"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.8.4"
root@controller:~# ip -d a
#new linux bridge
18: qbr59492399-57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a2:df:ed:93:b0:04 brd ff:ff:ff:ff:ff:ff promiscuity 0
    bridge forward_delay 0 hello_time 200 max_age 2000 ageing_time 0 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q
#new veth pair, OVS side
19: qvo59492399-57@qvb59492399-57: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether be:8f:a8:86:84:be brd ff:ff:ff:ff:ff:ff promiscuity 2
    veth
    openvswitch_slave
    inet6 fe80::bc8f:a8ff:fe86:84be/64 scope link
       valid_lft forever preferred_lft forever
#new veth pair, attached to the linux bridge and connecting it to OVS
20: qvb59492399-57@qvo59492399-57: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue master qbr59492399-57 state UP group default qlen 1000
    link/ether a2:df:ed:93:b0:04 brd ff:ff:ff:ff:ff:ff promiscuity 2
    veth
    bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on
    inet6 fe80::a0df:edff:fe93:b004/64 scope link
       valid_lft forever preferred_lft forever
#new tun/tap device -- this is the instance's virtual NIC
21: tap59492399-57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr59492399-57 state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:c0:73:86 brd ff:ff:ff:ff:ff:ff promiscuity 1
    tun
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on
root@controller:~# brctl show
bridge name         bridge id           STP enabled     interfaces
qbr59492399-57      8000.a2dfed93b004   no              qvb59492399-57
                                                        tap59492399-57
----------------------------------------------------
#Underlying network changes after adding a second instance:
root@controller:~# brctl show
bridge name         bridge id           STP enabled     interfaces
qbr59492399-57      8000.a2dfed93b004   no              qvb59492399-57
                                                        tap59492399-57
#the new bridge below shows that the second instance was scheduled onto the same host
qbrcc644952-a8      8000.86b75b775f9b   no              qvbcc644952-a8
                                                        tapcc644952-a8
#capture on the tap device
root@controller:~# tcpdump -nnvve -c4 -i tap59492399-57
tcpdump: listening on tap59492399-57, link-type EN10MB (Ethernet), capture size 262144 bytes
09:43:34.054344 fa:16:3e:c0:73:86 > fa:16:3e:72:ca:71, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 11466, offset 0, flags [DF], proto ICMP (1), length 84)
    100.0.0.82 > 100.0.0.73: ICMP echo request, id 34817, seq 190, length 64
09:43:34.054963 fa:16:3e:72:ca:71 > fa:16:3e:c0:73:86, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 4416, offset 0, flags [none], proto ICMP (1), length 84)
    100.0.0.73 > 100.0.0.82: ICMP echo reply, id 34817, seq 190, length 64
#trying to capture on the OVS bridge device fails: tcpdump reports that the device is down and nothing is captured
root@controller:~# tcpdump -nnvve -c4 -i br-int
tcpdump: br-int: That device is not up
root@controller:~# ip -d a show br-int
8: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 06:a6:5d:a0:41:4f brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
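For completeness, the same network and instances can be created from the CLI instead of Horizon. A minimal sketch, assuming the credentials are loaded and that the names net_local, subnet_local, cirros and m1.tiny are placeholders for whatever exists in this environment:

openstack network create net_local                      #tenant_network_types = local, so this becomes a local network
openstack subnet create --network net_local --subnet-range 100.0.0.0/24 subnet_local
openstack server create --image cirros --flavor m1.tiny --network net_local vm1
openstack server create --image cirros --flavor m1.tiny --network net_local vm2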

OVS flat network concept; enabling OVS flat networks; underlying network changes after creating an OVS flat network and instances
==========================================================================================================
A flat network is an untagged network. The host's physical NIC connects to the flat network through a bridge, and every flat network consumes one physical NIC.
----------------------------------------------------------------------------------------------------------
Configuring an OVS flat network in ML2
First create the OVS bridge br-ens37 and attach the physical NIC ens37 to it:
ovs-vsctl add-br br-ens37
ovs-vsctl add-port br-ens37 ens37
root@controller:~# ip -d a show ens37
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
    link/ether 00:0c:29:c9:1b:ad brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch_slave
    inet 10.0.0.71/24 brd 10.0.0.255 scope global ens37
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec9:1bad/64 scope link
       valid_lft forever preferred_lft forever
root@controller:~# ip -d a show br-ens37
40: br-ens37: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:c9:1b:ad brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
Enable flat networks in the ML2 configuration (change it on both the controller and the compute node):
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep -PB3 'tenant_network_types|flat_networks|bridge_mappings'
[ml2]
tenant_network_types = flat                 #network type created by ordinary (non-admin) users is flat
--
[ml2_type_flat]
flat_networks = default                     #defines one flat network with the label "default"
#flat_networks = flat1,flat2                #multiple flat network labels can be defined
--
[ovs]
datapath_type = system
bridge_mappings = default:br-ens37          #bridge_mappings maps the label "default" to the Open vSwitch bridge br-ens37
#Note: the linux bridge driver maps a label to a physical NIC; the ovs driver instead maps it to a custom OVS bridge (with the physical NIC attached to that OVS bridge)
#bridge_mappings = flat1:br-eth1,flat2:br-eth2    #multiple flat labels need one NIC each
Underlying changes after enabling the OVS flat network and restarting the agent
#br-ens37 was created manually, with the physical NIC ens37 attached to it
#OVS br-ens37 and OVS br-int are connected by a pair of patch ports
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge "br-ens37"                       #the manually created OVS bridge
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "ens37"                        #the manually attached physical NIC
            Interface "ens37"
        Port "br-ens37"                     #this port exists as soon as the bridge is created: it is the bridge's own internal (LOCAL) port, which every OVS bridge gets automatically so the host can have an interface on it
            Interface "br-ens37"
                type: internal
        Port "phy-br-ens37"                 #patch port, appears after the agent is restarted
            Interface "phy-br-ens37"
                type: patch
                options: {peer="int-br-ens37"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-ens37"                 #patch port, appears after the agent is restarted
            Interface "int-br-ens37"
                type: patch
                options: {peer="phy-br-ens37"}
    ovs_version: "2.8.4"
----------------------------------------------------------------------------------------------
Creating the OVS flat network; underlying network changes after adding instances:
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge "br-ens37"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "ens37"
            Interface "ens37"
        Port "br-ens37"
            Interface "br-ens37"
                type: internal
        Port "phy-br-ens37"
            Interface "phy-br-ens37"
                type: patch
                options: {peer="int-br-ens37"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo0d3ade5c-72"               #connects to an instance
            tag: 1
            Interface "qvo0d3ade5c-72"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap4ae9b3e5-5e"               #connects to the dhcp server
            tag: 1
            Interface "tap4ae9b3e5-5e"
                type: internal
        Port "qvo1663eac4-93"               #connects to an instance
            tag: 1
            Interface "qvo1663eac4-93"
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-ens37"
            Interface "int-br-ens37"
                type: patch
                options: {peer="phy-br-ens37"}
    ovs_version: "2.8.4"
root@controller:~# brctl show
bridge name         bridge id           STP enabled     interfaces
qbr0d3ade5c-72      8000.2ed47b793564   no              qvb0d3ade5c-72      #connects to OVS
                                                        tap0d3ade5c-72      #connects to the instance
qbr1663eac4-93      8000.f6df263bc719   no              qvb1663eac4-93
                                                        tap1663eac4-93
root@controller:~# ip netns exec qdhcp-e0957e1f-5b10-4088-a2be-e6d5ede6f30f ip -d a
41: tap4ae9b3e5-5e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:ad:c0:f2 brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
    inet 109.0.0.2/24 brd 109.0.0.255 scope global tap4ae9b3e5-5e
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fead:c0f2/64 scope link
       valid_lft forever preferred_lft forever
#there is still a problem with the two-node deployment: no instances get scheduled onto the compute node
#ARP requests can be captured on OVS br-ens37 (the OVS bridge device is down by default and has to be brought up manually before capturing)
root@controller:~# tcpdump -nnvvei br-ens37
tcpdump: listening on br-ens37, link-type EN10MB (Ethernet), capture size 262144 bytes
18:35:11.639174 fa:16:3e:20:5b:ec > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 109.0.0.136 tell 109.0.0.7, length 28
#the same ARP request is captured on the compute node's physical NIC
root@computer:~# tcpdump -nnvvei ens37
tcpdump: listening on ens37, link-type EN10MB (Ethernet), capture size 262144 bytes
18:19:10.493555 fa:16:3e:20:5b:ec > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Ethernet (len 6), IPv4 (len 4), Request who-has 109.0.0.136 tell 109.0.0.7, length 46
------------------------------------------------------------------------------------------
Related tutorial episodes:
132 - OVS local network connectivity analysis
133 - Configuring an OVS flat network in ML2
134 - Creating an OVS flat network
135 - Deploying an instance to the OVS flat network
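As a reference, the flat network above can also be created from the CLI. A minimal sketch, assuming admin credentials; flat_net, flat_subnet, cirros and m1.tiny are placeholders, and "default" must match the label used in flat_networks / bridge_mappings:

openstack network create --provider-network-type flat --provider-physical-network default flat_net
openstack subnet create --network flat_net --subnet-range 109.0.0.0/24 flat_subnet
openstack server create --image cirros --flavor m1.tiny --network flat_net vm_flat1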

OVS vlan network concept; enabling OVS vlan networks; underlying network changes after creating an OVS vlan network and 2 instances; how vlan networks are isolated from each other (a brief look at OVS flow rules)
==============================================================================================================
A vlan network is a tagged network.
With the Open vSwitch implementation, the virtual NICs of instances on different vlans all attach to the same bridge, br-int.
This is very different from linux bridge, where each vlan is attached to its own bridge.
--------------------------------------------------------------------------------------------------------------
Configuring an OVS vlan network in ML2
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep -P "tenant_network_types|network_vlan_ranges|bridge_mappings" -B2
[ml2]
tenant_network_types = vlan                     #network type created by ordinary (non-admin) users is vlan
--
[ml2_type_vlan]
network_vlan_ranges = default:3001:4000         #defines a vlan network with the label "default"; ordinary users get vlan ids from the range 3001-4000, while admin is not restricted to that range
--
[ovs]
datapath_type = system
bridge_mappings = default:br-ens37              #maps the label "default" to the Open vSwitch bridge br-ens37
#the agents must be restarted after the change
---------------------------------------------------------------------------------------------------------------
Underlying network changes after creating an OVS vlan network and 2 instances
#the OVS switch gained new ports that connect to the dhcp server and the instances
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge "br-ens37"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "ens37"
            Interface "ens37"
        Port "br-ens37"
            Interface "br-ens37"
                type: internal
        Port "phy-br-ens37"                 #patch port, connects to OVS br-int
            Interface "phy-br-ens37"
                type: patch
                options: {peer="int-br-ens37"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvobeeb2c54-aa"               #connects to an instance
            tag: 1
            Interface "qvobeeb2c54-aa"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo458013d8-e2"               #connects to an instance
            tag: 1
            Interface "qvo458013d8-e2"
        Port "tap18a66eac-38"               #connects to the dhcp server
            tag: 1
            Interface "tap18a66eac-38"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-ens37"                 #patch port, connects to OVS br-ens37
            Interface "int-br-ens37"
                type: patch
                options: {peer="phy-br-ens37"}
    ovs_version: "2.8.4"
root@controller:~# brctl show
bridge name         bridge id           STP enabled     interfaces
qbr458013d8-e2      8000.4a4a8e091fe5   no              qvb458013d8-e2
                                                        tap458013d8-e2
qbrbeeb2c54-aa      8000.4ad8b6731332   no              qvbbeeb2c54-aa
                                                        tapbeeb2c54-aa
root@controller:~# ip -d a s br-int
8: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 06:a6:5d:a0:41:4f brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
root@controller:~# ip -d a s br-ens37
40: br-ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:0c:29:c9:1b:ad brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
    inet6 fe80::20c:29ff:fec9:1bad/64 scope link
       valid_lft forever preferred_lft forever
#from an instance on the controller node, ping a non-existent IP, 66.0.0.188
#the ARP requests can be captured on the compute node's physical NIC, tagged vlan 66
root@computer:~# tcpdump -nnvvei ens37
tcpdump: listening on ens37, link-type EN10MB (Ethernet), capture size 262144 bytes
21:12:15.577547 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 64: vlan 66, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 46
21:12:16.580521 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 64: vlan 66, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 46
21:12:17.584242 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 64: vlan 66, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 46
#the ARP requests are also captured on the controller's OVS br-ens37, tagged vlan 66 (the OVS bridge device is down by default and must be brought up before capturing)
root@controller:~# tcpdump -nnvvei br-ens37
tcpdump: listening on br-ens37, link-type EN10MB (Ethernet), capture size 262144 bytes
21:13:55.749049 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 66, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 28
21:13:56.752976 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 66, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 28
21:13:57.756762 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 66, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 28
#on the controller's OVS br-int the same ARP requests carry vlan 1 (again, the device must be brought up before capturing)
root@controller:~# tcpdump -nnvvei br-int
tcpdump: listening on br-int, link-type EN10MB (Ethernet), capture size 262144 bytes
21:16:14.964956 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 28
21:16:15.963156 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 28
21:16:16.965451 fa:16:3e:75:d5:a4 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q (0x8100), length 46: vlan 1, p 0, ethertype ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 66.0.0.188 tell 66.0.0.157, length 28
Creating a second vlan network
The tag on an OVS port really is only meaningful inside that OVS instance: the same vlan network can have different tags on two different hosts.
In other words, for a vlan network the only VLAN ID you assign manually is the one used on br-ens37 and on the physical network; the tag on br-int is allocated by the system, and different hosts may allocate different internal tags for the same network.
---------------------------------------------------------------------------------------------------------------
How vlan networks are isolated from each other:
Unlike the Linux Bridge driver, the Open vSwitch driver does not isolate VLANs with VLAN sub-interfaces such as eth1.100 and eth1.101. All instances connect to the same bridge, br-int, and Open vSwitch uses flow rules to decide how traffic entering and leaving br-int is forwarded; that is what isolates the vlans from each other.
Concretely: as packets enter or leave br-int, flow rules can modify, add or strip the packet's VLAN tag. Neutron creates these flow rules and installs them on br-int, br-eth1 and the other Open vSwitch bridges.
#flow rule listing, filtered by port
#pitfall: flow rules are written in terms of port numbers. Without grep, ovs-ofctl helpfully renders them as port names; when grepping you have to match on the port number instead.
root@controller:~# ovs-ofctl dump-flows br-ens37 |grep -P 'in_port=2'
 cookie=0x16459bb2ba15e486, duration=24691.332s, table=0, n_packets=23713, n_bytes=997658, idle_age=0, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:66,NORMAL
 cookie=0x16459bb2ba15e486, duration=24977.795s, table=0, n_packets=113, n_bytes=11358, idle_age=1944, priority=2,in_port=2 actions=drop
-----------------------------------
root@controller:~# ovs-ofctl dump-flows br-int |grep -P 'in_port=15'
 cookie=0xdc4665db71852389, duration=25155.766s, table=0, n_packets=1583, n_bytes=314908, idle_age=107, priority=2,in_port=15 actions=drop
 cookie=0xdc4665db71852389, duration=24869.300s, table=0, n_packets=0, n_bytes=0, idle_age=24869, priority=3,in_port=15,dl_vlan=66 actions=mod_vlan_vid:1,resubmit(,60)
-----------------------------------
#full flow rule listing
#look up the port numbers first
root@controller:~# ovs-ofctl show br-ens37
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000000c29c91bad
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(ens37): addr:00:0c:29:c9:1b:ad
     config:     0
     state:      0
     current:    1GB-FD COPPER AUTO_NEG
     advertised: 10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
     supported:  10MB-HD 10MB-FD 100MB-HD 100MB-FD 1GB-FD COPPER AUTO_NEG
     speed: 1000 Mbps now, 1000 Mbps max
 2(phy-br-ens37): addr:2e:57:ef:64:09:95
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-ens37): addr:00:0c:29:c9:1b:ad
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
root@controller:~# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:000006a65da0414f
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(int-br-ex): addr:f6:6e:9e:e9:54:66
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(patch-tun): addr:c2:9e:a6:e3:5d:19
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 15(int-br-ens37): addr:a2:53:08:cb:ff:70
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 19(tap18a66eac-38): addr:00:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
 20(qvobeeb2c54-aa): addr:1a:7d:bf:15:4c:3d
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 21(qvo458013d8-e2): addr:ae:1d:d8:d4:e9:d2
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:06:a6:5d:a0:41:4f
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
Looking at the flow rules
#the rules live on both OVS bridges; both filter inbound traffic and both perform vlan translation
root@controller:~# ovs-ofctl dump-flows br-ens37
 cookie=0x16459bb2ba15e486, duration=3289.700s, table=0, n_packets=2345, n_bytes=100202, priority=4,in_port="phy-br-ens37",dl_vlan=1 actions=mod_vlan_vid:66,NORMAL    #packets received with vlan 1 are rewritten to vlan 66
 cookie=0x16459bb2ba15e486, duration=3576.163s, table=0, n_packets=92, n_bytes=9251, priority=2,in_port="phy-br-ens37" actions=drop
 cookie=0x16459bb2ba15e486, duration=3576.167s, table=0, n_packets=1196, n_bytes=227560, priority=0 actions=NORMAL
root@controller:~# ovs-ofctl dump-flows br-int
 cookie=0xdc4665db71852389, duration=3580.528s, table=0, n_packets=0, n_bytes=0, priority=65535,vlan_tci=0x0fff/0x1fff actions=drop
 cookie=0xdc4665db71852389, duration=3173.991s, table=0, n_packets=0, n_bytes=0, priority=10,icmp6,in_port="qvobeeb2c54-aa",icmp_type=136 actions=resubmit(,24)
 cookie=0xdc4665db71852389, duration=3109.972s, table=0, n_packets=0, n_bytes=0, priority=10,icmp6,in_port="qvo458013d8-e2",icmp_type=136 actions=resubmit(,24)
 cookie=0xdc4665db71852389, duration=3173.988s, table=0, n_packets=2241, n_bytes=94122, priority=10,arp,in_port="qvobeeb2c54-aa" actions=resubmit(,24)
 cookie=0xdc4665db71852389, duration=3109.968s, table=0, n_packets=75, n_bytes=3150, priority=10,arp,in_port="qvo458013d8-e2" actions=resubmit(,24)
 cookie=0xdc4665db71852389, duration=3580.515s, table=0, n_packets=336, n_bytes=63781, priority=2,in_port="int-br-ens37" actions=drop
 cookie=0xdc4665db71852389, duration=3173.994s, table=0, n_packets=49, n_bytes=5920, priority=9,in_port="qvobeeb2c54-aa" actions=resubmit(,25)
 cookie=0xdc4665db71852389, duration=3109.976s, table=0, n_packets=44, n_bytes=5496, priority=9,in_port="qvo458013d8-e2" actions=resubmit(,25)
 cookie=0xdc4665db71852389, duration=3294.049s, table=0, n_packets=0, n_bytes=0, priority=3,in_port="int-br-ens37",dl_vlan=66 actions=mod_vlan_vid:1,resubmit(,60)    #packets received with vlan 66 are rewritten to vlan 1
 cookie=0xdc4665db71852389, duration=3580.532s, table=0, n_packets=763, n_bytes=71993, priority=0 actions=resubmit(,60)
 cookie=0xdc4665db71852389, duration=3580.533s, table=23, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0xdc4665db71852389, duration=3173.992s, table=24, n_packets=0, n_bytes=0, priority=2,icmp6,in_port="qvobeeb2c54-aa",icmp_type=136,nd_target=fe80::f816:3eff:fe75:d5a4 actions=resubmit(,60)
 cookie=0xdc4665db71852389, duration=3109.973s, table=24, n_packets=0, n_bytes=0, priority=2,icmp6,in_port="qvo458013d8-e2",icmp_type=136,nd_target=fe80::f816:3eff:fe20:5915 actions=resubmit(,60)
 cookie=0xdc4665db71852389, duration=3173.990s, table=24, n_packets=2241, n_bytes=94122, priority=2,arp,in_port="qvobeeb2c54-aa",arp_spa=66.0.0.157 actions=resubmit(,25)
 cookie=0xdc4665db71852389, duration=3109.970s, table=24, n_packets=75, n_bytes=3150, priority=2,arp,in_port="qvo458013d8-e2",arp_spa=66.0.0.180 actions=resubmit(,25)
 cookie=0xdc4665db71852389, duration=3580.529s, table=24, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0xdc4665db71852389, duration=3173.997s, table=25, n_packets=2262, n_bytes=96488, priority=2,in_port="qvobeeb2c54-aa",dl_src=fa:16:3e:75:d5:a4 actions=resubmit(,60)
 cookie=0xdc4665db71852389, duration=3109.979s, table=25, n_packets=96, n_bytes=5516, priority=2,in_port="qvo458013d8-e2",dl_src=fa:16:3e:20:59:15 actions=resubmit(,60)
 cookie=0xdc4665db71852389, duration=3580.530s, table=60, n_packets=73641, n_bytes=6448269, priority=3 actions=NORMAL
 cookie=0xdc4665db71852389, duration=3580.527s, table=62, n_packets=0, n_bytes=0, priority=3 actions=NORMAL
#excerpt of ovs-vsctl show, just the two patch ports that tie the bridges together:
    Bridge "br-ens37"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "phy-br-ens37"                 #patch port, connects to OVS br-int
            Interface "phy-br-ens37"
                type: patch
                options: {peer="int-br-ens37"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "int-br-ens37"                 #patch port, connects to OVS br-ens37
            Interface "int-br-ens37"
                type: patch
                options: {peer="phy-br-ens37"}
------------------------------------------------------------
The most important attributes of a flow rule:
priority   the rule's priority; larger values mean higher priority. Open vSwitch applies rules from highest to lowest priority.
in_port    the inbound port number; every port has an internal number in Open vSwitch, which can be looked up with ovs-ofctl show <bridge>.
dl_vlan    the packet's original VLAN ID.
actions    what to do with the packet.
Example (flow rules on br-eth1):
priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:100,NORMAL
#a packet arriving on br-eth1 port phy-br-eth1 (in_port=2) with VLAN ID 1 (dl_vlan=1) has its VLAN ID rewritten to 100 (actions=mod_vlan_vid:100)
priority=4,in_port=2,dl_vlan=5 actions=mod_vlan_vid:101,NORMAL
Because the VLAN IDs used on br-int are not the same as the VLAN IDs on the physical network, br-eth1 has to translate the VLAN when it receives packets from br-int. Neutron maintains the mapping between the two VLAN ID spaces and installs the translation as flow rules.
Example 2 (flow rules on br-int):
priority=3,in_port=1,dl_vlan=100 actions=mod_vlan_vid:1,NORMAL    #simply the reverse direction of the rule above
priority=3,in_port=1,dl_vlan=101 actions=mod_vlan_vid:5,NORMAL
So the tag is indeed a VLAN ID; it is just that the VLAN numbering on br-int differs from the numbering on br-eth1 and the physical network. The reason for this design is that the br-int tag is a purely host-local identifier: it gives br-int one uniform tag space for every network type (local, flat, vlan, vxlan), independent of the segmentation ID the tenant network actually uses, and the translation to the real VLAN ID (or tunnel ID) happens only at the edge bridges such as br-eth1 and br-tun.
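To make the rule syntax concrete, this is roughly what inspecting and writing such a translation rule looks like with ovs-ofctl. A minimal sketch only: on a live deployment the ovs agent owns these tables, so adding flows by hand is for experimentation, not something Neutron requires you to do:

#read-only inspection
ovs-ofctl show br-ens37
ovs-ofctl dump-flows br-ens37 table=0
#what a Neutron-style translation pair would look like if written by hand (port numbers 2 and 15 taken from the output above)
ovs-ofctl add-flow br-ens37 "priority=4,in_port=2,dl_vlan=1,actions=mod_vlan_vid:66,normal"
ovs-ofctl add-flow br-int   "priority=3,in_port=15,dl_vlan=66,actions=mod_vlan_vid:1,normal"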

Enabling OVS VxLAN; underlying network changes after creating a vxlan network and instances
==============================================================================================================
Configuring OVS VxLAN in ML2
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep -P "tenant_network_types|mechanism_drivers|vni_ranges|tunnel_types|l2_population|local_ip" -B4
[ml2]
tenant_network_types = vxlan                    #network type created by ordinary (non-admin) users is vxlan
mechanism_drivers = openvswitch,l2population    #enable the l2population mechanism driver
--
[ml2_type_vxlan]
vni_ranges = 1:1000                             #range of VNIs available to ordinary users
--
[agent]
tunnel_types = vxlan                            #enable vxlan
l2_population = True                            #enable l2population
--
[ovs]
datapath_type = system
bridge_mappings = default:br-ens37,external:br-external
tunnel_bridge = br-tun                          #the bridge used for vxlan tunnels is br-tun
local_ip = 10.0.0.71                            #the VTEP address
#the agents must be restarted after the change
#after restarting the agent, a new pair of patch ports appears between OVS br-tun and OVS br-int
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int                          #new patch port pair
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun                          #new patch port pair
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.8.4"
------------------------------------------------------------------------------------------------------------
Underlying network changes after creating a vxlan network and instances
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int                          #patch port, connects to br-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000048"                   #the vxlan tunnel endpoint, with the local (controller) and remote (compute) VTEP IPs
            Interface "vxlan-0a000048"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.0.71", out_key=flow, remote_ip="10.0.0.72"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qvo6b5e4978-e2"                   #vxlan 77 VM
            tag: 1
            Interface "qvo6b5e4978-e2"
        Port "qvo1e4584bd-fd"                   #vxlan 77 VM
            tag: 1
            Interface "qvo1e4584bd-fd"
        Port "tap474ece45-69"                   #vxlan 77 DHCP server
            tag: 1
            Interface "tap474ece45-69"
                type: internal
        Port patch-tun                          #patch port, connects to br-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap0c4e5ec8-46"                   #flat external network DHCP server
            tag: 3
            Interface "tap0c4e5ec8-46"
                type: internal
        Port int-br-external                    #patch port, connects to br-external
            Interface int-br-external
                type: patch
                options: {peer=phy-br-external}
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-ens37"                     #patch port, connects to br-ens37
            Interface "int-br-ens37"
                type: patch
                options: {peer="phy-br-ens37"}
    ovs_version: "2.8.4"
#ping between the two instances succeeds
#packets can be captured on the veth pair port attached to OVS br-int
root@controller:~# tcpdump -nnvvi qvo6b5e4978-e2
tcpdump: listening on qvo6b5e4978-e2, link-type EN10MB (Ethernet), capture size 262144 bytes
21:52:52.746316 IP (tos 0x0, ttl 64, id 33829, offset 0, flags [DF], proto ICMP (1), length 84)
    77.0.0.28 > 77.0.0.104: ICMP echo request, id 33537, seq 276, length 64
21:52:52.746578 IP (tos 0x0, ttl 64, id 55836, offset 0, flags [none], proto ICMP (1), length 84)
    77.0.0.104 > 77.0.0.28: ICMP echo reply, id 33537, seq 276, length 64
#yet on the OVS br-int device itself no packets can be captured -- see the note below
#the qvo6b5e4978-e2 port number is 31
root@controller:~# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:000006a65da0414f
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 30(qvo1e4584bd-fd): addr:f2:5c:76:e0:92:3d
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 31(qvo6b5e4978-e2): addr:96:db:dc:a4:2d:82
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
#filter the flow rules by port number
root@controller:~# ovs-ofctl dump-flows br-int |grep "in_port=31\|in_port=30"
 cookie=0xb24f44812d62c944, duration=1052.531s, table=0, n_packets=109, n_bytes=4578, idle_age=20, priority=10,arp,in_port=31 actions=resubmit(,24)
 cookie=0xb24f44812d62c944, duration=1052.541s, table=0, n_packets=556, n_bytes=55675, idle_age=0, priority=9,in_port=31 actions=resubmit(,25)
 cookie=0xb24f44812d62c944, duration=1052.534s, table=24, n_packets=109, n_bytes=4578, idle_age=20, priority=2,arp,in_port=31,arp_spa=77.0.0.104 actions=resubmit(,25)
 cookie=0xb24f44812d62c944, duration=1052.546s, table=25, n_packets=644, n_bytes=57316, idle_age=0, priority=2,in_port=31,dl_src=fa:16:3e:1b:ae:85 actions=resubmit(,60)
 cookie=0xb24f44812d62c944, duration=1412.721s, table=0, n_packets=120, n_bytes=5040, idle_age=10, priority=10,arp,in_port=30 actions=resubmit(,24)
 cookie=0xb24f44812d62c944, duration=1412.729s, table=0, n_packets=745, n_bytes=74119, idle_age=1, priority=9,in_port=30 actions=resubmit(,25)
 cookie=0xb24f44812d62c944, duration=1412.723s, table=24, n_packets=120, n_bytes=5040, idle_age=10, priority=2,arp,in_port=30,arp_spa=77.0.0.28 actions=resubmit(,25)
 cookie=0xb24f44812d62c944, duration=1412.733s, table=25, n_packets=838, n_bytes=75712, idle_age=1, priority=2,in_port=30,dl_src=fa:16:3e:cc:93:a8 actions=resubmit(,60)
#Summary: packets can be captured on the linux virtual devices (tap, qvb, qvo) but not on the OVS bridge devices themselves.
#This is expected rather than an environment problem: tcpdump on br-int only sees traffic passing through the bridge's internal (LOCAL) port, i.e. traffic addressed to or from the host on that bridge. Traffic switched between the other ports is forwarded by the OVS datapath in the kernel and never reaches the br-int netdev, so there is nothing there to capture.
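Two ways to look inside the datapath instead of capturing on the bridge device. A minimal sketch; the in_port number and MAC addresses are taken from the outputs above, and ovs-tcpdump is only available if the helper that ships with newer OVS packages is installed:

#trace how the bridge would forward a synthetic packet, without capturing anything
ovs-appctl ofproto/trace br-int in_port=31,dl_src=fa:16:3e:1b:ae:85,dl_dst=fa:16:3e:cc:93:a8
#or mirror a specific OVS port to a temporary interface and run tcpdump there
ovs-tcpdump -i qvo6b5e4978-e2 -nnvv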
Related tutorial episodes (kept as a reference; several open questions remain to be studied):
146 - Configuring OVS VxLAN in ML2
147 - Creating a vxlan network and deploying instances
148 - Analysis of the OVS vxlan underlying structure
149 - OVS VxLAN flow analysis
https://mp.weixin.qq.com/s?__biz=MzIwMTM5MjUwMg==&mid=2653587444&idx=1&sn=eaeac991ef097742cc6455430532a5ea&chksm=8d308fedba4706fbd4a1ff91eeea4e04a0a3a559bd7f71fb2da1a73b70103ff0ac648191817c&scene=21#wechat_redirect
----------------------------------------------------------
All traffic through OVS is steered by flow rules.
The rules on br-int look numerous, but the logic is simple: br-int acts as a plain L2 switch, and its key rule is this one:
 cookie=0xaaa0e760a7848ec3, duration=52798.625s, table=0, n_packets=143, n_bytes=14594, idle_age=9415, priority=0 actions=NORMAL    #forward based on vlan and mac
br-tun flow tables (from the referenced tutorial):
table 0
 cookie=0xaaa0e760a7848ec3, duration=76707.867s, table=0, n_packets=70, n_bytes=6600, idle_age=33324, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
    #packets arriving on port 1 (patch-int) are handed to table 2: actions=resubmit(,2)
 cookie=0xaaa0e760a7848ec3, duration=76543.287s, table=0, n_packets=56, n_bytes=4948, idle_age=33324, hard_age=65534, priority=1,in_port=2 actions=resubmit(,4)
    #packets arriving on port 2 (vxlan-a642100b) are handed to table 4: actions=resubmit(,4)
 cookie=0xaaa0e760a7848ec3, duration=76707.867s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
table 4
 cookie=0xaaa0e760a7848ec3, duration=76647.039s, table=4, n_packets=56, n_bytes=4948, idle_age=33324, hard_age=65534, priority=1,tun_id=0x64 actions=mod_vlan_vid:1,resubmit(,10)
    #if the packet's VXLAN tunnel ID is 100 (tun_id=0x64), add internal VLAN ID 1 (tag=1) and hand it to table 10 for learning
table 10
 cookie=0xaaa0e760a7848ec3, duration=76707.865s, table=10, n_packets=56, n_bytes=4948, idle_age=33324, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,cookie=0xaaa0e760a7848ec3,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
    #learn from packets arriving over the tunnel, add forwarding rules for the return traffic to table 20, then send the packet out port 1 (patch-int) to br-int
table 2
 cookie=0xaaa0e760a7848ec3, duration=76707.866s, table=2, n_packets=28, n_bytes=3180, idle_age=33324, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
    #unicast packets coming from br-int go to table 20: resubmit(,20)
 cookie=0xaaa0e760a7848ec3, duration=76707.866s, table=2, n_packets=42, n_bytes=3420, idle_age=33379, hard_age=65534, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
    #multicast and broadcast packets coming from br-int go to table 22: resubmit(,22)
table 20
 cookie=0xaaa0e760a7848ec3, duration=76543.287s, table=20, n_packets=28, n_bytes=3180, idle_age=33324, hard_age=65534, priority=2,dl_vlan=1,dl_dst=fa:16:3e:fd:8a:ed actions=strip_vlan,set_tunnel:0x64,output:2
    #the first rule is what table 10 has learned: packets with internal VLAN 1 (tag=1) destined for MAC fa:16:3e:fd:8a:ed (virros-vm2) have the VLAN stripped, VXLAN tunnel ID 100 (0x64) added, and leave via port 2 (tunnel port vxlan-a642100b)
 cookie=0xaaa0e760a7848ec3, duration=76707.865s, table=20, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
    #packets for which nothing has been learned are handed to table 22
table 22
 cookie=0xaaa0e760a7848ec3, duration=76543.282s, table=22, n_packets=2, n_bytes=84, idle_age=33379, hard_age=65534, dl_vlan=1 actions=strip_vlan,set_tunnel:0x64,output:2
    #packets with internal VLAN 1 (tag=1) have the VLAN stripped, VXLAN tunnel ID 100 (0x64) added, and leave via port 2 (tunnel port vxlan-a642100b)
 cookie=0xaaa0e760a7848ec3, duration=76707.82s, table=22, n_packets=40, n_bytes=3336, idle_age=65534, hard_age=65534, priority=0 actions=drop
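To walk the same pipeline on your own environment, the per-table dumps can be pulled one at a time. A minimal sketch; the table numbers are the ones from the pipeline described above and may differ between agent versions:

for t in 0 2 4 10 20 22; do
    echo "=== br-tun table $t ==="
    ovs-ofctl dump-flows br-tun table=$t
done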

How a Neutron router works; enabling the virtual router; underlying network changes after connecting 2 tenant networks
=================================================================================================================
How a Neutron router works
As with the linux bridge implementation, the virtual router runs in its own namespace.
Connecting 2 tenant networks with a virtual router in the OpenStack dashboard:
1. Create the virtual router.
2. On the router's interfaces page, add an interface on each of the 2 tenant networks.
----------------------------------------------------------------------------------------------------------------
Enabling the virtual router
The l3 agent must be configured correctly; its configuration file is /etc/neutron/l3_agent.ini, on the controller or network node.
root@controller:~# cat /etc/neutron/l3_agent.ini |grep interface_driver
interface_driver = openvswitch                                             #Q release, ovs
#interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver      #N release, ovs
#interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver   #N release, linux bridge
#external_network_bridge = br-ex    #external_network_bridge names the bridge that connects to the external network, br-ex by default
----------------------------------------------------------------------------------------------------------------
Underlying network changes after the 2 tenant networks are connected:
#OVS br-int gained 2 new ports, connected to the virtual router
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvobeeb2c54-aa"               #vlan 66 instance
            tag: 1
            Interface "qvobeeb2c54-aa"
        Port "qvoa143a2cc-6f"               #vlan 88 instance
            tag: 2
            Interface "qvoa143a2cc-6f"
        Port "tapf3d967a1-98"               #vlan 88 dhcp server
            tag: 2
            Interface "tapf3d967a1-98"
                type: internal
        Port "qr-54e6fd7e-1b"               #new port, connects to the virtual router
            tag: 1
            Interface "qr-54e6fd7e-1b"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo458013d8-e2"               #vlan 66 instance
            tag: 1
            Interface "qvo458013d8-e2"
        Port "tap18a66eac-38"               #vlan 66 dhcp server
            tag: 1
            Interface "tap18a66eac-38"
                type: internal
        Port "qr-7c0e6cea-bc"               #new port, connects to the virtual router
            tag: 2
            Interface "qr-7c0e6cea-bc"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-ens37"                 #patch port to br-ens37 (the vlan networks' exit towards the physical NIC)
            Interface "int-br-ens37"
                type: patch
                options: {peer="phy-br-ens37"}
    ovs_version: "2.8.4"
root@controller:~# brctl show
bridge name         bridge id           STP enabled     interfaces
qbr458013d8-e2      8000.4a4a8e091fe5   no              qvb458013d8-e2
                                                        tap458013d8-e2
qbra143a2cc-6f      8000.4a74084d9049   no              qvba143a2cc-6f
                                                        tapa143a2cc-6f
qbrbeeb2c54-aa      8000.4ad8b6731332   no              qvbbeeb2c54-aa
                                                        tapbeeb2c54-aa
#the virtual router's two NICs
root@controller:~# ip netns exec qrouter-e7f3fad7-0698-49f4-a53f-c47941805c7b ip -d a
64: qr-54e6fd7e-1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:6d:fb:ea brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
    inet 66.0.0.1/24 brd 66.0.0.255 scope global qr-54e6fd7e-1b
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6d:fbea/64 scope link
       valid_lft forever preferred_lft forever
65: qr-7c0e6cea-bc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:f4:ae:52 brd ff:ff:ff:ff:ff:ff promiscuity 1
    openvswitch
    inet 88.0.0.1/24 brd 88.0.0.255 scope global qr-7c0e6cea-bc
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef4:ae52/64 scope link
       valid_lft forever preferred_lft forever
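The same two dashboard steps can be done from the CLI. A minimal sketch, assuming the router name router1 and the subnet names subnet_vlan66 / subnet_vlan88 are placeholders for the tenant subnets in this environment:

openstack router create router1
openstack router add subnet router1 subnet_vlan66
openstack router add subnet router1 subnet_vlan88
ip netns list | grep qrouter            #the l3 agent creates a qrouter-<router-id> namespace for it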

External network (access to the public network); enabling the external network; underlying changes after creating the external network and connecting the tenant networks to the physical network; how external access works (iptables)
========================================================================================================================================================
In the Open vSwitch driver environment, floating IPs are implemented exactly as with the Linux Bridge driver: iptables NAT rules are configured on the router's external-network interface, the one that provides the gateway.
-----------------------------------------------------------------------------------------------------------
Prepare the OVS external bridge in advance and attach the physical NIC to it:
root@controller:~# ovs-vsctl add-br br-external
root@controller:~# ovs-vsctl add-port br-external ens38
-----------------------------------------------
Enabling a flat external network
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep -P "flat_network|bridge_mappings" -B2
[ml2_type_flat]
flat_networks = default,external
--
[ovs]
datapath_type = system
bridge_mappings = default:br-ens37,external:br-external
-----------------------------------------------
Enabling a vlan external network
root@controller:~# cat /etc/neutron/plugins/ml2/ml2_conf.ini |grep -P "network_vlan_ranges|bridge_mappings" -B2
[ml2_type_vlan]
network_vlan_ranges = default:3001:4000,external
--
[ovs]
datapath_type = system
bridge_mappings = default:br-ens37,external:br-external
------------------------------------------------------------------------------------------------------
Giving instances access to the public network through the flat external network:
0. Enable the flat external network (as above).
1. Create the flat network and tick "External Network".
2. On the virtual router, set the gateway network and tick "SNAT".
#Underlying changes after enabling the flat external network: br-int and br-external are connected by a pair of patch ports
root@controller:~# ovs-vsctl show
9b50175f-152e-443a-a8dd-77309d2af748
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-external
            Interface int-br-external
                type: patch
                options: {peer=phy-br-external}
    Bridge br-external
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-external
            Interface phy-br-external
                type: patch
                options: {peer=int-br-external}
        Port "ens38"
            Interface "ens38"
        Port br-external
            Interface br-external
                type: internal
    ovs_version: "2.8.4"
---------------------------------------------------
Instances accessing the public network
#the virtual router's NICs
root@controller:~# ip netns exec qrouter-e7f3fad7-0698-49f4-a53f-c47941805c7b ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
64: qr-54e6fd7e-1b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:6d:fb:ea brd ff:ff:ff:ff:ff:ff
    inet 66.0.0.1/24 brd 66.0.0.255 scope global qr-54e6fd7e-1b
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6d:fbea/64 scope link
       valid_lft forever preferred_lft forever
65: qr-7c0e6cea-bc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:f4:ae:52 brd ff:ff:ff:ff:ff:ff
    inet 88.0.0.1/24 brd 88.0.0.255 scope global qr-7c0e6cea-bc
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef4:ae52/64 scope link
       valid_lft forever preferred_lft forever
68: qg-6afd63fc-71: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:3c:7f:d2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.103/24 brd 192.168.1.255 scope global qg-6afd63fc-71
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe3c:7fd2/64 scope link
       valid_lft forever preferred_lft forever
#the virtual router's routing table
root@controller:~# ip netns exec qrouter-e7f3fad7-0698-49f4-a53f-c47941805c7b ip r s
default via 192.168.1.1 dev qg-6afd63fc-71
66.0.0.0/24 dev qr-54e6fd7e-1b proto kernel scope link src 66.0.0.1
88.0.0.0/24 dev qr-7c0e6cea-bc proto kernel scope link src 88.0.0.1
192.168.1.0/24 dev qg-6afd63fc-71 proto kernel scope link src 192.168.1.103
#the virtual router's iptables
root@controller:~# ip netns exec qrouter-e7f3fad7-0698-49f4-a53f-c47941805c7b iptables-save
# Generated by iptables-save v1.6.0 on Thu Jan  6 15:46:26 2022
*raw
:PREROUTING ACCEPT [452:124155]
:OUTPUT ACCEPT [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
COMMIT
# Completed on Thu Jan  6 15:46:26 2022
# Generated by iptables-save v1.6.0 on Thu Jan  6 15:46:26 2022
*nat
:PREROUTING ACCEPT [156:42088]
:INPUT ACCEPT [4:935]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-POSTROUTING - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
:neutron-l3-agent-float-snat - [0:0]
:neutron-l3-agent-snat - [0:0]
:neutron-postrouting-bottom - [0:0]
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-POSTROUTING ! -i qg-6afd63fc-71 ! -o qg-6afd63fc-71 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-6afd63fc-71 -j SNAT --to-source 192.168.1.103
-A neutron-l3-agent-snat -m mark ! --mark 0x2/0xffff -m conntrack --ctstate DNAT -j SNAT --to-source 192.168.1.103
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
COMMIT
# Completed on Thu Jan  6 15:46:26 2022
# Generated by iptables-save v1.6.0 on Thu Jan  6 15:46:26 2022
*mangle
:PREROUTING ACCEPT [446:123651]
:INPUT ACCEPT [288:82019]
:FORWARD ACCEPT [8:622]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [8:622]
:neutron-l3-agent-FORWARD - [0:0]
:neutron-l3-agent-INPUT - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-POSTROUTING - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
:neutron-l3-agent-float-snat - [0:0]
:neutron-l3-agent-floatingip - [0:0]
:neutron-l3-agent-mark - [0:0]
:neutron-l3-agent-scope - [0:0]
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-POSTROUTING -o qg-6afd63fc-71 -m connmark --mark 0x0/0xffff0000 -j CONNMARK --save-mark --nfmask 0xffff0000 --ctmask 0xffff0000
-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-mark
-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-scope
-A neutron-l3-agent-PREROUTING -m connmark ! --mark 0x0/0xffff0000 -j CONNMARK --restore-mark --nfmask 0xffff0000 --ctmask 0xffff0000
-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-floatingip
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x1/0xffff
-A neutron-l3-agent-float-snat -m connmark --mark 0x0/0xffff0000 -j CONNMARK --save-mark --nfmask 0xffff0000 --ctmask 0xffff0000
-A neutron-l3-agent-mark -i qg-6afd63fc-71 -j MARK --set-xmark 0x2/0xffff
-A neutron-l3-agent-scope -i qr-54e6fd7e-1b -j MARK --set-xmark 0x4000000/0xffff0000
-A neutron-l3-agent-scope -i qg-6afd63fc-71 -j MARK --set-xmark 0x4000000/0xffff0000
-A neutron-l3-agent-scope -i qr-7c0e6cea-bc -j MARK --set-xmark 0x4000000/0xffff0000
COMMIT
# Completed on Thu Jan  6 15:46:26 2022
# Generated by iptables-save v1.6.0 on Thu Jan  6 15:46:26 2022
*filter
:INPUT ACCEPT [288:82019]
:FORWARD ACCEPT [8:622]
:OUTPUT ACCEPT [0:0]
:neutron-filter-top - [0:0]
:neutron-l3-agent-FORWARD - [0:0]
:neutron-l3-agent-INPUT - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-local - [0:0]
:neutron-l3-agent-scope - [0:0]
-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-filter-top
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-filter-top
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A neutron-filter-top -j neutron-l3-agent-local
-A neutron-l3-agent-FORWARD -j neutron-l3-agent-scope
-A neutron-l3-agent-INPUT -m mark --mark 0x1/0xffff -j ACCEPT
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-scope -o qr-54e6fd7e-1b -m mark ! --mark 0x4000000/0xffff0000 -j DROP
-A neutron-l3-agent-scope -o qr-7c0e6cea-bc -m mark ! --mark 0x4000000/0xffff0000 -j DROP
COMMIT
# Completed on Thu Jan  6 15:46:26 2022
#ARP requests from the router's external interface (192.168.1.103, the address used as the SNAT source) reach the physical network, which is consistent with instance traffic leaving via qg-6afd63fc-71
root@controller:~# tcpdump -nnvvi ens38 host 192.168.1.1
tcpdump: listening on ens38, link-type EN10MB (Ethernet), capture size 262144 bytes
15:55:24.447992 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.1 tell 192.168.1.103, length 28
15:55:25.475156 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.1 tell 192.168.1.103, length 28
15:55:27.336911 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 192.168.1.1 tell 192.168.1.103, length 28
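The dashboard steps (external network, gateway, SNAT, floating IP) have CLI equivalents as well. A minimal sketch, assuming admin credentials; ext_net, ext_subnet, router1 and vm1 are placeholders, "external" must match the label in flat_networks / bridge_mappings, and the floating IP address used in the last command is whatever the create command allocates:

openstack network create --external --provider-network-type flat --provider-physical-network external ext_net
openstack subnet create --network ext_net --subnet-range 192.168.1.0/24 --gateway 192.168.1.1 --no-dhcp ext_subnet
openstack router set --external-gateway ext_net router1
openstack floating ip create ext_net
openstack server add floating ip vm1 <allocated-floating-ip>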