
Linux NIC bridging and Docker NIC bridging issues

1. Linux NIC bridging

Creating a bridge interface on Linux and binding it to the real physical NIC is equivalent to putting a virtual switch inside the host: traffic sourced from the host's IP address enters the bridge interface br0 and is sent out through the physical NIC eth0.

 

To create the bridge interface br0, add an ifcfg-br0 file in the network-scripts configuration directory and bind the physical NIC eth0 to it, as follows:

[root@linux-node1 network-scripts]# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.74.20
NETMASK=255.255.255.0

[root@linux-node1 network-scripts]# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none                # must be none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
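
For reference, the same binding can also be built non-persistently at runtime. This is only a sketch of the equivalent manual steps (it is lost on reboot and will briefly interrupt connectivity on eth0):

brctl addbr br0                          # create the bridge
brctl addif br0 eth0                     # enslave the physical NIC
ip addr flush dev eth0                   # remove any address from the physical NIC
ip addr add 192.168.74.20/24 dev br0     # put the address on the bridge instead
ip link set br0 up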

After a reboot (or a network service restart), the two interfaces are bound together and the IP address now lives on br0:

[root@linux-node1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c29896a8f       no              eth0
                                                        veth0pl8880
docker0         8000.02426fb6b621       no
virbr0          8000.000000000000       yes
[root@linux-node1 ~]# ifconfig 
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.74.20  netmask 255.255.255.0  broadcast 192.168.74.255
        inet6 fe80::f816:37ff:fe69:8763  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:89:6a:8f  txqueuelen 0  (Ethernet)
        RX packets 26941  bytes 14148328 (13.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23408  bytes 2718697 (2.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:89:6a:8f  txqueuelen 1000  (Ethernet)
        RX packets 77954  bytes 97712837 (93.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23714  bytes 3017447 (2.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

However, there is still a problem with this configuration. Addresses in the same subnet (192.168.74.0/24) can be reached without trouble: no subnet boundary is crossed, no routing is involved, and no gateway is needed. Addresses in other subnets, however, cannot be reached, so a default gateway has to be added. To give all interfaces a single default gateway, without configuring one on every interface, add the following:

[root@linux-node1 ~]# cat /etc/sysconfig/network
# Created by anaconda

NETWORKING=yes
HOSTNAME=linux-node1
GATEWAY=192.168.74.2

After a reboot, this takes effect everywhere:

[root@linux-node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.74.2    0.0.0.0         UG    425    0        0 br0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.74.0    0.0.0.0         255.255.255.0   U     425    0        0 br0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
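
If you prefer not to reboot, the same default route can also be added on the spot. A minimal sketch using this setup's addresses (not persistent by itself):

ip route add default via 192.168.74.2 dev br0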

 

2. Docker containers bridged through br0 with the host address as gateway: packet forwarding

First create a container and give it a specific address and gateway:

docker create -it -h myhost --cap-add SYS_PTRACE --net=none --name myhost_192.168.74.30 --cpu-quota 1200000 --cpu-period=10000   841c208badec  "/sbin/init"

docker start myhost_192.168.74.30

pipework br0 -i eth0 myhost_192.168.74.30 192.168.74.30/24@192.168.74.20
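
The pipework line attaches the container to br0 with 192.168.74.30/24 and uses the host's br0 address as the container's gateway. Roughly speaking, it does something like the following with a veth pair; this is only an illustrative sketch, not pipework's exact implementation, and veth_host/veth_cont are made-up names:

ip link add veth_host type veth peer name veth_cont      # create a veth pair
brctl addif br0 veth_host                                 # host end joins the bridge
ip link set veth_host up
pid=$(docker inspect -f '{{.State.Pid}}' myhost_192.168.74.30)
ip link set veth_cont netns $pid                          # move the other end into the container
nsenter -t $pid -n ip link set veth_cont name eth0
nsenter -t $pid -n ip addr add 192.168.74.30/24 dev eth0
nsenter -t $pid -n ip link set eth0 up
nsenter -t $pid -n ip route add default via 192.168.74.20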

The container's network state looks like this:

[root@linux-node1 ~]# docker exec -it myhost_192.168.74.30 bash
[root@myhost /]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.74.30  netmask 255.255.255.0  broadcast 192.168.74.255
        inet6 fe80::88cf:29ff:fe44:5ec8  prefixlen 64  scopeid 0x20<link>
        ether 8a:cf:29:44:5e:c8  txqueuelen 1000  (Ethernet)
        RX packets 744  bytes 141673 (138.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 690 (690.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@myhost /]# route  -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.74.20   0.0.0.0         UG    0      0        0 eth0
192.168.74.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0

At this point, however, pinging an external address from inside the container fails. Check the forwarding settings on the host:

[root@linux-node1 ~]# sysctl -a|grep forwarding|grep br0.for
net.ipv4.conf.br0.forwarding = 0
net.ipv4.conf.virbr0.forwarding = 1
net.ipv6.conf.br0.forwarding = 0
net.ipv6.conf.virbr0.forwarding = 0

As you can see, forwarding on br0 is off by default; turning it on for br0 fixes the problem:

[root@linux-node1 ~]# sysctl -w net.ipv4.conf.br0.forwarding=1
net.ipv4.conf.br0.forwarding = 1
[root@linux-node1 ~]# 
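
A quick check from the container should now succeed (8.8.8.8 is just an arbitrary external address used for illustration):

docker exec -it myhost_192.168.74.30 ping -c 3 8.8.8.8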

  

 

3. Multiple NICs sharing the same gateway: packet forwarding

Now add a second NIC, eth1:

[root@linux-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=static
NAME=eth1
UUID=8f53b0c4-abe1-49ac-964a-7add4b5809d4
DEVICE=eth1
ONBOOT=yes
IPADDR=10.0.0.4

The routing table now looks like this:

[root@linux-node1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.74.2    0.0.0.0         UG    100    0        0 eth1
0.0.0.0         192.168.74.2    0.0.0.0         UG    425    0        0 br0
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 eth1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.74.0    0.0.0.0         255.255.255.0   U     425    0        0 br0
192.168.74.2    0.0.0.0         255.255.255.255 UH    100    0        0 eth1
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
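
To see which route the kernel would pick, you can ask it directly with ip route get; a sketch, where 8.8.8.8 is just an arbitrary outside address and the second form asks for the route when sending from br0's own address:

ip route get 8.8.8.8
ip route get 8.8.8.8 from 192.168.74.20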

However, pinging an external address with eth1 as the source address (e.g. with ping -I eth1) works, while pinging with br0 as the source does not. In other words, only one NIC can talk to the outside. The cause is rp_filter:

[root@linux-node1 ~]# sysctl -a|grep rp_filter               
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.br0.arp_filter = 0
net.ipv4.conf.br0.rp_filter = 1
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.docker0.arp_filter = 0
net.ipv4.conf.docker0.rp_filter = 1
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth1.arp_filter = 0
net.ipv4.conf.eth1.rp_filter = 1
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.rp_filter = 0
net.ipv4.conf.veth0pl4654.arp_filter = 0
net.ipv4.conf.veth0pl4654.rp_filter = 1
net.ipv4.conf.virbr0.arp_filter = 0
net.ipv4.conf.virbr0.rp_filter = 1
net.ipv4.conf.virbr0-nic.arp_filter = 0
net.ipv4.conf.virbr0-nic.rp_filter = 1
As shown above, rp_filter is 1, i.e. strict reverse-path filtering: the kernel drops incoming packets whose source address would not be routed back out the interface they arrived on, which breaks asymmetric setups like this one. Setting it to 0 on br0 solves the problem:

sysctl -w net.ipv4.conf.br0.rp_filter=0

To make these settings survive a reboot, write them into /etc/sysctl.conf.
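
A minimal sketch of the corresponding /etc/sysctl.conf entries for this setup:

# /etc/sysctl.conf
net.ipv4.conf.br0.forwarding = 1
net.ipv4.conf.br0.rp_filter = 0

Running sysctl -p reloads the file without a reboot.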

 

 

4. Checking whether a NIC has a network cable connected

[root@BASE-SERVER-1-10-10 network-scripts]# mii-tool em2
em2: negotiated 1000baseT-FD flow-control, link ok
[root@BASE-SERVER-1-10-10 network-scripts]# mii-tool em1
em1: negotiated 1000baseT-FD flow-control, link ok
[root@BASE-SERVER-1-10-10 network-scripts]# mii-tool br2
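
If mii-tool is unavailable or the NIC driver does not support MII, link state can also be read with ethtool; a sketch using the same interface name as above:

ethtool em2 | grep "Link detected"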

 

 

  

  

 
