CentOS Dual-NIC Bonding for Load Balancing

Bonding two NICs mainly addresses NIC failure (failover) and load balancing.

1. Add a second NIC to the VM, then log in and check whether it is recognized.

Check the NICs with ip addr and nmcli:
[root@bigdata-senior01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ea:31:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.10/24 brd 192.168.31.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feea:3147/64 scope link 
       valid_lft forever preferred_lft forever
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ea:31:51 brd ff:ff:ff:ff:ff:ff

The newly added interface is ens37.
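A cross-check with nmcli, as mentioned above, could look like this (a sketch, commands only; the device names are the ones from the ip addr output):

# List devices and connections as NetworkManager sees them
nmcli device status
nmcli connection show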

2. Common bonding modes

Three modes are commonly used:

mode=0: load-balancing mode with automatic failover, but it requires switch support and configuration: the two switch ports must be aggregated. In this mode every NIC bound into the bond has its MAC address changed to the same value, and once the switch aggregates the ports, those ports are likewise bound to a single MAC address.
mode=1: active-backup mode; if one link goes down, the other takes over automatically.
mode=6: load-balancing mode with automatic failover that requires no switch support; the bonded NICs keep different MAC addresses.
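For reference, these numbers map to the bonding driver's named modes: mode=0 is balance-rr, mode=1 is active-backup, and mode=6 is balance-alb, which the status output later in this post reports as "adaptive load balancing". You can see how the installed driver describes the mode parameter with modinfo:

# Show the bonding driver's description of the mode parameter
modinfo bonding | grep -i "mode"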

3. Example on CentOS 7.x (CentOS 6 uses a different configuration)

#Check whether the bonding module is loaded
lsmod | grep bonding
bonding 136705 0

#If it is not loaded yet, load it manually first
modprobe bonding
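modprobe only loads the module for the current boot. If you want it loaded automatically at every boot (usually unnecessary, since ifup loads it when bringing up a Bond-type interface), one option on systemd-based CentOS 7 is a modules-load.d entry; a minimal sketch:

# Optional: ask systemd to load the bonding module at boot
echo "bonding" > /etc/modules-load.d/bonding.conf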

#In /etc/sysconfig/network-scripts, configure three interfaces: ens33, ens37, and the virtual interface bond0.
[root@bigdata-senior01 network-scripts]# cat ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=none
DEVICE=ens33
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes

[root@bigdata-senior01 network-scripts]# cat ifcfg-ens37
TYPE=Ethernet
BOOTPROTO=none
DEVICE=ens37
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes

#In effect, the original NIC's IP configuration moves into bond0 (BONDING_MASTER=yes belongs here on the master, not on the slaves).
[root@bigdata-senior01 network-scripts]# cat ifcfg-bond0
BOOTPROTO=none
DEVICE=bond0
TYPE=Bond
ONBOOT=yes
ZONE=public
IPADDR=192.168.31.10
NETMASK=255.255.255.0
GATEWAY=192.168.31.2
DNS1=192.168.31.2
USERCTL=no
NM_CONTROLLED=no
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"
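BONDING_OPTS passes parameters straight to the bonding driver; miimon=100 polls the link state every 100 ms, matching the "MII Polling Interval" in the status output below. Switching modes only requires editing this one line; for example, a hypothetical active-backup variant:

# Hypothetical variant of the line above: active-backup instead of balance-alb
BONDING_OPTS="mode=1 miimon=100"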
Restart the network with systemctl restart network; if the network service was not running before, use systemctl start network instead.
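Because these files set NM_CONTROLLED=no, it can also help to keep NetworkManager away from the interfaces entirely and rely on the legacy network service alone; one common approach on CentOS 7 (a judgment call, not required by the steps above):

# Optional: disable NetworkManager and use the network service only
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl restart network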

#ens33 and ens37 should now carry no IP address.
[root@bigdata-senior01 modprobe.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:ea:31:47 brd ff:ff:ff:ff:ff:ff
3: ens37: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:ea:31:47 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:ea:31:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.10/24 brd 192.168.31.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feea:3147/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever


Check the bonding status. Note that in the ip addr output above both slaves show bond0's MAC address; the permanent hardware addresses listed here are still distinct.
[root@localhost ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: ens33
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ea:31:47
Slave queue ID: 0

Slave Interface: ens37
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ea:31:51
Slave queue ID: 0

#Test: during a continuous ping, disconnect one NIC (in the VM's settings, untick the "Connected" checkbox). Packet loss was 2%, and the active NIC switched automatically from ens33 to ens37.
--- 192.168.1.103 ping statistics ---
86 packets transmitted, 84 received, 2% packet loss, time 85275ms
rtt min/avg/max/mdev = 0.471/1.057/1.684/0.348 ms
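To watch the failover happen during the test, you can poll the active slave from a second terminal; a minimal sketch:

# Show the currently active slave, refreshed every second
watch -n1 'grep "Currently Active Slave" /proc/net/bonding/bond0'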



4. Removing bond0

Delete the NIC configuration file ifcfg-bond0.
Unload the module: rmmod bonding
Reconfigure the physical NICs (restore each card's own IP settings and remove the MASTER/SLAVE lines).
Restart the network: systemctl restart network
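Put together, the teardown might look like this (a sketch; the comment marks the manual ifcfg edits you still need to make):

# Bring the bond down and remove its configuration
ifdown bond0
rm /etc/sysconfig/network-scripts/ifcfg-bond0
rmmod bonding
# Edit ifcfg-ens33 / ifcfg-ens37 by hand: remove MASTER/SLAVE, restore BOOTPROTO/IPADDR
systemctl restart network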

 
