An introduction to bonding initialization
Loading the bond0 module
How bonding works
To make the bonding configuration and implementation easier to follow, let's first review Linux network interfaces and their configuration files. In Linux, all network communication happens between software interfaces and physical network devices. The interface configuration files, together with the scripts that control interface state, all live under /etc/sysconfig/network-scripts/. These files control the system's software network interfaces, and through those interfaces the network devices themselves. At boot, the system reads them to decide which interfaces to bring up and how to configure each one. Interface configuration file names usually start with ifcfg-
Questions?
- How do you check the current bond mode?
#cat /proc/net/bonding/bond0
- Method 1: the BONDING_OPTS parameter in
#vim /etc/sysconfig/network-scripts/ifcfg-bond0
- Method 2: add an options line to /etc/modprobe.d/bond.conf, e.g.:
options bond0 miimon=100 mode=4 xmit_hash_policy=layer3+4
- Why configure this particular mode?
- Why are the MAC addresses identical?
- How bonding works: normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) is its own and filters out all other frames, to lighten the driver's load. A NIC also supports another mode, called promiscuous (promisc) mode, in which it receives every frame on the wire; bonding runs the NICs in this mode. In addition, it rewrites the MAC address in the driver, setting both NICs to the same MAC, so that frames destined for that MAC are accepted on either port and handed to the bond driver for processing.
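This MAC rewriting is visible from user space: each slave's running address matches the bond, while sysfs keeps the factory MAC in bonding_slave/perm_hwaddr. A minimal sketch (the SYSFS parameter is only there so the logic can be exercised against a fixture directory instead of a live /sys):

```shell
#!/bin/sh
# For each slave of a bond, print the running MAC (rewritten by bonding)
# next to the permanent hardware address kept in bonding_slave/perm_hwaddr.
# SYSFS defaults to /sys; pass a fixture directory for testing.
show_macs() {
    bond=$1
    SYSFS=${2:-/sys}
    for s in $(cat "$SYSFS/class/net/$bond/bonding/slaves"); do
        printf '%s running=%s permanent=%s\n' "$s" \
            "$(cat "$SYSFS/class/net/$s/address")" \
            "$(cat "$SYSFS/class/net/$s/bonding_slave/perm_hwaddr")"
    done
}
```

On the host examined below, `show_macs bond0` would report both slaves running as 28:31:52:aa:9e:4a while eth1's permanent address remains 28:31:52:aa:9e:4b.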
Bonding modes
Bonding supports seven modes in total; modes 0 and 1 are the most commonly used:
- round-robin (balance-rr), mode 0: NIC load-balancing mode
- active-backup, mode 1: NIC fault-tolerance mode
- balance-xor, mode 2: requires switch support
- broadcast, mode 3: broadcast mode
- ieee802.3ad, mode 4: dynamic link aggregation, requires switch support
- balance-tlb, mode 5: adaptive transmit load balancing
- balance-alb, mode 6: adaptive load balancing (NIC virtualization)
mode=0, load balancing (round-robin): both NICs carry traffic. This mode provides failover protection on top of bandwidth load balancing.
mode=1, fault-tolerance (active-backup): provides redundancy in an active/standby arrangement; by default only one NIC carries traffic while the other stands by as a backup. Bonding defines four link states for a NIC: up (BOND_LINK_UP), failing (BOND_LINK_FAIL), down (BOND_LINK_DOWN), and recovering (BOND_LINK_BACK). The MII monitor's job is to poll each NIC's link against these states in turn and set a flag indicating whether a slave switch is needed. In this mode one of the two NICs is idle; since the bond virtual device presents a single, unchanging MAC address, the standby NIC is invisible to the outside world and the switch never sends packets to its port. When the bond's MII monitor detects that the current active device has failed, bonding promptly makes the other NIC the active slave.
In the modes above, the virtual NIC's MAC address is always that of the first slave NIC. Because the server's MAC address as learned by the outside world never changes, the network keeps a fixed one-to-one IP-to-MAC mapping, preserving logical continuity for upper-layer traffic; link failover therefore has little visible impact.
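Changing the mode later is a sysfs write away, with one caveat: the kernel only accepts a mode change while the bond is administratively down. A hedged sketch follows (the SYSFS parameter exists only so the write path can be exercised against a fixture; running it for real needs root and SYSFS left at /sys):

```shell
#!/bin/sh
# Switch a bond's mode through sysfs. On a real system the bond must be
# down for the write to succeed; the ip calls are no-ops off-box.
set_bond_mode() {
    dev=$1
    mode=$2
    SYSFS=${3:-/sys}
    ip link set "$dev" down 2>/dev/null || true
    echo "$mode" > "$SYSFS/class/net/$dev/bonding/mode" || return 1
    ip link set "$dev" up 2>/dev/null || true
    cat "$SYSFS/class/net/$dev/bonding/mode"
}
```

For example, `set_bond_mode bond0 active-backup`; on a live system the mode file then reads back as "active-backup 1".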
System setup
kernel support:
#cat /boot/config-4.9.151-015.xxx.xxxx.x86_64 | grep -i BONDING
CONFIG_BONDING=m
Enable loading at boot:
so that the bonding module is loaded when the system starts; the externally visible virtual network interface device is bond0
[root@localhost /data/sandbox/dpdk-17.11/usertools]
#cat /etc/modprobe.d/bonding.conf
alias netdev-bond0 bonding
[root@localhost /data/sandbox/dpdk-17.11/usertools]
#lsmod | grep bond
bonding 151552 0
bond0 configuration files
[root@localhost /etc/sysconfig/network-scripts]
#cat ifcfg-eth0
DEVICE=eth0
TYPE="Ethernet"
HWADDR=28:31:52:AA:9E:4A
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
PEERDNS=no
ETHTOOL_OPTS="speed 1000 duplex full autoneg on"
RX_MAX=`ethtool -g "$DEVICE" | grep 'Pre-set' -A1 | awk '/RX/{print $2}'`
RX_CURRENT=`ethtool -g "$DEVICE" | grep "Current" -A1 | awk '/RX/{print $2}'`
[[ "$RX_CURRENT" -lt "$RX_MAX" ]] && ethtool -G "$DEVICE" rx "$RX_MAX"
Note: because the BONDING_OPTS option is used here, the bond device no longer needs to be configured through the /etc/modprobe.conf file as well. mode=4 selects IEEE 802.3ad dynamic link aggregation; see below. miimon controls link monitoring: it polls the link state of each slave, and miimon=100 means the system checks the link every 100 ms; if one path goes down, traffic switches to the other.
Note: in each slave NIC's configuration, the main changes are removing the IP address, netmask, and related settings, and adding the MASTER and SLAVE parameters.
MASTER=<bond-interface>: the channel bonding interface to which this device is enslaved.
SLAVE=<yes|no>: yes - this device is controlled by the channel bonding interface named in the MASTER directive. no - this device is not controlled by that interface.
[root@localhost /etc/sysconfig/network-scripts]
#cat ifcfg-eth1
DEVICE=eth1
TYPE="Ethernet"
HWADDR=28:31:52:AA:9E:4B
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
PEERDNS=no
ETHTOOL_OPTS="speed 1000 duplex full autoneg on"
RX_MAX=`ethtool -g "$DEVICE" | grep 'Pre-set' -A1 | awk '/RX/{print $2}'`
RX_CURRENT=`ethtool -g "$DEVICE" | grep "Current" -A1 | awk '/RX/{print $2}'`
[[ "$RX_CURRENT" -lt "$RX_MAX" ]] && ethtool -G "$DEVICE" rx "$RX_MAX"
- How do you tell which bond mode is in use?
- BONDING_OPTS --> mode=4
[root@localhost /etc/sysconfig/network-scripts]
#cat ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
TYPE="ethernet"
IPADDR=10.137.16.5
NETMASK=255.255.255.0
ONBOOT=yes
USERCTL=no
PEERDNS=no
BONDING_OPTS="miimon=100 mode=4 xmit_hash_policy=layer3+4"
[root@localhost /etc/sysconfig/network-scripts]
#ethtool -g eth1
Ring parameters for eth1:
Pre-set maximums:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 4096
Current hardware settings:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 256
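The RX_MAX/RX_CURRENT lines in the ifcfg files above parse exactly this output. The extraction step can be isolated into a small filter; a sketch, fed from a captured sample on stdin rather than a live NIC:

```shell
#!/bin/sh
# Read `ethtool -g` output on stdin and print the RX ring size that
# follows the given section header ("Pre-set" or "Current").
ring_rx() {
    grep -A1 "$1" | awk '/^RX:/{print $2}'
}
```

Usage on a live box: `ethtool -g eth1 | ring_rx 'Pre-set'` prints 4096 for the NIC shown above.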
- What is bond0's MAC address, and what determines it?
[root@localhost /etc/sysconfig/network-scripts]
#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 28:31:52:aa:9e:4a brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 28:31:52:aa:9e:4a brd ff:ff:ff:ff:ff:ff
4: ens6f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether d8:49:0b:8c:d1:3c brd ff:ff:ff:ff:ff:ff
5: ens6f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether d8:49:0b:8c:d1:3d brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 28:31:52:aa:9e:4a brd ff:ff:ff:ff:ff:ff
inet 10.137.16.5/24 brd 10.137.16.255 scope global bond0
valid_lft forever preferred_lft forever
bond0 runtime status
[root@localhost /data/sandbox/dpdk-17.11/usertools]
#cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation <<===bonding mode 4
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 28:31:52:aa:9e:4a
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 9
Partner Key: 10
Partner Mac Address: 74:25:8a:bc:2f:1c
Slave Interface: eth0 <<======eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 28:31:52:aa:9e:4a
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 28:31:52:aa:9e:4a
port key: 9
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: 74:25:8a:bc:2f:1c
oper key: 10
port priority: 32768
port number: 10
port state: 61
Slave Interface: eth1 <<======eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 28:31:52:aa:9e:4b
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 28:31:52:aa:9e:4a
port key: 9
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: 74:25:8a:bc:2f:1c
oper key: 10
port priority: 32768
port number: 117
port state: 61
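Most of the answers above (the bonding mode, each slave's link state) can be pulled out of this file with a short awk filter. A sketch, reading on stdin so it can be tested against a captured sample:

```shell
#!/bin/sh
# Summarize /proc/net/bonding/<bond>: print the bonding mode, then each
# slave interface with the MII status reported right after it.
bond_summary() {
    awk -F': ' '
        /^Bonding Mode/    { print $2 }
        /^Slave Interface/ { slave = $2 }
        /^MII Status/      { if (slave != "") { print slave ": " $2; slave = "" } }
    '
}
```

Usage on a live box: `bond_summary < /proc/net/bonding/bond0`.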
Check the current speed
[root@localhost /etc/sysconfig/network-scripts]
#ethtool bond0
Settings for bond0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 2000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes
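The 2000Mb/s reported above is not a physical link speed: for an 802.3ad aggregate the driver reports the sum of the active slaves' speeds, here two 1000 Mbps NICs. That sum can be reproduced from sysfs (a sketch; SYSFS is parameterized for testing, and on a live box the slaves must be up for the speed attribute to be readable):

```shell
#!/bin/sh
# Sum the link speeds (Mb/s) of a bond's slaves, mirroring the Speed
# line that `ethtool <bond>` reports for an 802.3ad aggregate.
bond_speed() {
    bond=$1
    SYSFS=${2:-/sys}
    total=0
    for s in $(cat "$SYSFS/class/net/$bond/bonding/slaves"); do
        total=$((total + $(cat "$SYSFS/class/net/$s/speed")))
    done
    echo "${total}Mb/s"
}
```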
The NIC's kernel sysfs interface
[root@localhost /etc/sysconfig/network-scripts]
#ll /sys/class/net/eth0/
total 0
-r--r--r-- 1 root root 4096 May 7 14:11 addr_assign_type
-r--r--r-- 1 root root 4096 May 7 03:48 address
-r--r--r-- 1 root root 4096 May 7 14:11 addr_len
drwxr-xr-x 2 root root 0 May 7 14:11 bonding_slave
-r--r--r-- 1 root root 4096 May 7 14:11 broadcast
-rw-r--r-- 1 root root 4096 May 7 10:21 carrier
-r--r--r-- 1 root root 4096 May 7 14:11 carrier_changes
lrwxrwxrwx 1 root root 0 May 6 17:07 device -> ../../../0000:01:00.0
-r--r--r-- 1 root root 4096 May 7 14:11 dev_id
-r--r--r-- 1 root root 4096 May 7 10:21 dev_port
-r--r--r-- 1 root root 4096 May 7 14:11 dormant
-r--r--r-- 1 root root 4096 May 7 14:11 duplex
-rw-r--r-- 1 root root 4096 May 7 14:11 flags
-rw-r--r-- 1 root root 4096 May 7 14:11 gro_flush_timeout
-rw-r--r-- 1 root root 4096 May 7 05:20 ifalias
-r--r--r-- 1 root root 4096 May 7 14:11 ifindex
-r--r--r-- 1 root root 4096 May 7 14:11 iflink
-r--r--r-- 1 root root 4096 May 7 14:11 link_mode
lrwxrwxrwx 1 root root 0 May 7 14:11 master -> ../../../../../virtual/net/bond0
-rw-r--r-- 1 root root 4096 May 7 03:48 mtu
-r--r--r-- 1 root root 4096 May 7 14:11 name_assign_type
-rw-r--r-- 1 root root 4096 May 7 14:11 netdev_group
-r--r--r-- 1 root root 4096 May 6 17:07 operstate
-r--r--r-- 1 root root 4096 May 7 14:11 phys_port_id
-r--r--r-- 1 root root 4096 May 7 14:11 phys_port_name
-r--r--r-- 1 root root 4096 May 7 14:11 phys_switch_id
drwxr-xr-x 2 root root 0 May 7 10:38 power
-rw-r--r-- 1 root root 4096 May 7 14:11 proto_down
drwxr-xr-x 18 root root 0 May 7 10:38 queues
-r--r--r-- 1 root root 4096 May 7 14:11 speed
drwxr-xr-x 2 root root 0 May 7 10:38 statistics
lrwxrwxrwx 1 root root 0 May 7 14:11 subsystem -> ../../../../../../class/net
-rw-r--r-- 1 root root 4096 May 7 14:11 tx_queue_len
-r--r--r-- 1 root root 4096 May 7 14:11 type
-rw-r--r-- 1 root root 4096 May 7 14:11 uevent
lrwxrwxrwx 1 root root 0 May 7 14:11 upper_bond0 -> ../../../../../virtual/net/bond0
The bond0 initialization flow, as seen from ifup
ifup eth0
/etc/sysconfig/network-scripts/ifup-eth ifcfg-eth0
network-functions
install_bonding_driver // if /sys/class/net/bonding_masters does not exist, runs: modprobe bonding
The bond driver is loaded and initialized in /etc/sysconfig/network-scripts/ifup-eth.
Now consider how a slave device such as eth0 is handled: if eth0 is not already listed in /sys/class/net/bond0/bonding/slaves, the device is first brought down (/sbin/ip link set dev ${DEVICE} down) and then written into bond0's slave list, /sys/class/net/bond0/bonding/slaves.
First, the slave-device handling:
# slave device?
if [ "${SLAVE}" = yes -a "${ISALIAS}" = no -a "${MASTER}" != "" ]; then
install_bonding_driver ${MASTER}
grep -wq "${DEVICE}" /sys/class/net/${MASTER}/bonding/slaves 2>/dev/null || {
/sbin/ip link set dev ${DEVICE} down
echo "+${DEVICE}" > /sys/class/net/${MASTER}/bonding/slaves 2>/dev/null
}
ethtool_set <<<===== here, based on the ETHTOOL_OPTS parameter from #cat /etc/sysconfig/network-scripts/ifcfg-eth0 | grep ETHTOOL_OPTS,
### the eth0 NIC is configured; the command actually executed is: `/sbin/ethtool -s eth0 speed 1000 duplex full autoneg on`
exit 0
fi
[root@localhost /data/sandbox/cyberstar]
#cat /etc/sysconfig/network-scripts/ifcfg-eth0 | grep ETHTOOL_OPTS
ETHTOOL_OPTS="speed 1000 duplex full autoneg on"
ethtool_set()
{
oldifs=$IFS;
IFS=';';
[ -n "${ETHTOOL_DELAY}" ] && /bin/usleep ${ETHTOOL_DELAY}
for opts in $ETHTOOL_OPTS ; do
IFS=$oldifs;
if [[ "${opts}" =~ [[:space:]]*- ]]; then
/sbin/ethtool $opts
else
/sbin/ethtool -s ${REALDEVICE} $opts
fi
IFS=';';
done
IFS=$oldifs;
}
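ETHTOOL_OPTS can hold several ';'-separated commands: entries that start with '-' are passed to ethtool verbatim, everything else is prefixed with -s ${REALDEVICE}. A dry-run sketch of that parsing (the two-command ETHTOOL_OPTS value is hypothetical, and the ethtool invocations are echoed instead of executed):

```shell
#!/bin/sh
# Dry-run version of ethtool_set: print the ethtool invocations that a
# (hypothetical) multi-command ETHTOOL_OPTS would produce.
ethtool_dry_run() {
    REALDEVICE=$1
    oldifs=$IFS
    IFS=';'
    for opts in $ETHTOOL_OPTS; do
        IFS=$oldifs
        if printf '%s' "$opts" | grep -q '^[[:space:]]*-'; then
            echo "/sbin/ethtool$opts"
        else
            echo "/sbin/ethtool -s $REALDEVICE $opts"
        fi
        IFS=';'
    done
    IFS=$oldifs
}
```

With ETHTOOL_OPTS="speed 1000 duplex full autoneg on; -G eth0 rx 4096", the first entry becomes an -s command and the second runs as-is.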
Next, the master-device handling:
# Bonding initialization. For DHCP, we need to enslave the devices early,
# so it can actually get an IP.
if [ "$ISALIAS" = no ] && is_bonding_device ${DEVICE} ; then
install_bonding_driver ${DEVICE}
/sbin/ip link set dev ${DEVICE} up
for device in $(LANG=C grep -l "^[[:space:]]*MASTER=\"\?${DEVICE}\"\?\([[:space:]#]\|$\)" /etc/sysconfig/network-scripts/ifcfg-*) ; do
is_ignored_file "$device" && continue
/sbin/ifup ${device##*/} || net_log "Unable to start slave device ${device##*/} for master ${DEVICE}." warning
done
[ -n "${LINKDELAY}" ] && /bin/sleep ${LINKDELAY}
# add the bits to setup the needed post enslavement parameters
for arg in $BONDING_OPTS ; do
key=${arg%%=*};
value=${arg##*=};
if [ "${key}" = "primary" ]; then
echo $value > /sys/class/net/${DEVICE}/bonding/$key
fi
done
fi
The decision logic
To analyze the bond initialization process, the key is the logic of the install_bonding_driver function:
[root@localhost /data/sandbox/cyberstar]
#cat /sys/class/net/bonding_masters
bond0
If /sys/class/net/bond0/bonding/slaves contains fewer than one line, bonding is not fully configured. The existence of this file is also how you can tell whether a device is a bonding device:
[root@localhost /data/sandbox/cyberstar]
#cat /sys/class/net/bond0/bonding/slaves
eth1 eth0
If the bond is not yet configured, the following kernel parameters are set next:
[root@localhost /data/sandbox/cyberstar]
#cat /sys/class/net/bond0/bonding/mode
802.3ad 4
[root@localhost /data/sandbox/cyberstar]
#cat /sys/class/net/bond0/bonding/miimon
100
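Putting the pieces together, the core of install_bonding_driver can be condensed into the sketch below. This is a simplified reconstruction, not the verbatim network-functions code: the real function also orders the BONDING_OPTS writes carefully and defers primary until slaves are enslaved (as the master-handling snippet above shows). SYSFS is parameterized for testing; real use needs SYSFS=/sys and root.

```shell
#!/bin/sh
# Simplified sketch of install_bonding_driver:
# 1. load the bonding module if class/net/bonding_masters is missing;
# 2. register the master device if it is not listed there yet;
# 3. push each key=value pair from BONDING_OPTS into the bond's sysfs
#    knobs (primary is skipped: it is only valid after enslavement).
install_bonding_driver() {
    dev=$1
    SYSFS=${2:-/sys}
    [ -f "$SYSFS/class/net/bonding_masters" ] || modprobe bonding || return 1
    grep -wq "$dev" "$SYSFS/class/net/bonding_masters" 2>/dev/null ||
        echo "+$dev" > "$SYSFS/class/net/bonding_masters"
    for arg in $BONDING_OPTS; do
        key=${arg%%=*}
        value=${arg#*=}
        [ "$key" = "primary" ] && continue
        echo "$value" > "$SYSFS/class/net/$dev/bonding/$key"
    done
}
```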