Linux Advanced Network Management

1. Link Aggregation

  • Off-site disaster recovery (keep a backup at a separate site, so a total failure in one place does not leave you unable to recover in time)
  • Load balancing (increases bandwidth) // if one link fails, the whole aggregate fails
  • High availability (improves availability) // when one link fails, the other one can take over

NIC link aggregation joins several network cards into one logical link, so that if one NIC fails the network keeps running. This effectively guards against the losses a broken NIC would cause, and it can also increase network throughput.

NIC link aggregation methods:

  • bond: add at most two NICs
  • team: add up to eight NICs

The two commonly used bond modes:

  • bond0 (balance-rr)
    • bond0 is used for round-robin load balancing (two separate 100 MB links are aggregated into a single link with 200 MB of transmit bandwidth)
  • bond1 (active-backup)
    • bond1 is used for high availability; if one link goes down, the other link automatically takes over
                           --> eth0  ----\
    app  --sends data to--> bond0          <---> switch 
                            --> eth1  ----/
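Before creating any bond, it can be worth confirming that the kernel's bonding driver is available and seeing which modes it offers. A minimal check, assuming the stock CentOS/RHEL kernel (where the bonding module is shipped by default):

modinfo bonding | grep '^parm:'       # list the driver parameters, including the supported modes
modprobe bonding                      # optionally load the driver now; NetworkManager loads it automatically when a bond is created
cat /sys/class/net/bonding_masters    # bond interfaces the kernel currently knows about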

2. Link Aggregation Configuration

2.1 Configuring bond link aggregation on CentOS7/RHEL7

2.1.1 Configuring bond0 on CentOS7/RHEL7 (at least two NICs)

bond0 configuration steps:

  • nmcli con add type bond mode balance-rr con-name bond0 ifname bond0 ipv4.method manual
    ipv4.addresses 192.168.153.250/24
    ipv4.gateway 192.168.153.2
    ipv4.dns 114.114.114.114
  • nmcli con add type bond-slave con-name slave1 ifname eth1 master bond0
  • nmcli con add type bond-slave con-name slave2 ifname eth2 master bond0
  1. View the network interfaces
[root@localhost ~]# nmcli dev
DEVICE  TYPE      STATE         CONNECTION 
eth0    ethernet  connected     eth0       
eth1    ethernet  disconnected  --         
eth2    ethernet  disconnected  --         
lo      loopback  unmanaged     --   
[root@localhost ~]#      
  2. Create bond0 in balance-rr mode
[root@localhost ~]# nmcli con add type bond mode balance-rr con-name bond0 ifname bond0 ipv4.method manual ipv4.addresses 192.168.153.250/24 ipv4.gateway 192.168.153.2 ipv4.dns 114.114.114.114
Connection 'bond0' (845e3359-0477-4e66-aa40-ceaf4f66a796) successfully added.
[root@localhost ~]# 
  3. Attach the physical NICs to bond0
[root@localhost ~]# nmcli con add type bond-slave con-name slave1 ifname eth1 master bond0
Connection 'slave1' (a142de94-39d4-471a-9e93-e1e363e42e9a) successfully added.
[root@localhost ~]# nmcli con add type bond-slave con-name slave2 ifname eth2 master bond0
Connection 'slave2' (12975294-2e89-464a-81bc-25803dc4c491) successfully added.
[root@localhost ~]# 
  4. Check that the configuration succeeded
[root@localhost ~]# nmcli dev
DEVICE  TYPE      STATE      CONNECTION 
eth0    ethernet  connected  eth0       
bond0   bond      connected  bond0      
eth1    ethernet  connected  slave1     
eth2    ethernet  connected  slave2     
lo      loopback  unmanaged  --         
[root@localhost ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
eth0    3bcf613b-a0c4-43f4-bdd1-39b51c3c6a3f  ethernet  eth0   
bond0   845e3359-0477-4e66-aa40-ceaf4f66a796  bond      bond0  
slave1  a142de94-39d4-471a-9e93-e1e363e42e9a  ethernet  eth1   
slave2  12975294-2e89-464a-81bc-25803dc4c491  ethernet  eth2   
[root@localhost ~]# 

The output shows that the configuration succeeded.
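In this session NetworkManager activated bond0 and both slaves on its own; if any of them stays disconnected, they can be brought up by hand (a hedged extra step, using the connection names created above):

nmcli con up bond0     # activate the bond master
nmcli con up slave1    # activate the first slave
nmcli con up slave2    # activate the second slave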

  5. View the bond configuration details
[root@localhost ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin) // load-balancing mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth1 // first NIC
MII Status: up
Speed: 10000 Mbps // 10-gigabit link
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b1:eb:1d
Slave queue ID: 0

Slave Interface: eth2 // second NIC
MII Status: up
Speed: 10000 Mbps // 10-gigabit link
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b1:eb:27
Slave queue ID: 0
[root@localhost ~]# 
  6. Disconnect eth2 and check that bond0 still works
[root@localhost ~]# nmcli dev disconnect eth2 // take eth2 down
Device 'eth2' successfully disconnected.
[root@localhost ~]# nmcli dev
DEVICE  TYPE      STATE         CONNECTION 
eth0    ethernet  connected     eth0       
bond0   bond      connected     bond0      
eth1    ethernet  connected     slave1     
eth2    ethernet  disconnected  --      // shown as disconnected    
lo      loopback  unmanaged     --         
[root@localhost ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin) // load-balancing mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth1 // first NIC, the only remaining slave
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b1:eb:1d
Slave queue ID: 0
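The /proc output only confirms that eth1 is the sole remaining slave; a quick connectivity check through the bond while eth2 is down could look like this (the gateway address comes from the bond0 configuration above, adjust it for your own network):

ping -c 3 -I bond0 192.168.153.2    # traffic should still flow over eth1 alone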

2.1.2 Configuring bond1 on CentOS7/RHEL7

Delete the configuration files created in the previous section:

[root@localhost ~]# nmcli dev
DEVICE  TYPE      STATE      CONNECTION 
eth0    ethernet  connected  eth0       
bond0   bond      connected  bond0      
eth1    ethernet  connected  slave1     
eth2    ethernet  connected  slave2     
lo      loopback  unmanaged  --         
[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# ls
ifcfg-bond0  ifcfg-eth0  ifcfg-slave1  ifcfg-slave2
[root@localhost network-scripts]# rm -f ifcfg-bond0 ifcfg-slave*
[root@localhost network-scripts]# ls
ifcfg-eth0
[root@localhost ~]# nmcli dev   // deletion complete
DEVICE  TYPE      STATE         CONNECTION 
eth0    ethernet  connected     eth0       
eth1    ethernet  disconnected  --         
eth2    ethernet  disconnected  --         
lo      loopback  unmanaged     --         
[root@localhost ~]# 
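Removing the ifcfg files by hand works, but the same cleanup can also be done with nmcli itself, which informs NetworkManager immediately (a sketch using the connection names created earlier):

nmcli con delete bond0 slave1 slave2    # drop the bond master and both slave connections
nmcli con reload                        # have NetworkManager re-read its profiles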

bond1 configuration steps:

  • nmcli con add type bond mode active-backup con-name bond1 ifname bond1 ipv4.method manual
    ipv4.addresses 192.168.153.200/24
    ipv4.gateway 192.168.153.2
    ipv4.dns 114.114.114.114
  • nmcli con add type bond-slave con-name slave1 ifname eth1 master bond1
  • nmcli con add type bond-slave con-name slave2 ifname eth2 master bond1
  1. Create bond1 in active-backup mode
[root@localhost ~]# nmcli con add type bond mode active-backup con-name bond1 ifname bond1 ipv4.method manual ipv4.addresses 192.168.153.200/24 ipv4.gateway 192.168.153.2 ipv4.dns 114.114.114.114
Connection 'bond1' (71356ca8-d2ed-46ac-bb96-5adb04fa6725) successfully added.
[root@localhost ~]# 
  2. Attach the physical NICs to bond1
[root@localhost ~]# nmcli con add type bond-slave con-name slave1 ifname eth1 master bond1
Connection 'slave1' (eb344131-6a66-404f-a86c-5af4ea1d6c6b) successfully added.
[root@localhost ~]# nmcli con add type bond-slave con-name slave2 ifname eth2 master bond1
Connection 'slave2' (2ee77d63-7388-42d3-ba0e-84824305eafd) successfully added.
[root@localhost ~]# 
  3. Check that the configuration succeeded
[root@localhost network-scripts]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
eth0    3bcf613b-a0c4-43f4-bdd1-39b51c3c6a3f  ethernet  eth0   
bond1   9a7e97dd-6dbb-4b08-82c1-68d9bcde3402  bond      bond1  
slave1  eb344131-6a66-404f-a86c-5af4ea1d6c6b  ethernet  eth1   
slave2  2ee77d63-7388-42d3-ba0e-84824305eafd  ethernet  eth2   
[root@localhost network-scripts]# nmcli dev
DEVICE  TYPE      STATE      CONNECTION 
eth0    ethernet  connected  eth0       
bond1   bond      connected  bond1      
eth1    ethernet  connected  slave1     
eth2    ethernet  connected  slave2     
lo      loopback  unmanaged  --         

The output shows that the configuration succeeded.

  4. Activate the connections
[root@localhost ~]# nmcli con up bond1
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
[root@localhost ~]# nmcli con up slave1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)
[root@localhost ~]# nmcli con up slave2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/9)
  5. Verify
[root@localhost ~]# cat /proc/net/bonding/bond1 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup) // high-availability mode
Primary Slave: None
Currently Active Slave: eth1 // eth1 is currently the active slave
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth1 // first NIC
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b1:eb:1d
Slave queue ID: 0

Slave Interface: eth2 // second NIC
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b1:eb:27
Slave queue ID: 0
[root@localhost ~]# 
  6. Disconnect the eth1 physical NIC
[root@localhost ~]# nmcli dev disconnect eth1
Device 'eth1' successfully disconnected.
[root@localhost ~]# cat /proc/net/bonding/bond1 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2 // eth2 is now the active slave
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth2 // only eth2 is listed now
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:b1:eb:27
Slave queue ID: 0
[root@localhost ~]# 
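To finish the test, eth1 can be reconnected. In active-backup mode the bond is expected to keep eth2 as the active slave and simply re-add eth1 as a backup (this could differ if a primary slave had been configured, which is not the case here):

nmcli dev connect eth1         # bring eth1 back
cat /proc/net/bonding/bond1    # both slaves should be listed again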

2.2 Configuring bond link aggregation on CentOS6/RHEL6

Applies to RedHat6 and CentOS6.

System      NICs                                       bond address    bond mode   bond function
CentOS6.5   eth0: 172.16.12.128, eth1: 172.16.12.129   172.16.12.250   mode 0      load balancing
//1. Create the bond interface configuration file
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Ethernet
ONBOOT=yes
USERCTL=no
BOOTPROTO=static
IPADDR=172.16.12.250
NETMASK=255.255.255.0
GATEWAY=172.16.12.2
DNS1=172.16.12.2
BONDING_OPTS="mode=0 miimon=50" // to use mode 1, simply change mode=0 to mode=1

//2. Modify the eth0 and eth1 NIC configuration files
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
USERCTL=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
USERCTL=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

//3. Add driver support for bond0
[root@localhost ~]# vim /etc/modprobe.d/bonding.conf
alias bond0 bonding
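With the three files in place, the bonding driver still needs to be loaded and the network service restarted before bond0 comes up. The usual sequence on CentOS6 (which uses the classic network init script) is sketched below; adjust it if your setup differs:

modprobe bonding               # load the bonding driver
service network restart        # restart networking so bond0 and its slaves come up
cat /proc/net/bonding/bond0    # verify that the bond and both slaves are active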

2.3 Configuring bond link aggregation on CentOS8/RHEL8

On CentOS8/RHEL8 the nmcli workflow from section 2.1.2 applies unchanged; the ifcfg files it generates under /etc/sysconfig/network-scripts/ look like this:

[root@localhost network-scripts]# cat ifcfg-bond1 
BONDING_OPTS=mode=active-backup
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.153.200
PREFIX=24
GATEWAY=192.168.153.2
DNS1=114.114.114.114
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond1
UUID=9a7e97dd-6dbb-4b08-82c1-68d9bcde3402
DEVICE=bond1
ONBOOT=yes
[root@localhost network-scripts]# cat ifcfg-slave2
TYPE=Ethernet
NAME=slave2
UUID=2ee77d63-7388-42d3-ba0e-84824305eafd
DEVICE=eth2
ONBOOT=yes
MASTER=bond1
SLAVE=yes
[root@localhost network-scripts]# cat ifcfg-slave1
TYPE=Ethernet
NAME=slave1
UUID=eb344131-6a66-404f-a86c-5af4ea1d6c6b
DEVICE=eth1
ONBOOT=yes
MASTER=bond1
SLAVE=yes
[root@localhost network-scripts]# 
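For reference, a hedged sketch of the nmcli commands that would produce files like the ones above on CentOS8/RHEL8 (the workflow is the same as in 2.1.2; the addresses are taken from the bond1 example):

nmcli con add type bond mode active-backup con-name bond1 ifname bond1 \
    ipv4.method manual ipv4.addresses 192.168.153.200/24 \
    ipv4.gateway 192.168.153.2 ipv4.dns 114.114.114.114
nmcli con add type bond-slave con-name slave1 ifname eth1 master bond1
nmcli con add type bond-slave con-name slave2 ifname eth2 master bond1
nmcli con up bond1    # activate the bond; the slaves follow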

2.4 Configuring team link aggregation on CentOS7/RHEL7

team supports the following aggregation modes:

  • broadcast: broadcast fault tolerance
  • roundrobin: round-robin load distribution
  • activebackup: active/backup (exam must-know), high availability
  • loadbalance: load balancing
  • lacp: requires a switch that supports the LACP protocol
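The runners listed above are implemented by the teamd daemon, so make sure the relevant packages are installed before creating a team (package names as shipped with CentOS7/RHEL7):

yum -y install teamd NetworkManager-team    # team runner daemon plus the NetworkManager plugin
rpm -q teamd NetworkManager-team            # confirm both packages are present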
  1. Configure with commands (the graphical configuration is unreliable)
[root@localhost ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"activebackup"}}' ipv4.address 192.168.153.245/24 ipv4.gateway 192.168.153.2 ipv4.dns 114.114.114.114 ipv4.method manual
Connection 'team0' (bd18e120-8687-4450-88b1-846cf69b6fd5) successfully added.

  2. Add the physical NICs to team0
[root@localhost ~]# nmcli con add type team-slave con-name slave1 ifname eth1 master team0
Connection 'slave1' (cd68ccd8-e2c0-45e4-a8f6-50c35689e8fd) successfully added.
[root@localhost ~]# nmcli con add type team-slave con-name slave2 ifname eth2 master team0
Connection 'slave2' (9709b871-b52a-4afe-a449-8dd5f43b9457) successfully added.
[root@localhost ~]# 
  3. Check the connection status
[root@localhost ~]# nmcli dev
DEVICE  TYPE      STATE      CONNECTION 
eth0    ethernet  connected  eth0       
team0   team      connected  team0      
eth1    ethernet  connected  slave1     
eth2    ethernet  connected  slave2     
lo      loopback  unmanaged  --         
[root@localhost ~]# nmcli con
NAME    UUID                                  TYPE      DEVICE 
eth0    3bcf613b-a0c4-43f4-bdd1-39b51c3c6a3f  ethernet  eth0   
team0   bd18e120-8687-4450-88b1-846cf69b6fd5  team      team0  
slave1  cd68ccd8-e2c0-45e4-a8f6-50c35689e8fd  ethernet  eth1   
slave2  9709b871-b52a-4afe-a449-8dd5f43b9457  ethernet  eth2   
[root@localhost ~]# 
  4. Check the state of team0
[root@localhost ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth1
[root@localhost ~]#
  5. Disconnect eth1 and check the team0 state
[root@localhost ~]# nmcli dev
DEVICE  TYPE      STATE      CONNECTION 
eth0    ethernet  connected  eth0       
team0   team      connected  team0      
eth1    ethernet  connected  slave1     
eth2    ethernet  connected  slave2     
lo      loopback  unmanaged  --         
[root@localhost ~]# nmcli dev disconnect eth1
Device 'eth1' successfully disconnected.
[root@localhost ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2 // eth2 is now the active port; eth1 no longer appears
  6. Reconnect eth1 and check the team0 state
[root@localhost ~]# nmcli dev connect eth1
Device 'eth1' successfully activated with 'cd68ccd8-e2c0-45e4-a8f6-50c35689e8fd'.
[root@localhost ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2 // eth2 is still the active port, but eth1 is visible again
[root@localhost ~]# 

Dynamically changing the team aggregation mode

  1. Dump the configuration and edit it (see man teamd.conf)
[root@localhost ~]# teamdctl team0 config dump > /tmp/team.conf
[root@localhost ~]# vi /tmp/team.conf
{
    "device": "team0",
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "ports": {
        "eth1": {
            "link_watch": {
                "name": "ethtool"
            }
        },
        "eth2": {
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "name": "roundrobin" // change the runner name to roundrobin here
    }
}
  2. Apply the edited configuration to team0
[root@localhost ~]# nmcli con mod team0 team.config /tmp/team.conf   // load the modified config file
[root@localhost ~]# 
  3. team0 must be restarted after the change
[root@localhost ~]# nmcli con down team0;nmcli con up team0 // bring it down first, then back up
Connection 'team0' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@localhost ~]# nmcli con up slave1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/9)
[root@localhost ~]# nmcli con up slave2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/10)
[root@localhost ~]# teamdctl team0 state
setup:
  runner: roundrobin  
ports:
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
[root@localhost ~]# 
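The dump-and-edit approach above is not the only option: the team.config property also accepts the JSON inline, so the runner can be switched without touching a file (a sketch, followed by the same restart sequence as before):

nmcli con mod team0 team.config '{"runner": {"name": "roundrobin"}}'
nmcli con down team0; nmcli con up team0    # restart team0 so the new runner takes effect
nmcli con up slave1; nmcli con up slave2    # re-activate the slaves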