After verification, the IP configuration of the cluster management node mgt should look like this:
[root@mgt zmq]# ifconfig
// External NIC
eth0      Link encap:Ethernet  HWaddr 5C:F3:FC:E9:61:78
          inet addr:192.168.253.100  Bcast:192.168.253.255  Mask:255.255.255.0
          inet6 addr: 2001:cc0:2034:253:5ef3:fcff:fee9:6178/64 Scope:Global
          inet6 addr: fe80::5ef3:fcff:fee9:6178/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4444046 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35919995 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:450270622 (429.4 MiB)  TX bytes:53072585625 (49.4 GiB)
          Interrupt:28 Memory:92000000-92012800

// External virtual NIC: the interface the LVS Virtual IP (VIP) is bound to
eth0:0    Link encap:Ethernet  HWaddr 5C:F3:FC:E9:61:78
          inet addr:192.168.253.110  Bcast:192.168.253.110  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:28 Memory:92000000-92012800

// Internal NIC, used for communication with the cluster's compute nodes
eth1      Link encap:Ethernet  HWaddr 5C:F3:FC:E9:61:7A
          inet addr:172.20.0.1  Bcast:172.20.0.255  Mask:255.255.255.0
          inet6 addr: 2001:cc0:2034:253:5ef3:fcff:fee9:617a/64 Scope:Global
          inet6 addr: fe80::5ef3:fcff:fee9:617a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:70397 errors:0 dropped:0 overruns:0 frame:0
          TX packets:192069 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8765554 (8.3 MiB)  TX bytes:229986000 (219.3 MiB)
          Interrupt:40 Memory:94000000-94012800

// Local loopback device
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:6429439 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6429439 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:893758138 (852.3 MiB)  TX bytes:893758138 (852.3 MiB)
If eth0 is instead configured as the internal NIC and eth1 as the external NIC, access from the GDOS cluster's MapScapeClient becomes slow, and no data can be selected in the algorithm tool dialog. The LVS VIP therefore has to be aliased onto eth0, the interface facing the external network; if it is aliased onto the internal-facing eth1, LVS cannot dispatch tasks at all. On each compute node, by contrast, the LVS VIP must be bound to the node's own loopback device. At first I did not know which of the two NICs was which and took a few wrong turns, so I am recording it here.
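For reference, here is a minimal sketch of the commands that establish this binding in a typical LVS direct-routing (DR) setup. The VIP 192.168.253.110 and the device names eth0:0 and lo:0 are taken from the configuration above; the netmask/broadcast choices and the ARP sysctl values are generic LVS-DR practice rather than anything specific to this cluster, so treat them as assumptions and adjust for your environment:

# On the director (mgt): alias the VIP onto the external-facing eth0
ifconfig eth0:0 192.168.253.110 netmask 255.255.255.255 broadcast 192.168.253.110 up
route add -host 192.168.253.110 dev eth0:0

# On each compute node (real server): bind the same VIP to the loopback device
ifconfig lo:0 192.168.253.110 netmask 255.255.255.255 broadcast 192.168.253.110 up
route add -host 192.168.253.110 dev lo:0

# Still on the compute nodes: suppress ARP for the VIP, so that only the
# director answers ARP requests for 192.168.253.110 on the shared segment
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

Without the ARP suppression, a compute node holding the VIP on lo may answer ARP queries for it and hijack traffic that should go through the director.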