姜小嫌


Copy each node's hosts file to a single node

This could be driven by reading a hostname config file; I took a shortcut here and just listed the nodes one by one.

echo 'starting'
ssh hadoop01         "cp /etc/hosts ~/hadoop01-hosts && scp -P50022  ~/hadoop01-hosts hadoop01:~/jxx"
ssh hadoop02         "cp /etc/hosts ~/hadoop02-hosts && scp -P50022  ~/hadoop02-hosts hadoop01:~/jxx"
ssh hadoop03         "cp /etc/hosts ~/hadoop03-hosts && scp -P50022  ~/hadoop03-hosts hadoop01:~/jxx"
ssh m6-data-hadoop04 "cp /etc/hosts ~/hadoop04-hosts && scp -P50022  ~/hadoop04-hosts hadoop01:~/jxx"
ssh m6-data-hadoop05 "cp /etc/hosts ~/hadoop05-hosts && scp -P50022  ~/hadoop05-hosts hadoop01:~/jxx"
ssh m6-data-hadoop11 "cp /etc/hosts ~/hadoop11-hosts && scp -P50022  ~/hadoop11-hosts hadoop01:~/jxx"
ssh m6-data-hadoop12 "cp /etc/hosts ~/hadoop12-hosts && scp -P50022  ~/hadoop12-hosts hadoop01:~/jxx"
ssh m6-data-hadoop14 "cp /etc/hosts ~/hadoop14-hosts && scp -P50022  ~/hadoop14-hosts hadoop01:~/jxx"
ssh m6-data-hadoop15 "cp /etc/hosts ~/hadoop15-hosts && scp -P50022  ~/hadoop15-hosts hadoop01:~/jxx"
ssh m6-data-hadoop16 "cp /etc/hosts ~/hadoop16-hosts && scp -P50022  ~/hadoop16-hosts hadoop01:~/jxx"
ssh m6-data-hadoop18 "cp /etc/hosts ~/hadoop18-hosts && scp -P50022  ~/hadoop18-hosts hadoop01:~/jxx"
ssh m6-data-hadoop19 "cp /etc/hosts ~/hadoop19-hosts && scp -P50022  ~/hadoop19-hosts hadoop01:~/jxx"
ssh m6-data-hadoop20 "cp /etc/hosts ~/hadoop20-hosts && scp -P50022  ~/hadoop20-hosts hadoop01:~/jxx"
ssh m6-data-hadoop21 "cp /etc/hosts ~/hadoop21-hosts && scp -P50022  ~/hadoop21-hosts hadoop01:~/jxx"
ssh m6-data-hadoop22 "cp /etc/hosts ~/hadoop22-hosts && scp -P50022  ~/hadoop22-hosts hadoop01:~/jxx"
ssh m6-data-hadoop06 "cp /etc/hosts ~/hadoop06-hosts && scp -P50022  ~/hadoop06-hosts hadoop01:~/jxx"
ssh m6-data-hadoop07 "cp /etc/hosts ~/hadoop07-hosts && scp -P50022  ~/hadoop07-hosts hadoop01:~/jxx"
echo 'ending'
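As noted above, the hardcoded list could instead be read from a node-list file. A minimal sketch of that idea (the node names and `/tmp` paths here are made-up; it only prints the per-node commands rather than running them, so the loop can be reviewed before piping the output to `sh`):

```shell
#!/bin/sh
# Hypothetical list of worker nodes, one hostname per line.
cat > /tmp/nodes.txt <<'EOF'
hadoop02
hadoop03
m6-data-hadoop04
EOF

# Generate one copy command per node; review them, then run.
: > /tmp/copy_cmds.txt
while read -r node; do
  [ -n "$node" ] || continue
  echo "ssh $node 'cp /etc/hosts ~/${node}-hosts && scp -P50022 ~/${node}-hosts hadoop01:~/jxx'" >> /tmp/copy_cmds.txt
done < /tmp/nodes.txt
cat /tmp/copy_cmds.txt
```

Printing first instead of executing keeps a typo in the node list from fanning out over ssh before you have looked at the commands.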

Diff each node's hosts file

This is just a set difference between the baseline hosts entries and each node's entries, to check whether any node's hosts file is missing entries.

# Compare each node's hosts file against the baseline.
import os
import re

BASE_DIR = '/home/hadoop/jxx'

# Parse the baseline hosts file, which serves as the standard.
base_host = []
with open(os.path.join(BASE_DIR, 'base_host')) as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith('#'):
            splits = re.split(r'\s+', line)
            base_host.append(splits[0] + ' ' + splits[1])

# Collect all the copied '*-hosts' files.
files_list = sorted(name for name in os.listdir(BASE_DIR)
                    if name.endswith('-hosts'))
print('diff host length', len(files_list))
for name in files_list:
    print(name)
print('#' * 50)

for name in files_list:
    old_host = []
    new_host = []
    print('>' * 10, name, '\n')
    with open(os.path.join(BASE_DIR, name)) as f:
        for line in f:
            stripped = line.strip()
            # Skip blanks, comments, IPv6 entries, and data0 aliases.
            if (stripped and not stripped.startswith('#')
                    and not stripped.startswith('ff02')
                    and not stripped.startswith('::')
                    and 'data0' not in stripped):
                old_host.append(stripped)

    # Expand each line into one 'ip alias' pair per alias.
    for line in old_host:
        splits = re.split(r'\s+', line)
        ip = splits[0]
        for alias in splits[1:]:
            new_host.append(ip + ' ' + alias)

    if new_host:
        # Baseline entries missing from this node's hosts file.
        print(set(base_host).difference(set(new_host)), '\n')

base_host.sort()
print('========== base_host ==========')
for line in base_host:
    print(line)
print('#' * 50)
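The core of the script is the alias expansion plus the set difference. A standalone sketch of just that logic, using made-up sample data (the IPs and hostnames below are illustrative only):

```python
import re

def expand(lines):
    """Turn hosts lines into a set of 'ip alias' pairs, one per alias."""
    pairs = set()
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        parts = re.split(r'\s+', line)
        for alias in parts[1:]:
            pairs.add(parts[0] + ' ' + alias)
    return pairs

base = expand(['10.0.0.1 hadoop01', '10.0.0.2 hadoop02 hd02'])
node = expand(['10.0.0.1 hadoop01'])
# Baseline entries the node is missing:
print(sorted(base - node))
```

Expanding each line into per-alias pairs before diffing means a node that has the IP but is missing one alias still shows up as a discrepancy.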
posted on 2018-10-17 23:55 by 姜小嫌