A script to sync Hive across the cluster

A programmer's job is to turn anything done by hand into something the computer does, so you can afford to be a little lazy.

Below is a very crude Hive directory sync script. If your cluster has more than 100 or 1,000 nodes, you can replace the repetition with a loop (a sketch follows the script).

#!/bin/sh

#=============== Hive install sync ====================#
# This script syncs the hive directory from the name   #
# node to the data nodes. Whenever the hive install    #
# changes, the data nodes must be re-synced; otherwise #
# when oozie invokes hive through a shell action, the  #
# job fails because the hive install on the assigned   #
# node is out of date.                                  #
#=======================================================#

# 1. Remove the old hive install
ssh -t hadoop@dwprod-dataslave1 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave2 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave3 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave4 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave5 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave6 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave7 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave8 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave9 rm -r /opt/local/hive
ssh -t hadoop@dwprod-dataslave10 rm -r /opt/local/hive

# 2. Copy the new hive install
scp -r -q /opt/local/hive hadoop@dwprod-dataslave1:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave2:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave3:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave4:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave5:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave6:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave7:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave8:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave9:/opt/local/
scp -r -q /opt/local/hive hadoop@dwprod-dataslave10:/opt/local/
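
For larger clusters, the same job can be done with a loop. This is only a minimal sketch under the assumption that the slaves keep the dwprod-dataslaveN naming used above; NODE_COUNT and the paths are placeholders to adjust for your own cluster.

#!/bin/sh
# Loop-based variant of the sync above (sketch; assumes slaves are
# named dwprod-dataslave1 .. dwprod-dataslaveN).

NODE_COUNT=10            # number of data nodes (assumed)
HIVE_DIR=/opt/local/hive # hive install directory on the name node

for i in $(seq 1 "$NODE_COUNT"); do
    node="dwprod-dataslave${i}"
    echo "Syncing hive to ${node} ..."
    # remove the stale install, then copy the current one
    ssh -t hadoop@"${node}" rm -r "$HIVE_DIR"
    scp -r -q "$HIVE_DIR" hadoop@"${node}":/opt/local/
done

An alternative worth noting: rsync with --delete would only transfer changed files instead of re-copying the whole directory, but the scp approach above mirrors the original script.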
