Command-distribution tools for a cluster of Linux virtual machines.

The purpose of the deploy.sh tool is to send a file (or directory) to a group of other servers.

The purpose of the runRemoteCmd.sh tool is to run a single command on multiple servers.

deploy.conf is the configuration file shared by the two tools above.



 

Source of deploy.sh:

#!/bin/bash
#set -x

if [ $# -lt 3 ]
then
  echo "Usage: ./deploy.sh srcFile(or Dir) destFile(or Dir) MachineTag"
  echo "Usage: ./deploy.sh srcFile(or Dir) destFile(or Dir) MachineTag confFile"
  exit 1
fi

src=$1
dest=$2
tag=$3
if [ -z "$4" ]
then
  confFile=/home/hadoop/tools/deploy.conf
else
  confFile=$4
fi

if [ -f "$confFile" ]
then
  if [ -f "$src" ]
  then
    for server in $(grep -v '^#' "$confFile" | grep ",$tag," | awk -F',' '{print $1}')
    do
       scp "$src" "$server:$dest"
    done
  elif [ -d "$src" ]
  then
    for server in $(grep -v '^#' "$confFile" | grep ",$tag," | awk -F',' '{print $1}')
    do
       scp -r "$src" "$server:$dest"
    done
  else
    echo "Error: source file $src does not exist"
  fi
else
  echo "Error: please specify a config file, or keep deploy.conf at its default location"
fi
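The optional last argument selects an alternate config file, falling back to a fixed default path when it is omitted. The same default-or-override logic can be written in one line with bash's `${parameter:-default}` expansion; a standalone sketch (the `pick_conf` helper is illustrative, not part of the script):

```shell
#!/bin/bash
# deploy.sh-style default: use the argument if given, otherwise fall
# back to the standard config path. ${1:-default} yields the default
# when $1 is unset or empty.
pick_conf() {
  echo "${1:-/home/hadoop/tools/deploy.conf}"
}

pick_conf                   # -> /home/hadoop/tools/deploy.conf
pick_conf /tmp/other.conf   # -> /tmp/other.conf
```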

  

 

Source of runRemoteCmd.sh:

#!/bin/bash
#set -x

if [ $# -lt 2 ]
then
  echo "Usage: ./runRemoteCmd.sh Command MachineTag"
  echo "Usage: ./runRemoteCmd.sh Command MachineTag confFile"
  exit 1
fi

cmd=$1
tag=$2
if [ -z "$3" ]
then
  confFile=/home/hadoop/tools/deploy.conf
else
  confFile=$3
fi

if [ -f "$confFile" ]
then
    for server in $(grep -v '^#' "$confFile" | grep ",$tag," | awk -F',' '{print $1}')
    do
       echo "*******************$server***************************"
       ssh "$server" "source /etc/profile; $cmd"
    done
else
  echo "Error: please specify a config file, or keep deploy.conf at its default location"
fi
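Note that the whole remote command must arrive in `$1`, so it has to be quoted on the command line; unquoted, the shell splits it and only the first word reaches `cmd`. A quick local illustration (the `show_args` helper is hypothetical, used only to expose the positional parameters):

```shell
#!/bin/bash
# Shows why the command passed to runRemoteCmd.sh must be quoted:
# without quotes, $1 only receives the first word of the command.
show_args() {
  echo "cmd=$1 tag=$2 argc=$#"
}

show_args zkServer.sh start zookeeper     # unquoted: cmd=zkServer.sh tag=start argc=3
show_args "zkServer.sh start" zookeeper   # quoted:   cmd=zkServer.sh start tag=zookeeper argc=2
```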

  

 

The deploy.conf file:

#Column 1 is the Linux hostname; the remaining columns are the roles that server provides.
namenode1,all,namenode,resourcemanager,
namenode2,all,namenode,resourcemanager,
datanode1,all,datanode,zookeeper,journalnode,resourcemanager,
datanode2,all,datanode,zookeeper,journalnode,resourcemanager,
datanode3,all,datanode,zookeeper,journalnode,resourcemanager,
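Both tools select target hosts by grepping for the tag wrapped in commas; the trailing comma on every line of deploy.conf is what lets the last role on a line still match. A self-contained sketch of that lookup, with the conf contents inlined into a temp file for illustration:

```shell
#!/bin/bash
# Reproduce the host-selection pipeline from deploy.sh/runRemoteCmd.sh
# against an inline copy of deploy.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#Column 1 is the Linux hostname; the remaining columns are the roles that server provides.
namenode1,all,namenode,resourcemanager,
namenode2,all,namenode,resourcemanager,
datanode1,all,datanode,zookeeper,journalnode,resourcemanager,
datanode2,all,datanode,zookeeper,journalnode,resourcemanager,
datanode3,all,datanode,zookeeper,journalnode,resourcemanager,
EOF

tag=zookeeper
# Skip comment lines, keep lines containing ",zookeeper,", print field 1.
hosts=$(grep -v '^#' "$conf" | grep ",$tag," | awk -F',' '{print $1}')
echo $hosts   # -> datanode1 datanode2 datanode3
rm -f "$conf"
```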

  

[hadoop@namenode2 conf]$ type cd deploy.sh
cd is a shell builtin
deploy.sh is /home/hadoop/tools/deploy.sh

 

 

Concrete examples

 

Quick start for each component (each component's environment variables must be configured first; otherwise you have to type full paths).
First add the environment variables, and install the two tools, deploy.sh and runRemoteCmd.sh.

Send the .bashrc file to the home directory of every server:
[hadoop@datanode1 ~]$ deploy.sh .bashrc ~ all
Note: distributing files and running remote commands both require passwordless SSH to be set up first.
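A minimal sketch of the passwordless-SSH prerequisite, assuming the hadoop user and the hostnames from deploy.conf. The helper only prints the commands to run, so the sketch is runnable anywhere; execute them for real on the cluster:

```shell
#!/bin/bash
# Print the passwordless-SSH setup steps for a list of servers:
# generate one key pair, then push the public key to each host.
ssh_setup_plan() {
  echo "ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
  for server in "$@"; do
    echo "ssh-copy-id hadoop@$server"
  done
}

ssh_setup_plan namenode1 namenode2 datanode1 datanode2 datanode3
```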

Start zookeeper on the three servers datanode1, datanode2, and datanode3:
[hadoop@namenode1 ~]$ runRemoteCmd.sh "zkServer.sh start" zookeeper

Start HDFS:
[hadoop@namenode1 ~]$ start-dfs.sh
Check the status after startup:
[hadoop@namenode2 ~]$ hdfs haadmin -getServiceState nn1
[hadoop@namenode2 ~]$ hdfs dfsadmin -safemode get


Start HBase:
[hadoop@namenode1 ~]$ start-hbase.sh

 

 

Here is what it looks like to start zookeeper on three servers from a single server:

[hadoop@datanode1 tmp]$ runRemoteCmd.sh "zkServer.sh start" zookeeper
*******************datanode1***************************
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.12/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
*******************datanode2***************************
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.12/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
*******************datanode3***************************
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.12/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@datanode1 tmp]$

 

posted on 2019-07-13 20:26 by 坚守梦想