Hadoop HA (QJM) Architecture Deployment

 

Our company's old Hadoop cluster had a single point of failure at the NameNode. I recently studied the Hadoop high-availability deployment described at http://www.binospace.com/index.php/hdfs-ha-quorum-journal-manager/ and learned a lot from it, so I put together a deployment layout that matches our own cluster and am sharing it here for anyone who needs to run Hadoop.

 

The deployment process is written up below for reference; hopefully it saves you a few detours.

1. Installation Preparation

Operating system: CentOS 6.2
7 virtual machines:

192.168.10.138  yum-test.h.com      # mirrors the latest stable yum packages from Cloudera locally
192.168.10.134  namenode.h.com
192.168.10.139  snamenode.h.com
192.168.10.135  datanode1.h.com
192.168.10.140  datanode2.h.com
192.168.10.141  datanode3.h.com
192.168.10.142  datanode4.h.com

Add the hostname/IP mappings above to /etc/hosts on all seven hosts.
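A minimal sketch of doing this from one host (assumes the root account can already SSH to the other machines):

cat > /tmp/hosts.add <<'EOF'
192.168.10.138  yum-test.h.com
192.168.10.134  namenode.h.com
192.168.10.139  snamenode.h.com
192.168.10.135  datanode1.h.com
192.168.10.140  datanode2.h.com
192.168.10.141  datanode3.h.com
192.168.10.142  datanode4.h.com
EOF

# append the same entries on every host (including this one)
for ip in 138 134 139 135 140 141 142; do
    ssh 192.168.10.$ip 'cat >> /etc/hosts' < /tmp/hosts.add
done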

2. Installation

Install the following packages on the master NameNode (namenode.h.com):

yum install hadoop-yarn hadoop-mapreduce hadoop-hdfs-zkfc hadoop-hdfs-journalnode impala-lzo* hadoop-hdfs-namenode impala-state-store impala-catalog hive-metastore -y

On the standby NameNode (snamenode.h.com), install the following packages:

yum install hadoop-yarn hadoop-yarn-resourcemanager hadoop-hdfs-namenode hadoop-hdfs-zkfc hadoop-hdfs-journalnode hadoop-mapreduce hadoop-mapreduce-historyserver -y

Install on the datanode cluster (4 nodes), hereafter referred to as the dn nodes:

yum install zookeeper zookeeper-server hive-hbase hbase-master  hbase  hbase-regionserver  impala impala-server impala-shell impala-lzo* hadoop-hdfs hadoop-hdfs-datanode  hive hive-server2 hive-jdbc  hadoop-yarn hadoop-yarn-nodemanager -y 

 

3. Service Configuration

On the nn node:
cd /etc/hadoop/conf/

vim core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.h.com:8020/</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.h.com:8020/</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>namenode.h.com,datanode01.h.com,datanode02.h.com,datanode03.h.com,datanode04.h.com</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>14400</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>65536</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
</configuration>

cat hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/dfs/nn</value>
  </property>

  <!-- hadoop-datanode -->
  <!--
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data1/dfs/dn,/data2/dfs/dn,/data3/dfs/dn,/data4/dfs/dn,/data5/dfs/dn,/data6/dfs/dn,/data7/dfs/dn</value>
  </property>
  -->

  <!-- hadoop HA -->
  <property>
    <name>dfs.nameservices</name>
    <value>wqkcluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.wqkcluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.wqkcluster.nn1</name>
    <value>namenode.h.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.wqkcluster.nn2</name>
    <value>snamenode.h.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.wqkcluster.nn1</name>
    <value>namenode.h.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.wqkcluster.nn2</name>
    <value>snamenode.h.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://namenode.h.com:8485;snamenode.h.com:8485;datanode01.h.com:8485/wqkcluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/dfs/jn</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.wqkcluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence(hdfs)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/var/lib/hadoop-hdfs/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.https.port</name>
    <value>50470</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8192</value>
  </property>
  <property>
    <name>fs.permissions.umask-mode</name>
    <value>022</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hadoop-hdfs/dn._PORT</value>
  </property>
  <property>
    <name>dfs.client.file-block-storage-locations.timeout</name>
    <value>10000</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.domain.socket.data.traffic</name>
    <value>false</value>
  </property>
</configuration>

cat yarn-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>snamenode.h.com:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>snamenode.h.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>snamenode.h.com:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>snamenode.h.com:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>snamenode.h.com:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data1/yarn/local,/data2/yarn/local,/data3/yarn/local,/data4/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data1/yarn/logs,/data2/yarn/logs,/data3/yarn/logs,/data4/yarn/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/yarn/apps</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/*,
$HADOOP_COMMON_HOME/lib/*,
$HADOOP_HDFS_HOME/*,
$HADOOP_HDFS_HOME/lib/*,
$HADOOP_MAPRED_HOME/*,
$HADOOP_MAPRED_HOME/lib/*,
$YARN_HOME/*,
$YARN_HOME/lib/*</value>
  </property>
  <!--
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler</value>
  </property>
  -->
  <property>
    <name>yarn.resourcemanager.max-completed-applications</name>
    <value>10000</value>
  </property>
</configuration>

 

Helper scripts used during configuration to run a command on, or push files to, the other nodes:

cat /root/cmd.sh
#!/bin/sh
# run the given command on every node
for ip in 134 139 135 140 141 142;do
    echo "==============="$ip"==============="
    ssh 10.168.35.$ip "$1"
done

cat /root/syn.sh
#!/bin/sh
# copy a local file or directory ($1) to the given destination path ($2) on every node
for ip in 127 120 121 122 123;do
    scp -r "$1" 10.168.35.$ip:"$2"
done
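Usage matches how the scripts are invoked throughout the rest of this post, for example:

sh /root/cmd.sh 'hostname'                       # run a command on every node
sh /root/syn.sh /etc/hadoop/conf /etc/hadoop/    # push the Hadoop conf directory to every node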

 

 

The journalnodes are deployed on namenode, snamenode and datanode1. Create the edits directory on those three nodes:

namenode:
    mkdir -p /data/dfs/jn ; chown -R hdfs:hdfs /data/dfs/jn
snamenode:
    mkdir -p /data/dfs/jn ; chown -R hdfs:hdfs /data/dfs/jn
dn1:
    mkdir -p /data/dfs/jn ; chown -R hdfs:hdfs /data/dfs/jn

Start the three journalnodes:
/root/cmd.sh "for x in `ls /etc/init.d/|grep hadoop-hdfs-journalnode` ; do service $x start ; done"
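A quick check that all three journalnodes came up (run from any host with SSH access to them):

for h in namenode.h.com snamenode.h.com datanode1.h.com; do
    ssh $h 'service hadoop-hdfs-journalnode status'
done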

 

Format the cluster's HDFS storage (on the primary NameNode):

Create the directory on namenode and set ownership:

mkdir -p /data/dfs/nn ; chown hdfs.hdfs /data/dfs/nn -R
sudo -u hdfs hdfs namenode -format; /etc/init.d/hadoop-hdfs-namenode start

On snamenode (the standby):

mkdir -p /data/dfs/nn ; chown hdfs.hdfs /data/dfs/nn -R
ssh snamenode 'sudo -u hdfs hdfs namenode -bootstrapStandby ; sudo service hadoop-hdfs-namenode start'

Create directories and set ownership on the datanodes:

hdfs:
mkdir -p /data{1,2}/dfs ; chown hdfs.hdfs /data{1,2}/dfs -R

yarn:
mkdir -p /data{1,2}/yarn; chown yarn.yarn /data{1,2}/yarn -R

Set up passwordless SSH between the hdfs users on namenode and snamenode (this is what the sshfence fencing method configured above relies on):

namenode:

#passwd hdfs
#su - hdfs
$ ssh-keygen
$ ssh-copy-id snamenode

snamenode:

#passwd hdfs
#su - hdfs
$ ssh-keygen
$ ssh-copy-id namenode
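A quick check that the hdfs user can now log in to the peer without a password, in both directions:

# on namenode
su - hdfs -c 'ssh snamenode hostname'
# on snamenode
su - hdfs -c 'ssh namenode hostname'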

Install hadoop-hdfs-zkfc on both NameNodes. Note that the ZooKeeper ensemble listed in ha.zookeeper.quorum must already be running before the -formatZK step; ZooKeeper installation is covered further below.

yum install hadoop-hdfs-zkfc
hdfs zkfc -formatZK        # run the -formatZK step on one of the NameNodes only
service hadoop-hdfs-zkfc start
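Once the ZooKeeper ensemble is up, the HA znode created by -formatZK can be checked with:

zookeeper-client -server namenode.h.com:2181 ls /hadoop-ha
# expected output includes the nameservice, e.g. [wqkcluster]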

Test a manual failover:

sudo -u hdfs hdfs haadmin -failover nn1 nn2

Check the state of each NameNode:

sudo -u hdfs hdfs haadmin -getServiceState nn2
sudo -u hdfs hdfs haadmin -getServiceState nn1

 

Configure and start YARN

Create the required directories in HDFS:

sudo -u hdfs hadoop fs -mkdir -p /yarn/apps
sudo -u hdfs hadoop fs -chown yarn:mapred /yarn/apps
sudo -u hdfs hadoop fs -chmod -R 1777 /yarn/apps
sudo -u hdfs hadoop fs -mkdir /user
sudo -u hdfs hadoop fs -chmod 777 /user
sudo -u hdfs hadoop fs -mkdir -p /user/history
sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
sudo -u hdfs hadoop fs -chown mapred:hadoop /user/history

Start the MapReduce history server (on snamenode):

sh /root/cmd.sh ' for x in `ls /etc/init.d/|grep hadoop-mapreduce-historyserver` ; do service $x start ; done'

 

Create a home directory for each MapReduce user, e.g. the hive user or the current user:

sudo -u hdfs hadoop fs -mkdir /user/$USER
sudo -u hdfs hadoop fs -chown $USER /user/$USER
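For example, to create home directories for several users in one pass (the user list here is illustrative):

for u in hive hdfs root; do
    sudo -u hdfs hadoop fs -mkdir -p /user/$u
    sudo -u hdfs hadoop fs -chown $u /user/$u
done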

 





Start YARN on every node:
sh /root/cmd.sh ' for x in `ls /etc/init.d/|grep hadoop-yarn` ; do service $x start ; done'

 



Check whether YARN started successfully:
sh /root/cmd.sh ' for x in `ls /etc/init.d/|grep hadoop-yarn` ; do service $x status ; done'

 



Test YARN:
sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomwriter out

 

Install Hive (on namenode)

sh /root/cmd.sh 'yum install hive hive-hbase hive-server hive-server2 hive-jdbc -y'

Note: most of these packages may already have been installed by the yum commands above; double-check first.

 



Download the MySQL JDBC jar and create a symlink into the Hive lib directory:
ln -s /usr/share/java/mysql-connector-java-5.1.25-bin.jar /usr/lib/hive/lib/mysql-connector-java.jar
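If the jar is not already present under /usr/share/java, it is typically provided by the mysql-connector-java package (the exact version and file name may differ from the path above):

yum install mysql-connector-java -y
ls /usr/share/java/mysql-connector-java*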

 



Create the metastore database and user:
mysql -e "
    CREATE DATABASE metastore;
    USE metastore;
    SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-0.10.0.mysql.sql;
    CREATE USER 'hiveuser'@'%' IDENTIFIED BY 'redhat';
    CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY 'redhat';
    CREATE USER 'hiveuser'@'bj03-bi-pro-hdpnameNN' IDENTIFIED BY 'redhat';
    REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hiveuser'@'%';
    REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hiveuser'@'localhost';
    REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hiveuser'@'bj03-bi-pro-hdpnameNN';
    GRANT SELECT,INSERT,UPDATE,DELETE,LOCK TABLES,EXECUTE ON metastore.* TO 'hiveuser'@'%';
    GRANT SELECT,INSERT,UPDATE,DELETE,LOCK TABLES,EXECUTE ON metastore.* TO 'hiveuser'@'localhost';
    GRANT SELECT,INSERT,UPDATE,DELETE,LOCK TABLES,EXECUTE ON metastore.* TO 'hiveuser'@'bj03-bi-pro-hdpnameNN';
    FLUSH PRIVILEGES;
"
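A quick sanity check that the hiveuser account and the metastore schema are in place (credentials as created above):

mysql -u hiveuser -predhat -e 'SHOW TABLES;' metastore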

 


Edit the Hive configuration file:
cat /etc/hive/conf/hive-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://namenode.h.com:3306/metastore?useUnicode=true&amp;characterEncoding=UTF-8</value>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>redhat</value>
    </property>

    <property>
        <name>datanucleus.autoCreateSchema</name>
        <value>false</value>
    </property>

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>hive.metastore.local</name>
        <value>false</value>
    </property>

    <property>
        <name>hive.files.umask.value</name>
        <value>0002</value>
    </property>

    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://namenode.h.com:9083</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode.h.com:8031</value>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>

    <property>
        <name>hive.metastore.cache.pinobjtypes</name>
        <value>Table,Database,Type,FieldSchema,Order</value>
    </property>
</configuration>

 




Create the Hive warehouse directories in HDFS and set permissions:

sudo -u hdfs hadoop fs -mkdir /user/hive
sudo -u hdfs hadoop fs -chown hive /user/hive
sudo -u hdfs hadoop fs -mkdir /user/hive/warehouse
sudo -u hdfs hadoop fs -chmod 1777 /user/hive/warehouse
sudo -u hdfs hadoop fs -chown hive /user/hive/warehouse

Start the metastore:

service hive-metastore start    
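With the metastore up, a simple smoke test from the Hive CLI (the table name is illustrative):

hive -e 'SHOW DATABASES;'
hive -e 'CREATE TABLE IF NOT EXISTS smoke_test (id INT); SHOW TABLES; DROP TABLE smoke_test;'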

Install ZooKeeper (on namenode and the 4 dn nodes):

sh /root/cmd.sh 'yum install zookeeper*  -y'

Edit zoo.cfg and add the following entries:

server.1=namenode.h.com:2888:3888
server.2=datanode01.h.com:2888:3888
server.3=datanode02.h.com:2888:3888
server.4=datanode03.h.com:2888:3888
server.5=datanode04.h.com:2888:3888
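For reference, a minimal full zoo.cfg sketch; dataDir=/data/zookeeper is an assumption based on the /data/zookeeper/myid check below, and the other values are common defaults:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=namenode.h.com:2888:3888
server.2=datanode01.h.com:2888:3888
server.3=datanode02.h.com:2888:3888
server.4=datanode03.h.com:2888:3888
server.5=datanode04.h.com:2888:3888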

Sync the configuration file to the other nodes:

sh /root/syn.sh /etc/zookeeper/conf  /etc/zookeeper/

Initialize and start ZooKeeper on each node; the --myid value must match the server number in zoo.cfg:

sh /root/cmd.sh 'mkdir -p /data/zookeeper; chown -R zookeeper:zookeeper /data/zookeeper ; rm -rf /data/zookeeper/*'
ssh 192.168.10.134 'service zookeeper-server init --myid=1'
ssh 192.168.10.135 'service zookeeper-server init --myid=2'
ssh 192.168.10.140 'service zookeeper-server init --myid=3'
ssh 192.168.10.141 'service zookeeper-server init --myid=4'
ssh 192.168.10.142 'service zookeeper-server init --myid=5'

Check that the initialization succeeded:

sh /root/cmd.sh 'cat /data/zookeeper/myid'

Start ZooKeeper:

sh /root/cmd.sh 'service zookeeper-server start'

Test that it is up with:

zookeeper-client -server namenode.h.com:2181

Install HBase (deployed on the 4 dn nodes)

Set up clock synchronization:

sh /root/cmd.sh 'yum install ntpdate -y; ntpdate pool.ntp.org'

sh /root/cmd.sh 'ntpdate pool.ntp.org'

Set up a crontab entry:

sh /root/cmd.sh 'echo "* 3 * * * ntpdate pool.ntp.org" > /var/spool/cron/root'

Install HBase on the 4 data nodes (note: already covered by the yum command above).

Create the /hbase directory in HDFS:

sudo -u hdfs hadoop fs -mkdir /hbase;sudo -u hdfs hadoop fs -chown hbase:hbase /hbase

Edit the HBase configuration file, /etc/hbase/conf/hbase-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://wqkcluster/hbase</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.hregion.max.filesize</name>
        <value>3758096384</value>
    </property>
    <property>
        <name>hbase.hregion.memstore.flush.size</name>
        <value>67108864</value>
    </property>
    <property>
        <name>hbase.security.authentication</name>
        <value>simple</value>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>180000</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>datanode01.h.com,datanode02.h.com,datanode03.h.com,datanode04.h.com,namenode.h.com</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>

    <property>
        <name>hbase.hregion.memstore.mslab.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.regions.slop</name>
        <value>0</value>
    </property>
    <property>
        <name>hbase.regionserver.handler.count</name>
        <value>20</value>
    </property>
    <property>
        <name>hbase.regionserver.lease.period</name>
        <value>600000</value>
    </property>
    <property>
        <name>hbase.client.pause</name>
        <value>20</value>
    </property>
    <property>
        <name>hbase.ipc.client.tcpnodelay</name>
        <value>true</value>
    </property>
    <property>
        <name>ipc.ping.interval</name>
        <value>3000</value>
    </property>
    <property>
        <name>hbase.client.retries.number</name>
        <value>4</value>
    </property>
    <property>
        <name>hbase.rpc.timeout</name>
        <value>60000</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.maxClientCnxns</name>
        <value>2000</value>
    </property>
</configuration>

Sync the configuration to the four dn nodes:
 sh /root/syn.sh /etc/hbase/conf /etc/hbase/

Create the local data directory:

sh /root/cmd.sh 'mkdir /data/hbase ; chown -R hbase:hbase /data/hbase/'

Start HBase:

sh /root/cmd.sh ' for x in `ls /etc/init.d/|grep hbase` ; do service $x start ; done'

Check that it started successfully:

sh /root/cmd.sh ' for x in `ls /etc/init.d/|grep hbase` ; do service $x status ; done'  
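A quick smoke test from the HBase shell (the table and column family names are illustrative):

hbase shell <<'EOF'
status
create 'smoke_test', 'cf'
list
disable 'smoke_test'
drop 'smoke_test'
EOF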

Install Impala (on namenode and the 4 dn nodes)

Install impala-state-store and impala-catalog on namenode; this was already done by the yum command above.

Install impala, impala-server, impala-shell and impala-udf-devel on the 4 dn nodes; the procedure is the same as the yum installs above.

Copy the MySQL JDBC jar into the Impala lib directory and distribute it to the four dn nodes:

sh /root/syn.sh /usr/lib/hive/lib/mysql-connector-java.jar  /usr/lib/impala/lib/

Create /var/run/hadoop-hdfs on every node:

sh /root/cmd.sh 'mkdir -p /var/run/hadoop-hdfs'

Copy the Hive and HDFS configuration files into the Impala conf directory and distribute them to the 4 dn nodes:

cp /etc/hive/conf/hive-site.xml /etc/impala/conf/
cp /etc/hadoop/conf/hdfs-site.xml /etc/impala/conf/
cp /etc/hadoop/conf/core-site.xml /etc/impala/conf/

sh /root/syn.sh /etc/impala/conf /etc/impala/


Edit /etc/default/impala, then sync it to the Impala nodes:

IMPALA_CATALOG_SERVICE_HOST=bj03-bi-pro-hdpnameNN
IMPALA_STATE_STORE_HOST=bj03-bi-pro-hdpnameNN
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala

sh /root/syn.sh /etc/default/impala /etc/default/

Start Impala:

sh /root/cmd.sh ' for x in `ls /etc/init.d/|grep impala` ; do service $x start ; done'

Check that it started successfully:

sh /root/cmd.sh ' for x in `ls /etc/init.d/|grep impala` ; do service $x status ; done'
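A quick check from impala-shell; the target can be any dn node running impala-server (datanode1.h.com here is just an example):

impala-shell -i datanode1.h.com -q 'SHOW DATABASES;'
# after creating tables through Hive, refresh Impala's view of the metastore:
impala-shell -i datanode1.h.com -q 'INVALIDATE METADATA;'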

4. Testing

HDFS service status check:

sudo -u hdfs hadoop dfsadmin -report

HDFS file upload/download test:

sudo -u hdfs hadoop fs -put test.txt /tmp/

MapReduce job test:

hadoop jar \
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
wordcount \
-files hdfs:///tmp/text.txt \
/test/input \
/test/output

YARN test:

sudo -u hdfs hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomwriter out

NameNode failover test. Perform a manual failover and check the resulting states (an automatic-failover check follows below):

sudo -u hdfs hdfs haadmin -failover nn1 nn2

[root@snamenode ~]# sudo -u hdfs hdfs haadmin -getServiceState nn1
active

[root@snamenode ~]# sudo -u hdfs hdfs haadmin -getServiceState nn2
standby
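To exercise automatic failover, stop the NameNode that is currently active and let ZKFC promote the other one; this sketch assumes nn1 (namenode.h.com) is currently active, as in the output above:

# on namenode.h.com (currently active, nn1)
service hadoop-hdfs-namenode stop

# from either node: the standby should be promoted within a few seconds
sudo -u hdfs hdfs haadmin -getServiceState nn2

# bring the stopped NameNode back; it rejoins as standby
service hadoop-hdfs-namenode start
sudo -u hdfs hdfs haadmin -getServiceState nn1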

 

posted @ 2015-06-19 22:30  shantuwqk