Hadoop 0.20.205 + HBase 0.90.5 Fully Distributed Installation

HBase Cluster Setup

Hadoop version: hadoop-0.20.205.0
HBase version: hbase-0.90.5
 
(1)   Download HBase: hbase-0.90.5.tar.gz
(2)   Extract it: tar -zxf hbase-0.90.5.tar.gz. After extraction the HBase directory is /usr/local/hadoop/hbase-0.90.5
(3)   Edit /etc/profile and append HBASE_HOME=/usr/local/hadoop/hbase-0.90.5
and PATH=$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH at the end of the file.
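For reference, the appended block might look like the following (a sketch; it assumes HADOOP_HOME was already exported during the Hadoop installation):
export HADOOP_HOME=/usr/local/hadoop/hadoop-0.20.205.0
export HBASE_HOME=/usr/local/hadoop/hbase-0.90.5
export PATH=$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH
Then apply the change: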
[root@NameNode conf]# chmod +x /etc/profile
[root@NameNode conf]# source /etc/profile
[root@NameNode conf]# echo $HBASE_HOME
/usr/local/hadoop/hbase-0.90.5
[root@NameNode conf]#
(4)   Enter the $HBASE_HOME/conf directory.
Edit hbase-env.sh (vi hbase-env.sh) and set:
export JAVA_HOME=/usr/local/java/jdk1.6.0_41
export HBASE_CLASSPATH=$HBASE_CLASSPATH:/usr/local/hadoop/hadoop-0.20.205.0/conf
export HBASE_HOME=/usr/local/hadoop/hbase-0.90.5
export HBASE_MANAGES_ZK=true
export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop-0.20.205.0/conf
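Because HBASE_MANAGES_ZK=true, start-hbase.sh will also launch a ZooKeeper quorum peer on every host named in hbase.zookeeper.quorum (which is why HQuorumPeer appears in the jps output in step 8). It is also worth confirming that the JAVA_HOME set here actually exists on every node; a minimal check:
[root@NameNode conf]# ls /usr/local/java/jdk1.6.0_41/bin/java
/usr/local/java/jdk1.6.0_41/bin/java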
Edit hbase-site.xml (vi hbase-site.xml) and add:
<configuration>
    <property>
        <name>hbase.master</name>
        <value>NameNode:60000</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://NameNode:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>DataNode1,DataNode2,DataNode3</value>
    </property>
</configuration>
Note: the host in hbase.rootdir must be a hostname, not an IP address, and the host and port must match fs.default.name in Hadoop's core-site.xml.
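A quick way to verify this is to compare it with fs.default.name in Hadoop's core-site.xml; given the hbase.rootdir above, the value should come back as hdfs://NameNode:9000 (a sketch, assuming the conf directory referenced in hbase-env.sh):
[root@NameNode conf]# grep -A 1 fs.default.name /usr/local/hadoop/hadoop-0.20.205.0/conf/core-site.xml
        <name>fs.default.name</name>
        <value>hdfs://NameNode:9000</value>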
(5)   Edit regionservers (vi regionservers) and add the slave node hostnames (a quick resolution check follows the list):
DataNode1
DataNode2
DataNode3
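HBase is picky about hostname resolution, so it is worth confirming that every name listed here (and in hbase.zookeeper.quorum) resolves from the master; a minimal check:
[root@NameNode conf]# for h in DataNode1 DataNode2 DataNode3; do ping -c 1 $h > /dev/null && echo "$h resolves"; done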
 
 
(6)   cd /usr/local/hadoop/hbase-0.90.5/lib
[root@NameNode lib]# cp /usr/local/hadoop/hadoop-0.20.205.0/hadoop-ant-0.20.205.0.jar  ./
[root@NameNode lib]# cp /usr/local/hadoop/hadoop-0.20.205.0/hadoop-core-0.20.205.0.jar ./
[root@NameNode lib]# cp /usr/local/hadoop/hadoop-0.20.205.0/hadoop-tools-0.20.205.0.jar ./
[root@NameNode lib]# cp /usr/local/hadoop/hadoop-0.20.205.0/lib/commons-configuration-1.6.jar ./
[root@NameNode lib]# mv hadoop-core-0.20-append-r1056497.jar  hadoop-core-0.20-append-r1056497.sav
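After the copy and rename, the lib directory should carry the cluster's hadoop-core jar instead of the bundled append-branch jar; a quick check:
[root@NameNode lib]# ls hadoop-core*
hadoop-core-0.20-append-r1056497.sav  hadoop-core-0.20.205.0.jar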
 
(7)   Copy the HBase directory to the other nodes:
scp -r /usr/local/hadoop/hbase-0.90.5 root@DataNode1:/usr/local/hadoop
scp -r /usr/local/hadoop/hbase-0.90.5 root@DataNode2:/usr/local/hadoop
scp -r /usr/local/hadoop/hbase-0.90.5 root@DataNode3:/usr/local/hadoop

Change the owner of the HBase directory on each node:
chown -R hadoop:hadoop /usr/local/hadoop/hbase-0.90.5
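Assuming root SSH access to the slaves, the copy and ownership can be verified from the master with a small loop (a sketch):
for h in DataNode1 DataNode2 DataNode3; do
  ssh root@$h "ls -ld /usr/local/hadoop/hbase-0.90.5"
done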
(8)   Everything is now in place. Enter the $HBASE_HOME directory and run bin/start-hbase.sh to start HBase. Once it is up, run jps on the master; you should see something like:
[root@NameNode ~]# jps
25293 HMaster
4373 JobTracker
4087 NameNode
28261 Jps
4277 SecondaryNameNode
On the DataNode side:
[root@DataNode1 local]# jps
22532 DataNode
43372 HQuorumPeer
43447 HRegionServer
43543 Jps
22639 TaskTracker
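Another quick health check is the built-in web UIs; with HBase 0.90.x defaults the master info page listens on port 60010 and each region server on 60030 (hostnames as used above):
Master web UI:        http://NameNode:60010/
Region server web UI: http://DataNode1:60030/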
 
(9)   Run bin/hbase shell; once in the shell, run list to show the table names.
(10)  Test HBase, following http://hbase.apache.org/book/quickstart.html:
hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list 'test'
..
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds
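The rows can then be read back (and the test table cleaned up) non-interactively by piping commands into the shell; a sketch:
bin/hbase shell <<'EOF'
scan 'test'
get 'test', 'row1'
disable 'test'
drop 'test'
EOF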

 

 
