1. Building HBase on Hadoop

Prerequisite: a working Hadoop cluster is already set up.

1. Install ZooKeeper

ZooKeeper can be downloaded from:

https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin.tar.gz

After downloading, upload the tarball to /usr/local/software on the CentOS machine, extract it, and rename the extracted directory to zookeeper.
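The download/extract/rename steps can be sketched as below; /usr/local/software is this guide's install root, and the commands only run if that directory already exists on your node:

```shell
SOFTWARE_DIR=${SOFTWARE_DIR:-/usr/local/software}   # install root used throughout this guide
TARBALL=apache-zookeeper-3.5.8-bin.tar.gz
URL="https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.8/$TARBALL"
if [ -d "$SOFTWARE_DIR" ]; then
    cd "$SOFTWARE_DIR"
    [ -f "$TARBALL" ] || wget "$URL"          # or upload the tarball by hand
    tar -xzf "$TARBALL"
    mv apache-zookeeper-3.5.8-bin zookeeper   # rename as described above
fi
```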

Configure ZooKeeper

  First, create a data directory under the zookeeper directory.

  Then edit the configuration file under zookeeper's conf directory (conf/zoo.cfg) as follows:


# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/local/software/zookeeper/data
# the port at which the clients will connect
clientPort=2181
server.1=hbase1:2888:3888
server.2=hbase2:2888:3888
server.3=hbase3:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
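One step the config above implies but does not spell out: every node listed as `server.N` must have a `myid` file inside `dataDir` containing its own N, or the quorum will not form. A minimal sketch, using a scratch directory here; on the real nodes the path is /usr/local/software/zookeeper/data:

```shell
# Each quorum member reads dataDir/myid to learn which server.N it is.
DATA_DIR=$(mktemp -d)          # stand-in for /usr/local/software/zookeeper/data
echo 1 > "$DATA_DIR/myid"      # 1 on hbase1; write 2 on hbase2 and 3 on hbase3
cat "$DATA_DIR/myid"           # → 1
```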

Start ZooKeeper on every node.
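The start and status commands look like this (a sketch assuming the install path from above; run on each of hbase1, hbase2, hbase3):

```shell
ZK_HOME=${ZK_HOME:-/usr/local/software/zookeeper}
if [ -x "$ZK_HOME/bin/zkServer.sh" ]; then
    "$ZK_HOME/bin/zkServer.sh" start
    "$ZK_HOME/bin/zkServer.sh" status   # one node should report "leader", the others "follower"
else
    echo "zkServer.sh not found under $ZK_HOME" >&2
fi
```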

2. Configure HBase

Download HBase

My Hadoop version is 2.10.0; the matching HBase version is 2.3.3. Be sure to download the -bin tarball, otherwise you will run into errors.

https://mirror.bit.edu.cn/apache/hbase/2.3.3/hbase-2.3.3-bin.tar.gz
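As with ZooKeeper, the unpacking can be sketched as below (the directory is renamed to match the HBASE_HOME used later; the commands only run if the install root exists):

```shell
SOFTWARE_DIR=${SOFTWARE_DIR:-/usr/local/software}
HBASE_TARBALL=hbase-2.3.3-bin.tar.gz
if [ -d "$SOFTWARE_DIR" ]; then
    cd "$SOFTWARE_DIR"
    [ -f "$HBASE_TARBALL" ] || wget "https://mirror.bit.edu.cn/apache/hbase/2.3.3/$HBASE_TARBALL"
    tar -xzf "$HBASE_TARBALL"      # extracts to hbase-2.3.3/
    mv hbase-2.3.3 hbase           # so that HBASE_HOME=/usr/local/software/hbase
fi
```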

Configure hbase-site.xml and hbase-env.sh as follows:

<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://hbase1:9000/hbase</value>
        </property>
        <!-- true for fully distributed mode; false for standalone or pseudo-distributed -->
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.master</name>
                <value>hbase1:60000</value>
        </property>
        <!-- Default port for ZooKeeper client connections. My ZooKeeper is not managed by HBase, so I add this property by hand. -->
        <property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
                <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>hbase1,hbase2,hbase3</value>
        </property>
        <!-- ZooKeeper data directory; if left unset, HBase defaults to a directory under /tmp -->
        <property>
                <name>hbase.zookeeper.property.dataDir</name>
                <value>/usr/local/software/zookeeper/data</value>
                <description>Property from ZooKeeper's config zoo.cfg. The directory where the snapshot is stored.</description>
        </property>
        <!-- Port for the HBase master web UI -->
        <property>
                <name>hbase.master.info.port</name>
                <value>60010</value>
        </property>
</configuration>
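hbase-site.xml alone does not tell HBase which hosts should run region servers; that list lives in conf/regionservers, one hostname per line. A sketch, assuming all three hosts run a region server (adjust to your topology):

```shell
HBASE_CONF=${HBASE_CONF:-/usr/local/software/hbase/conf}
if [ -d "$HBASE_CONF" ]; then
    # One hostname per line, just like Hadoop's workers/slaves file
    printf '%s\n' hbase1 hbase2 hbase3 > "$HBASE_CONF/regionservers"
    cat "$HBASE_CONF/regionservers"
fi
```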
And the tail of hbase-env.sh; the uncommented exports at the bottom are the lines I changed:
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8074"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of ZooKeeper or not.
# export HBASE_MANAGES_ZK=true

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.

# Tell HBase whether it should include Hadoop's lib when start up,
# the default value is false,means that includes Hadoop's lib.
# export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"

# Override text processing tools for use by these launch scripts.
# export GREP="${GREP-grep}"
# export SED="${SED-sed}"
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.272.b10-1.el7_9.x86_64/jre"
# ZooKeeper is installed and started by hand above, so HBase must not manage it
export HBASE_MANAGES_ZK="false"
export HADOOP_HOME="/usr/local/software/hadoop"
export HBASE_HOME="/usr/local/software/hbase"
"/usr/local/software/hbase/conf/hbase-env.sh" 147L, 7883C        

Finally, start HBase.
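A sketch of the start-up plus a quick sanity check, using the paths configured above:

```shell
HBASE_HOME=${HBASE_HOME:-/usr/local/software/hbase}
if [ -x "$HBASE_HOME/bin/start-hbase.sh" ]; then
    "$HBASE_HOME/bin/start-hbase.sh"   # run on the master node (hbase1)
    jps    # expect HMaster here and HRegionServer on the worker nodes
else
    echo "start-hbase.sh not found under $HBASE_HOME" >&2
fi
```

If everything came up, the master web UI should be reachable at http://hbase1:60010, the port set in hbase-site.xml above.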

posted on 2020-12-08 10:48  清浊