Quickly Setting Up a Basic Hadoop, ZK, and HBase Cluster

1. Installing the ZK, Hadoop, and HBase Clusters

Planned layout: three VMs (Linux121, Linux122, Linux123) hosting Hadoop, MySQL, ZK, and HBASE.

1.1 Install VMware and Create the VM Cluster

1.1.1 Install VMware (VMware-workstation-full-15.5.5-16285975)

License key:

UY758-0RXEQ-M81WP-8ZM7Z-Y3HDA

1.1.2 Install CentOS 7

(CentOS installation screenshots omitted.)

During installation, set the root password to 123456.

1.1.3 Configure a Static IP


vi /etc/sysconfig/network-scripts/ifcfg-ens33

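The screenshot of the edited file is not recoverable; below is a minimal sketch of what ifcfg-ens33 typically looks like for this setup on linux121. The GATEWAY and DNS1 values are assumptions based on a default VMware NAT network (the .2 host); adjust them to your environment.

# /etc/sysconfig/network-scripts/ifcfg-ens33 (example sketch)
TYPE=Ethernet
NAME=ens33
DEVICE=ens33
BOOTPROTO=static          # static address instead of dhcp
ONBOOT=yes                # bring the interface up at boot
IPADDR=192.168.49.121     # use .122 / .123 on the other two nodes
NETMASK=255.255.255.0
GATEWAY=192.168.49.2      # assumption: default VMware NAT gateway
DNS1=192.168.49.2         # assumption: NAT gateway doubles as DNS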

:wq
systemctl restart network
ip addr


ping www.baidu.com

Take a snapshot.

Install the JDK:

mkdir -p /opt/lagou/software    # directory for installation packages
mkdir -p /opt/lagou/servers     # directory for installed software
rpm -qa | grep java

Remove the packages listed above:

sudo yum remove java-1.8.0-openjdk

Upload jdk-8u421-linux-x64.tar.gz:

chmod 755 jdk-8u421-linux-x64.tar.gz

Extract it into /opt/lagou/servers:

tar -zxvf jdk-8u421-linux-x64.tar.gz -C /opt/lagou/servers

cd /opt/lagou/servers
ll

Configure the environment:
vi /etc/profile

export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin
source /etc/profile
java -version
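If the environment variables are correct, java -version should report the newly installed JDK. The expected output is roughly the following (the exact build number is an assumption and may differ):

java version "1.8.0_421"
Java(TM) SE Runtime Environment (build 1.8.0_421-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.421-b09, mixed mode)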

1.1.4 Install Xmanager

Connect to 192.168.49.121:22

Password: 123456

1.1.5 Clone Two More VMs and Configure Them


vi /etc/sysconfig/network-scripts/ifcfg-ens33

(Screenshot omitted: on each clone, change IPADDR in ifcfg-ens33 to 192.168.49.122 and 192.168.49.123 respectively.)

systemctl restart network
ip addr
hostnamectl
hostnamectl set-hostname linux121   # linux122 / linux123 on the clones

Disable the firewall:
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld


Disable SELinux:
vi /etc/selinux/config

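The screenshot is missing, but this step sets SELinux to disabled in /etc/selinux/config:

# /etc/selinux/config
SELINUX=disabled

The change takes effect after a reboot.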

Passwordless SSH between the three machines:
vi /etc/hosts


192.168.49.121 linux121
192.168.49.122 linux122
192.168.49.123 linux123



Step 1: run ssh-keygen -t rsa on all three machines (linux121, linux122, linux123) to generate a public/private key pair:

ssh-keygen -t rsa

Step 2: on each of the three machines, copy the public key to every node:

ssh-copy-id linux121    # copies the public key to linux121
ssh-copy-id linux122    # copies the public key to linux122
ssh-copy-id linux123    # copies the public key to linux123

Step 3: on linux121, distribute the completed authorized_keys file:

scp /root/.ssh/authorized_keys linux121:$PWD
scp /root/.ssh/authorized_keys linux122:$PWD
scp /root/.ssh/authorized_keys linux123:$PWD
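A quick sanity check that passwordless login works (a sketch; run it from each of the three machines in turn):

for h in linux121 linux122 linux123; do ssh $h hostname; done

Each iteration should print the remote hostname without prompting for a password.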

Synchronize the clocks on all three machines:
sudo cp -a /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak

sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

sudo yum clean all
sudo yum makecache



sudo yum install ntpdate

ntpdate us.pool.ntp.org

crontab -e

*/1 * * * * /usr/sbin/ntpdate us.pool.ntp.org;
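The cron entry above resyncs the clock every minute. To confirm the three clocks agree, a one-liner such as the following (a sketch relying on the passwordless SSH set up earlier) can be run from any node:

for h in linux121 linux122 linux123; do ssh $h date; done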

Take a snapshot.

1.2 Install the ZK, Hadoop, and HBase Clusters, plus MySQL

1.2.1 Install the Hadoop Cluster

Create the directories under /opt (if they do not already exist):

mkdir -p /opt/lagou/software    # directory for installation packages
mkdir -p /opt/lagou/servers     # directory for installed software

Upload the Hadoop tarball hadoop-2.9.2.tar.gz to /opt/lagou/software. It can be downloaded from:

https://archive.apache.org/dist/hadoop/common/hadoop-2.9.2/


On the linux121 node:


tar -zxvf hadoop-2.9.2.tar.gz -C /opt/lagou/servers
ll /opt/lagou/servers/hadoop-2.9.2
yum install -y vim

Add environment variables:
vim /etc/profile
##HADOOP_HOME
export HADOOP_HOME=/opt/lagou/servers/hadoop-2.9.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
source /etc/profile
hadoop version

HDFS cluster configuration:
cd /opt/lagou/servers/hadoop-2.9.2/etc/hadoop
vim hadoop-env.sh

export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
vim core-site.xml

<!-- address of the HDFS NameNode -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://linux121:9000</value>
</property>
<!-- storage directory for files Hadoop generates at runtime -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/lagou/servers/hadoop-2.9.2/data/tmp</value>
</property>
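Note that these steps never edit hdfs-site.xml, so HDFS runs with its defaults (for example, a replication factor of 3). If you prefer to make that explicit, a minimal optional addition to hdfs-site.xml (an assumption, not part of the original steps) would be:

<!-- optional: pin the default block replication explicitly -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>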

vim slaves
 
linux121
linux122
linux123
vim mapred-env.sh

export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421

mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

<!-- run MapReduce on YARN -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

Also add the following to mapred-site.xml:

<!-- JobHistory server address -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>linux121:10020</value>
</property>
<!-- JobHistory web UI address -->
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>linux121:19888</value>
</property>
vim yarn-env.sh

export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421

vim yarn-site.xml

<!-- address of the YARN ResourceManager -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>linux123</value>
</property>
<!-- how reducers fetch data -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

Also add the following to yarn-site.xml:

<!-- enable log aggregation -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<!-- retain logs for 7 days -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value>
</property>
<property>
  <name>yarn.log.server.url</name>
  <value>http://linux121:19888/jobhistory/logs</value>
</property>
chown -R root:root /opt/lagou/servers/hadoop-2.9.2

Distribute the configuration with a small rsync wrapper script. On all three machines, install rsync first:
sudo yum install -y rsync

touch rsync-script
vim rsync-script

#!/bin/bash
# 1. get the number of arguments; exit immediately if none were given
paramnum=$#
if ((paramnum == 0)); then
  echo no params
  exit
fi
# 2. derive the file name from the argument
p1=$1
file_name=`basename $p1`
echo fname=$file_name
# 3. resolve the absolute path of the argument
pdir=`cd -P $(dirname $p1); pwd`
echo pdir=$pdir
# 4. current user name
user=`whoami`
# 5. rsync to every node in turn
for ((host=121; host<124; host++)); do
  echo ------------------- linux$host --------------
  rsync -rvl $pdir/$file_name $user@linux$host:$pdir
done

chmod 777 rsync-script
./rsync-script /home/root/bin
./rsync-script /opt/lagou/servers/hadoop-2.9.2
./rsync-script /opt/lagou/servers/jdk1.8.0_421
./rsync-script /etc/profile
After /etc/profile has been distributed, run source /etc/profile on each node so the variables take effect. Then format the NameNode on linux121:

hadoop namenode -format

ssh localhost    # verify passwordless SSH to localhost before the start scripts rely on it

Start the whole cluster. Stop anything left over first, then start HDFS from $HADOOP_HOME:

stop-dfs.sh
stop-yarn.sh

sbin/start-dfs.sh


If the DataNodes fail to come up, wipe the data directory, reformat, and start again:

sudo rm -rf /opt/lagou/servers/hadoop-2.9.2/data/tmp/*

hadoop namenode -format

sbin/start-dfs.sh
Note: the NameNode and ResourceManager live on different machines here, so do not start YARN on the NameNode; start it on the ResourceManager's machine (linux123):

sbin/start-yarn.sh

On linux121:

sbin/mr-jobhistory-daemon.sh start historyserver

Web UIs:

HDFS:

http://linux121:50070/dfshealth.html#tab-overview

Job history logs:

http://linux121:19888/jobhistory

To stop everything:

cd /opt/lagou/servers/hadoop-2.9.2

sbin/mr-jobhistory-daemon.sh stop historyserver

stop-yarn.sh

stop-dfs.sh

Test the cluster with a WordCount job:

hdfs dfs -mkdir /wcinput

cd /root/
touch wc.txt

vi wc.txt


hadoop mapreduce yarn
hdfs hadoop mapreduce
mapreduce yarn lagou
lagou
lagou

Save and quit:

:wq!

hdfs dfs -put wc.txt /wcinput


cd /opt/lagou/servers/hadoop-2.9.2
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /wcinput /wcoutput
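To verify the job, list the output directory and print the counts; part-r-00000 is the default reducer output file name:

hdfs dfs -ls /wcoutput
hdfs dfs -cat /wcoutput/part-r-00000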

1.2.2 Install the ZK Cluster

Upload and extract zookeeper-3.4.14.tar.gz:

tar -zxvf zookeeper-3.4.14.tar.gz -C ../servers/

Edit the configuration file and create the data and log directories:

# create the zk data directory
mkdir -p /opt/lagou/servers/zookeeper-3.4.14/data

# create the zk log directory
mkdir -p /opt/lagou/servers/zookeeper-3.4.14/data/logs

# edit the zk configuration
cd /opt/lagou/servers/zookeeper-3.4.14/conf

# rename the sample file
mv zoo_sample.cfg zoo.cfg


vim zoo.cfg

# update dataDir
dataDir=/opt/lagou/servers/zookeeper-3.4.14/data
# add dataLogDir
dataLogDir=/opt/lagou/servers/zookeeper-3.4.14/data/logs
# add the cluster configuration
# server.<server ID>=<server IP>:<peer communication port>:<leader election port>
server.1=linux121:2888:3888
server.2=linux122:2888:3888
server.3=linux123:2888:3888
# uncomment this: ZK can automatically purge old transaction logs and
# snapshots; this parameter sets the purge interval in hours
autopurge.purgeInterval=1

cd /opt/lagou/servers/zookeeper-3.4.14/data
echo 1 > myid
 
Distribute the installation and set myid on each node:

cd /opt/lagou/servers/hadoop-2.9.2/etc/hadoop

./rsync-script /opt/lagou/servers/zookeeper-3.4.14

Set the myid value on linux122:
echo 2 >/opt/lagou/servers/zookeeper-3.4.14/data/myid

Set the myid value on linux123:
echo 3 >/opt/lagou/servers/zookeeper-3.4.14/data/myid
 
Start the three ZK instances in turn (run the command on all three nodes):

/opt/lagou/servers/zookeeper-3.4.14/bin/zkServer.sh start

Check ZK status; in a healthy ensemble one node reports Mode: leader and the other two Mode: follower:

/opt/lagou/servers/zookeeper-3.4.14/bin/zkServer.sh status
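As an extra smoke test (a sketch; any of the three nodes will do), connect with the ZK CLI and list the root znode:

/opt/lagou/servers/zookeeper-3.4.14/bin/zkCli.sh -server linux121:2181
ls /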

Cluster start/stop script:

vim zk.sh


#!/bin/sh
echo "start zookeeper server..."
if (($# == 0)); then
  echo "no params"
  exit
fi
hosts="linux121 linux122 linux123"
for host in $hosts; do
  ssh $host "source /etc/profile; /opt/lagou/servers/zookeeper-3.4.14/bin/zkServer.sh $1"
done

chmod 777 zk.sh

./zk.sh start
./zk.sh stop
./zk.sh status




1.2.3 Install the HBase Cluster (start Hadoop and ZK before HBase)

Extract hbase-1.3.1-bin.tar.gz into the planned directory:

tar -zxvf hbase-1.3.1-bin.tar.gz -C /opt/lagou/servers

Edit the configuration files.

Make Hadoop's core-site.xml and hdfs-site.xml visible in HBase's conf directory by linking them in:

ln -s /opt/lagou/servers/hadoop-2.9.2/etc/hadoop/core-site.xml /opt/lagou/servers/hbase-1.3.1/conf/core-site.xml
ln -s /opt/lagou/servers/hadoop-2.9.2/etc/hadoop/hdfs-site.xml /opt/lagou/servers/hbase-1.3.1/conf/hdfs-site.xml

Edit the configuration files under conf:

cd /opt/lagou/servers/hbase-1.3.1/conf

vim hbase-env.sh

# add the Java environment variable
export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
# use the external ZK cluster rather than the one HBase manages itself
export HBASE_MANAGES_ZK=FALSE
 
vim hbase-site.xml


<configuration>
  <!-- HBase storage path on HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://linux121:9000/hbase</value>
  </property>
  <!-- run HBase in distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- ZK quorum addresses, comma-separated -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>linux121:2181,linux122:2181,linux123:2181</value>
  </property>
</configuration>

vim regionservers

linux121
linux122
linux123

vim backup-masters

linux122


vim /etc/profile

export HBASE_HOME=/opt/lagou/servers/hbase-1.3.1
export PATH=$PATH:$HBASE_HOME/bin

Distribute the HBase directory and environment variables to the other nodes:

cd /opt/lagou/servers/hadoop-2.9.2/etc/hadoop
./rsync-script /opt/lagou/servers/hbase-1.3.1
./rsync-script /etc/profile

Make the HBase environment variables take effect by running source /etc/profile on every node, then:

cd /opt/lagou/servers/hbase-1.3.1/bin

Starting and stopping the HBase cluster:
Prerequisite: the Hadoop and ZK clusters are already running.
Start HBase: start-hbase.sh
Stop HBase: stop-hbase.sh
HBase web management UI:
Once the cluster is up, browse to <HMaster hostname>:16010, i.e.:

linux121:16010
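Once the web UI is reachable, a quick smoke test from the HBase shell confirms reads and writes work end to end (a sketch; the table and column family names are arbitrary):

hbase shell
status                                   # servers, regions, average load
create 'smoke_test', 'cf'                # hypothetical table with one column family
put 'smoke_test', 'row1', 'cf:a', 'v1'   # write one cell
scan 'smoke_test'                        # read it back
disable 'smoke_test'
drop 'smoke_test'                        # clean up
exit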

1.2.4 Install MySQL

Remove the MySQL that ships with the system:

rpm -qa | grep mysql

rpm -e --nodeps mysql-libs-5.1.73-8.el6_8.x86_64

Install mysql-community-release-el6-5.noarch.rpm:

rpm -ivh mysql-community-release-el6-5.noarch.rpm

Install the MySQL server:

yum -y install mysql-community-server

Start the service:

service mysqld start

If you get service: command not found, install initscripts:

yum install initscripts

Configure the database. Set the root password:

/usr/bin/mysqladmin -u root password '123'
# log in to mysql
mysql -uroot -p123

# empty the mysql configuration file
>/etc/my.cnf

Then edit it:

vi /etc/my.cnf

[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
character-set-server=utf8

Restart, check the character set, and grant remote access:

service mysqld restart
mysql -uroot -p123
show variables like 'character_set_%';
# grant root both local and remote access
grant all privileges on *.* to 'root'@'%' identified by '123' with grant option;
# flush privileges (optional)
flush privileges;
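To confirm remote access works, connect from one of the other nodes (a sketch; it assumes MySQL was installed on linux121):

mysql -h 192.168.49.121 -uroot -p123 -e "select version();"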
Take a snapshot.

The missing images are no great loss; they were only illustrative screenshots, and every step is written out.

Very important:
If you later need to hook in Spark, Flink, or Hive, find version-compatible releases in advance; version matching is a chronic pain point of vanilla deployments.

If your servers have enough resources, consider deploying with CDH instead.
