Hadoop Cluster Setup

Role      IP
master    192.168.10.10
slave-1   192.168.10.11
slave-2   192.168.10.12

Install the JDK (identical on all three machines)

# tar xf jdk-8u161-linux-x64.tar.gz -C /usr/local/
# mv /usr/local/{jdk1.8.0_161,jdk}
#vim /etc/profile.d/jdk.sh
export JAVA_HOME=/usr/local/jdk
 
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 
export PATH=$JAVA_HOME/bin:$PATH
# exec bash    # reload the shell so the new variables take effect
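To pick up the profile without starting a new shell, the snippet can also be sourced directly. A minimal sketch, written to /tmp here to stay side-effect free (on a real node the file lives at /etc/profile.d/jdk.sh):

```shell
# Write the same profile snippet to a scratch path and source it
# (real path on a node: /etc/profile.d/jdk.sh)
cat > /tmp/jdk.sh <<'EOF'
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
EOF
. /tmp/jdk.sh
echo "$JAVA_HOME"   # prints /usr/local/jdk
```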

Configure local name resolution in /etc/hosts (identical on all three machines)

# vim /etc/hosts
 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10 hadoop-1 masters
192.168.10.11 hadoop-2 slaves-1
192.168.10.12 hadoop-3 slaves-2
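The three cluster entries can also be appended with a small idempotent loop, so rerunning the step never duplicates lines. A sketch against a scratch copy (a real run would target /etc/hosts):

```shell
# Append each cluster entry only if its hostname is not already present
# (scratch file here; on a real node this would be /etc/hosts)
HOSTS=/tmp/hosts.test
printf '127.0.0.1 localhost\n' > "$HOSTS"
while read -r ip name alias; do
  grep -qw "$name" "$HOSTS" || printf '%s %s %s\n' "$ip" "$name" "$alias" >> "$HOSTS"
done <<'EOF'
192.168.10.10 hadoop-1 masters
192.168.10.11 hadoop-2 slaves-1
192.168.10.12 hadoop-3 slaves-2
EOF
cat "$HOSTS"
```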

Configure passwordless SSH login

On the master node (hadoop-1):
ssh-keygen    # press Enter through every prompt
ssh-copy-id hadoop-2
ssh-copy-id hadoop-3
On slave node 1 (hadoop-2):
ssh-keygen    # press Enter through every prompt
ssh-copy-id hadoop-1
ssh-copy-id hadoop-3
On slave node 2 (hadoop-3):
ssh-keygen    # press Enter through every prompt
ssh-copy-id hadoop-1
ssh-copy-id hadoop-2
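The pattern above is simply every node copying its key to the other two. The full command matrix can be generated with a nested loop (hostnames as defined in /etc/hosts):

```shell
# Print the ssh-copy-id command each node must run for every other node
nodes="hadoop-1 hadoop-2 hadoop-3"
for src in $nodes; do
  for dst in $nodes; do
    [ "$src" = "$dst" ] || echo "on $src: ssh-copy-id $dst"
  done
done
```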

Install Hadoop

tar xf hadoop-1.2.1-bin.tar.gz -C /usr/local/src/
cd /usr/local/src/hadoop-1.2.1
mkdir tmp
cd conf/
# vim masters          # hosts that run the SecondaryNameNode
masters
# vim slaves           # hosts that run DataNode / TaskTracker
slaves-1
slaves-2
[root@masters conf]# vim core-site.xml
 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<!-- Put site-specific property overrides in this file. -->
 
<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/src/hadoop-1.2.1/tmp</value>
        </property>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://192.168.10.10:9000</value>
        </property>
</configuration>
 
 
[root@masters conf]# vim mapred-site.xml
 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<!-- Put site-specific property overrides in this file. -->
 
<configuration>
        <property>
                <name>mapred.job.tracker</name>
                <value>192.168.10.10:9001</value>
        </property>
 
</configuration>
 
 
[root@masters conf]# vim hdfs-site.xml
 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<!-- Put site-specific property overrides in this file. -->
 
<configuration>
        <!-- only two DataNodes (slaves-1, slaves-2), so replication cannot exceed 2 -->
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
 
</configuration>
 
# vim hadoop-env.sh
 
export JAVA_HOME=/usr/local/jdk
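The config edits above can also be scripted with heredocs, which avoids hand-typed tag errors (Hadoop silently ignores a misspelled tag such as `<propert>`). A sketch that writes core-site.xml into a scratch directory (the real target is /usr/local/src/hadoop-1.2.1/conf):

```shell
# Generate core-site.xml from a heredoc (scratch dir here;
# real path: /usr/local/src/hadoop-1.2.1/conf)
CONF=/tmp/hadoop-conf
mkdir -p "$CONF"
cat > "$CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/src/hadoop-1.2.1/tmp</value>
        </property>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://192.168.10.10:9000</value>
        </property>
</configuration>
EOF
grep -c '<property>' "$CONF/core-site.xml"   # prints 2
```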

Copy Hadoop to the slave nodes and start the cluster

[root@masters conf]# cd ../../
[root@masters src]# scp -r hadoop-1.2.1 hadoop-2:/usr/local/src/
[root@masters src]# scp -r hadoop-1.2.1 hadoop-3:/usr/local/src/
[root@masters src]# cd hadoop-1.2.1/bin/
[root@masters bin]# ./hadoop namenode -format    # format HDFS once before the first start
[root@masters bin]# ./start-all.sh
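After start-all.sh, running `jps` on each node shows which daemons came up. With the layout above (masters file pointing at the master host), the expected distribution for Hadoop 1.x is:

```shell
# Expected jps output per node after start-all.sh (Hadoop 1.x roles)
for h in hadoop-1 hadoop-2 hadoop-3; do
  case $h in
    hadoop-1) echo "$h: NameNode JobTracker SecondaryNameNode" ;;
    *)        echo "$h: DataNode TaskTracker" ;;
  esac
done
```

If a daemon is missing on a node, its log file under the logs/ directory on that node is the first place to look.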

posted @ 烟雨楼台,行云流水