Installing Hadoop Series: Installing Hadoop

The installation steps are as follows:
1) Download Hadoop: hadoop-1.0.3
    http://archive.apache.org/dist/hadoop/core/hadoop-1.0.3/
 
2) Extract the archive:
     I copied the hadoop-1.0.3.tar.gz file into the /home/hadoop directory and extracted it right there; it automatically creates a hadoop-1.0.3 folder
  #sudo tar zvxf hadoop-1.0.3.tar.gz
 
3) Note: to make sure all subsequent operations are performed as the user hadoop, adjust the ownership and permissions of the extracted directory
  #sudo chown -R hadoop:hadoop /home/hadoop/hadoop-1.0.3
  #sudo chmod 755 -R /home/hadoop/hadoop-1.0.3
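As an optional sanity check, you can confirm that the ownership and permission changes took effect:
  #ls -ld /home/hadoop/hadoop-1.0.3
The listing should show hadoop hadoop as the owner and group, with drwxr-xr-x permissions (the 755 set above).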
 
4) Configure Hadoop's environment variables in the global file /etc/profile.
  #sudo vi /etc/profile
 
/etc/environment sets the environment for the whole system, while /etc/profile sets the environment for all users: the former is independent of who logs in, the latter is applied at login.
So even though JAVA_HOME is set globally, each user can still layer their own settings on top, which lets different users run different Hadoop versions without conflict.
 
Append the following at the end of the file:
export JAVA_HOME=/usr/local/java/latest
export JRE_HOME=/usr/local/java/latest/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/home/hadoop/hadoop-1.0.3
export PATH=$PATH:$HADOOP_HOME/bin
 
Save and exit. For the settings to take effect immediately, remember to run:
  #source /etc/profile
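As mentioned above, a user-level setting can layer on top of the global one. A minimal sketch, assuming a second Hadoop version unpacked at /home/hadoop/hadoop-0.20.2 (a hypothetical path, used only for illustration): that user appends to their own ~/.bashrc

export HADOOP_HOME=/home/hadoop/hadoop-0.20.2
export PATH=$HADOOP_HOME/bin:$PATH

Because ~/.bashrc is read per user after the global files, and the PATH entry is prepended, that user's shell picks up the 0.20.2 install while everyone else keeps the global hadoop-1.0.3.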
 
5) The environment variables must also be set in Hadoop's own configuration files: edit hadoop-env.sh (the Java installation path).
     Go into the hadoop directory and open hadoop-env.sh under the conf directory:
  # cd /home/hadoop/hadoop-1.0.3/conf
  #sudo gedit hadoop-env.sh
Add the following lines:
        export JAVA_HOME=/usr/local/java/latest
        export HADOOP_HOME=/home/hadoop/hadoop-1.0.3
        export PATH=$PATH:$HADOOP_HOME/bin
 
Save and exit, and remember to make the configuration take effect:
  #source /home/hadoop/hadoop-1.0.3/conf/hadoop-env.sh

Note: when sourcing, you must use the absolute path!
 
6) Check whether the installation succeeded
  #hadoop version
Warning: $HADOOP_HOME is deprecated.
Hadoop 1.0.3
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192
Compiled by hortonfo on Tue May 8 20:31:25 UTC 2012
From source with checksum e6b0c1e23dcf76907c5fecb4b832f3be
 
At this point, Hadoop's standalone mode has been installed successfully!
Now you can run the bundled WordCount example to get a feel for the MapReduce computation process.
  • Create a new input folder under the hadoop directory
         #mkdir input
 
     Copy all the files from conf into the input folder
         #cp conf/* input
 
     Run the WordCount program and save the results to the output folder; the output folder is generated automatically.
         #cd /home/hadoop/hadoop-1.0.3
         #ls
              bin           hadoop-ant-1.0.3.jar           input         NOTICE.txt
              build.xml     hadoop-client-1.0.3.jar        ivy
              c++           hadoop-core-1.0.3.jar          ivy.xml       README.txt
              CHANGES.txt   hadoop-examples-1.0.3.jar      lib           sbin
              conf          hadoop-minicluster-1.0.3.jar   libexec       share
              contrib       hadoop-test-1.0.3.jar          LICENSE.txt   src
              docs          hadoop-tools-1.0.3.jar         logs          webapps
 
         #bin/hadoop jar hadoop-examples-1.0.3.jar wordcount input output
Note: use whatever the examples jar is actually named in your hadoop-1.0.3 folder. Make good use of the Tab key here: type the first part of a directory or file name and press Tab to complete the rest, or to list the matching names so you can pick the one you need.
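If you are not sure of the jar's exact name, listing it first avoids typos (this relies on the HADOOP_HOME variable set in step 4):
  #ls $HADOOP_HOME/hadoop-examples-*.jar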
 
     The results of the run are saved in the output folder
         #cat output/*
     You can see that the words and their frequencies across all the conf files have been counted.
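If you would rather see the most frequent words first, a small pipeline does the trick (purely a convenience; WordCount writes one "word<TAB>count" line per word, so sorting numerically on field 2 in reverse puts the biggest counts on top):
  #cat output/* | sort -k2 -nr | head -20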
 
====================================================================================
7) Next comes the configuration for pseudo-distributed mode.
     Go into the /home/hadoop/hadoop-1.0.3/conf folder; three files need content added: core-site.xml, hdfs-site.xml, and mapred-site.xml.
  The first file:
    # gedit core-site.xml
 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

<property>
<name>fs.checkpoint.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/namesecondary</value>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-1.0.3/tmp/hadoop-${user.name}</value>
</property>

</configuration>
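The tmp and namesecondary directories referenced above do not need to exist beforehand; Hadoop creates them as needed. If you prefer to pre-create them with the right ownership anyway (an optional precaution, using the paths from the config above):
  #mkdir -p /home/hadoop/hadoop-1.0.3/tmp /home/hadoop/hadoop-1.0.3/hdfs
  #sudo chown -R hadoop:hadoop /home/hadoop/hadoop-1.0.3/tmp /home/hadoop/hadoop-1.0.3/hdfs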
 
     The second file:
    # gedit hdfs-site.xml
 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.http.address</name>
<value>0.0.0.0:50070</value>
</property>

<property>
<name>dfs.secondary.http.address</name>
<value>0.0.0.0:28680</value>
</property>

<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:50010</value>
</property>

<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:50075</value>
</property>

<property>
<name>dfs.datanode.ipc.address</name>
<value>0.0.0.0:50020</value>
</property>

<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/name</value>
</property>

<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/data</value>
</property>

</configuration>
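Before starting anything, it is worth checking that none of the ports chosen above are already taken. A rough check (netstat options vary slightly across distributions, but -tln works on most Linux systems):
  #netstat -tln | grep -E ':(9000|50070|28680|50010|50075|50020)'
No output means all of these ports are free.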
 
     The third file:
    # gedit mapred-site.xml
 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>

<property>
<name>mapred.job.tracker.http.address</name>
<value>0.0.0.0:50030</value>
</property>

<property>
<name>mapred.task.tracker.http.address</name>
<value>0.0.0.0:50060</value>
</property>

<property>
<name>mapred.local.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/data/mapred/local</value>
</property>

<property>
<name>mapred.system.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/data/system</value>
</property>

</configuration>
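Optionally, if xmllint happens to be installed (it often is not by default; it ships with libxml2's tools), you can verify that all three files are still well-formed XML after editing:
  # cd /home/hadoop/hadoop-1.0.3/conf
  # xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml
Silence means the files parse cleanly.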

8) Format HDFS
         #cd /home/hadoop/hadoop-1.0.3
         # bin/hadoop namenode -format

Warning: $HADOOP_HOME is deprecated.
14/07/04 10:08:50 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop-ThinkPad/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
************************************************************/
Re-format filesystem in /home/hadoop/hadoop-1.0.3/hdfs/name ? (Y or N) Y   // note: the Y must be uppercase
14/07/04 10:08:52 INFO util.GSet: VM type       = 64-bit
14/07/04 10:08:52 INFO util.GSet: 2% max memory = 17.78 MB
14/07/04 10:08:52 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/07/04 10:08:52 INFO util.GSet: recommended=2097152, actual=2097152
14/07/04 10:08:52 INFO namenode.FSNamesystem: fsOwner=hadoop
14/07/04 10:08:53 INFO namenode.FSNamesystem: supergroup=supergroup
14/07/04 10:08:53 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/07/04 10:08:53 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/07/04 10:08:53 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/07/04 10:08:53 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/04 10:08:53 INFO common.Storage: Image file of size 112 saved in 0 seconds.
14/07/04 10:08:53 INFO common.Storage: Storage directory /home/hadoop/hadoop-1.0.3/hdfs/name has been successfully formatted.
14/07/04 10:08:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-ThinkPad/127.0.1.1
************************************************************/

9) Start Hadoop
     Run start-all.sh to start all the services, including the namenode and datanode; the start-all.sh script launches all the daemons.
          #cd /home/hadoop/hadoop-1.0.3
          # bin/start-all.sh

Warning: $HADOOP_HOME is deprecated.

starting namenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-namenode-hadoop-ThinkPad.out
localhost: starting datanode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-datanode-hadoop-ThinkPad.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-secondarynamenode-hadoop-ThinkPad.out
starting jobtracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-jobtracker-hadoop-ThinkPad.out
localhost: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-tasktracker-hadoop-ThinkPad.out
 
10) Use Java's jps command to list all the daemons and verify that the installation succeeded.
         #jps
   
11860 SecondaryNameNode
11621 DataNode
11944 JobTracker
22170 Jps
11326 NameNode
12175 TaskTracker
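All five Hadoop daemons plus Jps itself should be listed. If you want a one-liner to confirm this, the following simply counts the lines matching the daemon names shown above, and should print 5:
  #jps | grep -cE 'NameNode|SecondaryNameNode|DataNode|JobTracker|TaskTracker'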

11) Check the running status
     All the settings are done and Hadoop is up. You can now check that the services are working through Hadoop's built-in web pages for monitoring cluster health:
 
http://localhost:50030/ - Hadoop administration interface (JobTracker)
http://localhost:50060/ - Hadoop TaskTracker status
http://localhost:50070/ - Hadoop DFS status
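On a machine without a browser, curl can stand in for a quick liveness check (assuming curl is installed; any non-empty HTML response means the page is being served):
  #curl -s http://localhost:50070/ | head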

At this point, Hadoop's pseudo-distributed mode has been installed successfully. So let's run the bundled WordCount example once more, this time in pseudo-distributed mode, to get a feel for the MapReduce process:
     Note that the program now runs against the DFS filesystem, and the files it creates also live on that filesystem:
     First, create the input directory on the DFS
         # bin/hadoop dfs -mkdir input
     Copy the files from conf into input on the DFS
         # bin/hadoop dfs -copyFromLocal conf/* input
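     Before launching the job, it does no harm to confirm that the files actually landed on the DFS:
         # bin/hadoop dfs -ls input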
     Run WordCount in pseudo-distributed mode
         # bin/hadoop jar hadoop-examples-1.0.3.jar wordcount input output

     When the run succeeds, you can see the job's progress output (excerpt):
14/07/03 21:58:42 INFO mapred.JobClient: Job Counters
14/07/03 21:58:42 INFO mapred.JobClient: Launched map tasks=19
14/07/03 21:58:42 INFO mapred.JobClient: Launched reduce tasks=1
14/07/03 21:58:42 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=66577
14/07/03 21:58:42 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/03 21:58:42 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=119704
14/07/03 21:58:42 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/07/03 21:58:42 INFO mapred.JobClient: Data-local map tasks=19
14/07/03 21:58:42 INFO mapred.JobClient: File Output Format Counters
14/07/03 21:58:42 INFO mapred.JobClient: Bytes Written=15997

        After a successful run, the results are saved in the output folder:
             # hadoop dfs -cat output/*
 
want 1
when 1
where 2
where, 1
which 17
who 3
will 8
with 5
worker 1
would 7
xmlns:xsl="http://www.w3.org/1999/XSL/Transform" 1
you 1


12) When you are done with Hadoop, you can shut down the Hadoop daemons with the stop-all.sh script
     # bin/stop-all.sh
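Afterwards, running jps again should show only the Jps process itself; if any of the five daemons still appears, check the corresponding file under the logs directory:
     #jps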

PS: Standalone mode and pseudo-distributed mode are both meant for development and debugging. Real Hadoop clusters run in the third mode, fully distributed mode.
Developing MapReduce programs with Eclipse
A roundup of developing Hadoop 2.x Map/Reduce projects in Eclipse

13) Next comes the configuration for fully distributed mode:
References:

http://blog.csdn.net/hitwengqi/article/details/8008203#
http://blog.sina.com.cn/s/blog_61ef49250100uvab.html
http://www.cnblogs.com/welbeckxu/category/346329.html