1. Write a WordCount program in Python and submit it as a job

Program: WordCount

Input: a text file containing a large number of words

Output: each word in the file together with its occurrence count (frequency), sorted alphabetically by word; each word and its count occupy one line, separated by whitespace
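For example, the output should take this shape (one tab-separated word/count pair per line, in alphabetical order; the words and counts here are purely illustrative):

    bar	1
    foo	3
    quux	2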

  1. Write the map function and the reduce function
    cd /home/hadoop/wc
    sudo gedit mapper.py
     
    #!/usr/bin/env python
    # mapper.py: emit "word<TAB>1" for every word read from standard input
    import sys
    
    for line in sys.stdin:
        line = line.strip()
        words = line.split()
        for word in words:
            print '%s\t%s' % (word, 1)
     
    sudo gedit reducer.py
     
    #!/usr/bin/env python
    # reducer.py: sum the counts for each word; Hadoop Streaming delivers
    # the mapper output sorted by key, so equal words arrive consecutively
    import sys
    
    current_word = None
    current_count = 0
    word = None
    
    for line in sys.stdin:
        line = line.strip()
        word, count = line.split('\t', 1)
        try:
            count = int(count)
        except ValueError:
            # ignore lines whose count is not an integer
            continue
    
        if current_word == word:
            current_count += count
        else:
            if current_word:
                print '%s\t%s' % (current_word, current_count)
            current_count = count
            current_word = word
    
    # emit the final word
    if current_word == word:
        print '%s\t%s' % (current_word, current_count)

     

  2. Grant the scripts execute permission
    chmod a+x /home/hadoop/wc/mapper.py
    chmod a+x /home/hadoop/wc/reducer.py

     

  3. Test the code locally
    echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py
     
    echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py | sort -k1,1 | /home/hadoop/wc/reducer.p

     

  4. Run on HDFS
    1. Upload the text files gathered earlier to HDFS
    2. Submit the job with a Hadoop Streaming command (a sample command is sketched below)
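    A minimal submission sketch; the streaming jar path below depends on the Hadoop version and install location, and the input/output directories are assumptions that must match what was uploaded:
     
    hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
        -input /user/hadoop/input \
        -output /user/hadoop/output \
        -mapper mapper.py \
        -reducer reducer.py \
        -file /home/hadoop/wc/mapper.py \
        -file /home/hadoop/wc/reducer.py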
  5. View the results
    cd /home/hadoop/wc
    wget http://www.gutenberg.org/files/5000/5000-8.txt
    wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt
     
    hdfs dfs -mkdir -p /user/hadoop/input
    hdfs dfs -put /home/hadoop/wc/*.txt /user/hadoop/input
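    Once the job finishes, the results can be listed and read directly from HDFS (assuming the job's output directory was /user/hadoop/output, as in the sketch above):
     
    hdfs dfs -ls /user/hadoop/output
    hdfs dfs -cat /user/hadoop/output/part-00000 | head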

     

 

2. Process a weather dataset with MapReduce

Write a program that finds the daily maximum and minimum temperatures, as well as the maximum and minimum temperatures over a given period.
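Since both extremes are required, one reducer can track both in a single pass over the sorted mapper output; a minimal sketch (the file name reducer_minmax.py and the variable names are ours, not from the assignment):

    #!/usr/bin/env python
    # reducer_minmax.py: print "date<TAB>min<TAB>max" for each date;
    # assumes mapper output sorted by date, as Hadoop Streaming guarantees
    import sys
    
    current_date = None
    lo = hi = None
    
    for line in sys.stdin:
        date, temp = line.strip().split('\t', 1)
        try:
            temp = int(temp)
        except ValueError:
            continue
        if date != current_date:
            if current_date is not None:
                print '%s\t%s\t%s' % (current_date, lo, hi)
            current_date, lo, hi = date, temp, temp
        else:
            lo = min(lo, temp)
            hi = max(hi, temp)
    
    # emit the final date
    if current_date is not None:
        print '%s\t%s\t%s' % (current_date, lo, hi)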

  1. The weather dataset can be downloaded from: ftp://ftp.ncdc.noaa.gov/pub/data/noaa
  2. Download data for different years and months according to the last three digits of your student ID (for example, student 201506110136 downloads 2013 data whose file names begin with 6; adjust as the actual data requires)
  3. Decompress the dataset and save it into a text file
  4. Parse the weather data format
  5. Write the map function and the reduce function
  6. Grant the scripts execute permission
  7. Test the code locally
  8. Run on HDFS
    1. Upload the prepared text file to HDFS
    2. Submit the job with a Hadoop Streaming command
  9. View the results (the full command sequence and scripts follow, with a submission sketch after them)
    cd /usr/hadoop
    sudo mkdir qx
    cd /usr/hadoop/qx
     
    wget -D --accept-regex=REGEX -P data -r -c ftp://ftp.ncdc.noaa.gov/pub/data/noaa/2013/4*
     
    cd /usr/hadoop/qx/data/ftp.ncdc.noaa.gov/pub/data/noaa/2013
    sudo zcat 4*.gz > qxdata.txt
    cd /usr/hadoop/qx
     
    #!/usr/bin/env python
    # mapper.py: each NOAA record is a fixed-width line; columns 15-23
    # hold the observation date (YYYYMMDD) and columns 87-92 the air
    # temperature (a value of +9999 marks a missing reading)
    import sys
    
    for line in sys.stdin:
        line = line.strip()
        d = line[15:23]
        t = line[87:92]
        print '%s\t%s' % (d, t)
     
    #!/usr/bin/env python
    # reducer.py: keep the minimum temperature seen for each date; the
    # input arrives sorted by date. Changing ">" to "<" below yields the
    # daily maximum instead.
    import sys
    
    current_word = None
    current_count = 0
    word = None
    
    for line in sys.stdin:
        line = line.strip()
        word, count = line.split('\t', 1)
        try:
            count = int(count)
        except ValueError:
            continue
    
        if current_word == word:
            if current_count > count:
                current_count = count
        else:
            if current_word:
                print '%s\t%s' % (current_word, current_count)
            current_count = count
            current_word = word
    
    if current_word == word:
        print '%s\t%s' % (current_word, current_count)
     
    chmod a+x /usr/hadoop/qx/mapper.py
    chmod a+x /usr/hadoop/qx/reducer.py
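     
    A minimal sketch of the remaining steps, assuming illustrative HDFS directories /user/hadoop/qx_input and /user/hadoop/qx_output and the usual streaming jar location for the installed Hadoop version:
     
    hdfs dfs -mkdir -p /user/hadoop/qx_input
    hdfs dfs -put /usr/hadoop/qx/qxdata.txt /user/hadoop/qx_input
     
    hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
        -input /user/hadoop/qx_input \
        -output /user/hadoop/qx_output \
        -mapper mapper.py \
        -reducer reducer.py \
        -file /usr/hadoop/qx/mapper.py \
        -file /usr/hadoop/qx/reducer.py
     
    hdfs dfs -cat /user/hadoop/qx_output/part-00000 | head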