
1. Set environment variables in the profile

# Spark 0.8.0-incubating pre-built for Scala 2.9.3 and CDH4 Hadoop
export SCALA_HOME=/home/hadoop/scala-2.9.3
SPARK_080=/home/hadoop/spark-0.8.0
export SPARK_HOME=$SPARK_080
export SPARK_EXAMPLES_JAR=$SPARK_HOME/examples/target/spark-examples_2.9.3-0.8.0-incubating.jar
export CLASSPATH=$CLASSPATH:$SPARK_HOME/assembly/target/scala-2.9.3:$SPARK_HOME/assembly/target/scala-2.9.3/spark-assembly_2.9.3-0.8.0-incubating-hadoop2.0.0-mr1-cdh4.2.0.jar
# run-example and spark-shell sit at the top of SPARK_HOME in 0.8.x, hence both PATH entries
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME
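After editing, reload the profile so the variables take effect in the current shell; a quick sanity check (assuming the exports live in /etc/profile or ~/.profile):

$ source /etc/profile
$ echo $SPARK_HOME
/home/hadoop/spark-0.8.0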

2. Configure conf/slaves
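conf/slaves lists the hosts that should run a Spark worker, one hostname per line; start-all.sh reads it when launching the cluster. A minimal sketch, assuming kit-b5 (the master used below) also runs a worker, and with kit-b6/kit-b7 as hypothetical additional nodes:

kit-b5
kit-b6
kit-b7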

3. Test Spark

Run locally:

run-example org.apache.spark.examples.SparkPi local
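The last argument is the Spark master URL; local runs the example with a single worker thread in one JVM. In Spark 0.8 the local[N] form requests N threads instead, e.g.:

run-example org.apache.spark.examples.SparkPi local[2]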

Run on the cluster (after running start-all.sh to start the master and worker nodes):

run-example org.apache.spark.examples.SparkPi spark://kit-b5:7077

run-example org.apache.spark.examples.SparkLR spark://kit-b5:7077

run-example org.apache.spark.examples.SparkKMeans spark://kit-b5:7077 ./kmeans_data.txt 2 1

run-example org.apache.spark.examples.SparkKMeans spark://kit-b5:7077 hdfs://kit-b5:8020/kmeans_data.txt 2 1

Same as above, but reading the input from HDFS rather than the local filesystem.
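SparkKMeans takes three arguments after the master: the input file, the number of clusters k (2 here), and a convergence threshold (1 here). The input holds one point per line with space-separated coordinates; a sketch of preparing the HDFS copy used above (the sample values are illustrative):

$ cat kmeans_data.txt
0.0 0.0 0.0
0.1 0.1 0.1
9.0 9.0 9.0
9.2 9.2 9.2
$ hadoop fs -put kmeans_data.txt hdfs://kit-b5:8020/kmeans_data.txt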

 

Read a file from HDFS and run WordCount (after starting Hadoop and Spark):

$ MASTER=spark://kit-b5:7077 spark-shell

scala> val file = sc.textFile("hdfs://kit-b5:8020/input/README.txt")

scala> file.count()
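Note that file.count() returns the number of lines, not words. To peek at the data as a quick sanity check:

scala> file.first()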

Or, to count individual words:

scala> val file = sc.textFile("hdfs://kit-b5:8020/input/README.txt")

scala> val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_+_)

scala> count.collect()
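collect() pulls the entire result to the driver, which is fine for a small README. For larger inputs, a hedged alternative is to write the result back to HDFS (the output path is hypothetical and must not already exist):

scala> count.saveAsTextFile("hdfs://kit-b5:8020/output/wordcount")

Or swap the pairs and sort to see the most frequent words first:

scala> count.map { case (w, c) => (c, w) }.sortByKey(false).take(10)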
