Spark
# Spark is a fast and general engine for large-scale data processing.
# Spark libraries: Spark SQL, Spark Streaming, MLlib, GraphX
# run an example application
./bin/run-example SparkPi 10
# spark-shell against a standalone cluster master
./bin/spark-shell --master spark://IP:PORT
# spark-shell on YARN
./bin/spark-shell --master yarn
# spark-shell locally (default master)
./bin/spark-shell
# standalone master web UI
http://192.168.1.112:8080/
# application (driver) web UI
http://192.168.1.112:4040/
RDD (Resilient Distributed Dataset)
# create an RDD from a file in HDFS
val textFile = sc.textFile("hdfs://localhost:9000/user/root/BUILDING.txt")
textFile.count()   // number of lines
textFile.first()   // first line
textFile.filter(line => line.contains("hadoop")).count()   // lines containing "hadoop"
// word count: split lines into words, pair each word with 1, sum per key
val counts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.collect()
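The same flatMap/map/reduceByKey pipeline can be mimicked on plain Scala collections, with no cluster needed, to see what each step produces. This is only an analogy: `reduceByKey` runs distributed with a shuffle, while `groupBy` + `sum` here runs locally. The sample lines are made-up data.

```scala
// Plain-Scala sketch of the word-count steps above (no Spark required).
val lines = List("hadoop spark", "spark streaming")
val words = lines.flatMap(line => line.split(" "))  // List(hadoop, spark, spark, streaming)
val pairs = words.map(word => (word, 1))            // each word paired with a 1
// reduceByKey(_ + _) sums the 1s per word; groupBy + sum does the same locally
val counts = pairs.groupBy(_._1).map { case (w, ps) => (w, ps.map(_._2).sum) }
// counts("spark") == 2, counts("hadoop") == 1, counts("streaming") == 1
```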
Some concepts
--------------------------------
RDD (Resilient Distributed Dataset): an immutable, partitioned collection of records that can be rebuilt from its lineage on failure.
Task: the unit of execution; tasks come in two kinds, ShuffleMapTask and ResultTask, roughly analogous to Map and Reduce tasks in Hadoop.
Job: all the work triggered by one action (e.g. count(), collect()); a job is split into stages.
Stage: a set of tasks that can run without a shuffle; stage boundaries are drawn at shuffle dependencies.
Partition: a chunk of an RDD's data; one task processes one partition.
NarrowDependency: each parent partition is used by at most one child partition (e.g. map, filter); no shuffle needed.
ShuffleDependency: child partitions depend on many parent partitions (e.g. reduceByKey, groupByKey); requires a shuffle.
DAG (Directed Acyclic Graph): Spark schedules each job as a DAG of stages.
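In spark-shell, the two dependency kinds above can be inspected directly on an RDD via `dependencies` and `toDebugString`. A small sketch, assuming the `sc` from the spark-shell session above (the sample data is made up):

```scala
// In spark-shell (`sc` is predefined). Narrow vs. shuffle dependencies:
val nums = sc.parallelize(1 to 10, 2)

val mapped = nums.map(_ * 2)
println(mapped.dependencies)   // OneToOneDependency: a narrow dependency

val reduced = nums.map(n => (n % 2, n)).reduceByKey(_ + _)
println(reduced.dependencies)  // ShuffleDependency: draws a stage boundary
println(reduced.toDebugString) // lineage, indented at the shuffle boundary
```

Running `reduced.collect()` afterward shows up as one job with two stages in the web UI on port 4040.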
Core functions
--------------------------------
SparkContext: the entry point to Spark; it connects to the cluster manager and creates RDDs (predefined as `sc` in spark-shell).
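Outside spark-shell, a standalone application has to build its own SparkContext. A minimal sketch, assuming the Spark jars are on the classpath; the app name and `local[2]` master are arbitrary choices for illustration:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkContextExample {
  def main(args: Array[String]): Unit = {
    // local[2] runs Spark in-process with 2 threads; on a cluster,
    // the master is usually supplied by spark-submit instead.
    val conf = new SparkConf().setAppName("example").setMaster("local[2]")
    val sc = new SparkContext(conf)

    val rdd = sc.parallelize(Seq(1, 2, 3))
    println(rdd.count())  // 3

    sc.stop()  // release the context's resources
  }
}
```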
# HDFS configuration; fs.defaultFS here must match the hdfs:// URI used with sc.textFile
hadoop-2.7.2/etc/hadoop/core-site.xml
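For the `hdfs://localhost:9000/...` URI used with sc.textFile to resolve, core-site.xml needs `fs.defaultFS` set to that same host and port. A minimal fragment:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```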