A Simple Spark Example
Spark is a cluster computing framework similar to MapReduce, designed for fast data analysis.
In this application, we count the number of lines that contain the word "the". To build the application, we use Spark 1.0.1, Scala 2.10.4 and sbt 0.14.0.
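At its core the job is just two RDD operations: a filter transformation followed by a count action. The sketch below is only a preview of that API; it assumes an already-created SparkContext named sc and the sample file used later in step 3. The complete, runnable program is built in the steps below.

// Minimal preview (assumes an existing SparkContext `sc` and the sample file from step 3)
val lines = sc.textFile("src/data/sample.txt")                  // RDD[String], one element per line
val count = lines.filter(line => line.contains("the")).count()  // keep matching lines, then count them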
1). Run mkdir SimpleSparkProject.
2). Create an .sbt build file at SimpleSparkProject/simple.sbt:
name := "Simple Project" version := "1.0" scalaVersion := "2.10.4" libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.1" resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
3). Create the source file SimpleSparkProject/src/main/scala/SimpleApp.scala:
package main.scala

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    // Input file whose lines we want to scan
    val logFile = "src/data/sample.txt"

    // Create a SparkContext: local master, application name, Spark home,
    // and the application jar produced by `sbt package`
    val sc = new SparkContext("local", "Simple App", "/path/to/spark-1.0.1-incubating",
      List("target/scala-2.10/simple-project_2.10-1.0.jar"))

    // Load the file as an RDD of lines (2 partitions) and cache it
    val logData = sc.textFile(logFile, 2).cache()

    // Keep only the lines containing "the" and count them
    val numTHEs = logData.filter(line => line.contains("the")).count()

    println("Lines with the: %s".format(numTHEs))
  }
}
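The four-argument SparkContext constructor above still works in Spark 1.0.1, but the Spark 1.x documentation generally recommends building the context from a SparkConf. The variant below is only a sketch of the same program in that style, and it also stops the context when the job is done; the object name SimpleAppWithConf, the master URL local[2], and the jar path are assumptions based on the settings above. Note that the program reads src/data/sample.txt, so create that file under the project root before running.

package main.scala

import org.apache.spark.{SparkConf, SparkContext}

object SimpleAppWithConf {
  def main(args: Array[String]) {
    // Configure the application instead of passing everything to the constructor
    val conf = new SparkConf()
      .setAppName("Simple App")
      .setMaster("local[2]")  // assumed: run locally with 2 threads
      .setJars(List("target/scala-2.10/simple-project_2.10-1.0.jar"))
    val sc = new SparkContext(conf)

    val logData = sc.textFile("src/data/sample.txt", 2).cache()
    val numTHEs = logData.filter(_.contains("the")).count()
    println("Lines with the: %s".format(numTHEs))

    sc.stop()  // release resources when the job is finished
  }
}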
4). Then change into the SimpleSparkProject directory.
5). Run sbt package.
6). Run sbt run.
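If you want to verify the result, a quick cross-check without Spark is to count the matching lines in plain Scala (for example from the sbt console). This sketch assumes the same sample file path used in SimpleApp.scala; the number it prints should match the "Lines with the: ..." line printed by sbt run.

import scala.io.Source

// Count the lines containing "the" with plain Scala and compare
// against the count reported by the Spark job.
val expected = Source.fromFile("src/data/sample.txt").getLines().count(_.contains("the"))
println("Lines with the (plain Scala): " + expected)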