Ops Series: 08. Spark Shell

./bin/spark-shell --master spark://MASTER:PORT

Startup

Cluster mode (the two forms below are equivalent):

MASTER=spark://`hostname`:7077 bin/spark-shell
bin/spark-shell --master spark://es122:7077

Local mode:

bin/spark-shell --master local[4]
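Once the shell is up in either mode, a one-line job is a quick way to confirm that `sc` is bound and the master connection works. This sanity check is not from the original post; it runs as-is inside spark-shell:

```scala
// `sc` is predefined by spark-shell.
// Sums 1..100 across the workers; prints 5050.0.
val total = sc.parallelize(1 to 100).sum()
println(total)
```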
 
 

Loading a text file

Spark context available as sc.
 
After connecting to the Spark master: if the cluster has no distributed file system, Spark reads the data from the local file system of each machine, so make sure the complete data set is present on every node.
 
wget http://www-stat.stanford.edu/~tibs/ElemStatLearn/datasets/spam.data
 
Local mode:
val inFile = sc.textFile("./spam.data")
Cluster mode (ship the file to every worker first):
import org.apache.spark.SparkFiles
sc.addFile("spam.data")  // returns Unit, so there is nothing to bind
val inFile = sc.textFile(SparkFiles.get("spam.data"))
 

Processing

Split each line of the file on spaces, then convert each field to a Double:
val nums = inFile.map(x => x.split(" ").map(_.toDouble))
Note: x => x.toDouble is equivalent to _.toDouble.
 
 

Inspecting:

inFile.first()
nums.first()
 
 

Logistic regression:

import org.apache.spark.util.Vector
case class DataPoint(x: Vector, y: Double)
def parsePoint(x: Array[Double]): DataPoint = {
    // slice(0, x.size - 1) keeps every field except the last;
    // the last field is the label. (slice(0, x.size - 2) would
    // wrongly drop the final feature as well.)
    DataPoint(new Vector(x.slice(0, x.size - 1)), x(x.size - 1))
}
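The post stops at parsePoint; the natural next step is a full-batch gradient-descent loop in the spirit of Spark's classic SparkLR example. The sketch below is not from the original post: it assumes nums and parsePoint from above, remaps spam.data's 0/1 labels to the -1/+1 convention the update rule expects, and the iteration count and random initialization are arbitrary choices.

```scala
import scala.util.Random
import org.apache.spark.util.Vector

// Parse once, remap labels 0/1 -> -1/+1, and cache across iterations.
val points = nums.map(parsePoint)
                 .map(p => DataPoint(p.x, if (p.y == 0.0) -1.0 else 1.0))
                 .cache()
val D = points.first().x.length   // feature dimension
val ITERATIONS = 10               // arbitrary for the sketch

// Random initial weights in [-1, 1), as in the SparkLR example.
var w = Vector(D, _ => 2 * Random.nextDouble - 1)

for (i <- 1 to ITERATIONS) {
  // Gradient of the logistic loss, summed over all points.
  val gradient = points.map { p =>
    p.x * ((1 / (1 + math.exp(-p.y * (w dot p.x)))) - 1) * p.y
  }.reduce(_ + _)
  w -= gradient
}
println("Final w: " + w)
```

Plain full-batch gradient descent like this is fine for a shell demo; real applications would use MLlib's logistic regression instead.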

posted on 2014-09-12 11:13 宁弘道