Setting up a single-node Spark 2.1.1 environment and development environment on Windows 7 64-bit, with a full end-to-end run

References:

1. http://blog.csdn.net/nju_mc/article/details/54954999

2. http://blog.csdn.net/GYQJN/article/details/49421789

 

Contents:

Step 1 (environment):

  Versions:
         JDK: 1.8.0_121
         Spark: 2.1.1 (standalone single-node Spark, spark-2.1.1-bin-hadoop2.7)
         Scala: 2.11.0
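To confirm the three versions line up before touching the IDE, a quick sanity check (a sketch, assuming spark-2.1.1-bin-hadoop2.7 is unpacked and bin\spark-shell starts cleanly) can be pasted into the spark-shell, which already creates a SparkContext named sc:

// Paste into spark-shell (bin\spark-shell of the Spark 2.1.1 distribution)
println(System.getProperty("java.version"))    // expect 1.8.0_121
println(scala.util.Properties.versionString)   // expect version 2.11.x (Spark 2.1.x is built for Scala 2.11)
println(sc.version)                            // expect 2.1.1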

 

Step 2: External Libraries (in the IDEA project): jdk1.8, the jars of the Spark distribution, scala-sdk-2.11.0


Step 3: Configuration and code

1. pom.xml of the SparkTests2 project
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.tangyong.test</groupId>
    <artifactId>SparkTests2</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.11.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.4</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.specs</groupId>
            <artifactId>specs</artifactId>
            <version>1.2.5</version>
            <scope>test</scope>
        </dependency>
        <!--<dependency>
          <groupId>org.specs2</groupId>
          <artifactId>specs2-junit_${scala.compat.version}</artifactId>
          <version>2.4.16</version>
          <scope>test</scope>
        </dependency>-->

    </dependencies>


</project>

  

2. App.scala

/**
 * Hello world!
 *
 */

/*object App extends Application {
  println( "Hello World!" )
}*/

import scala.math.random
import org.apache.spark._

object App {
  def main(args: Array[String]) {
    // Change 1: echo the first program argument, if one was passed
    if (args.length > 0) println(args(0))
    val conf = new SparkConf().setAppName("Spark Pi")
    conf.setMaster("local")
    //conf.setMaster("spark://localhost:4040")  // note: a standalone master listens on 7077 by default; 4040 is the web UI port
    val spark = new SparkContext(conf)
    // Register the built project jar with the SparkContext (only needed when running against a cluster)
    spark.addJar("D:\\Workspace\\IdeaProjects\\SparkTests\\out\\artifacts\\SparkTests_jar\\SparkTests.jar")

  /*def main(args: Array[String]) {
    val spark = SparkSession
      .builder
      .appName("Spark Pi").master("local")
      .getOrCreate()*/

    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt

    val count = spark.parallelize(1 until n, slices)
      .map {
        i =>
          val x = random * 2 - 1
          val y = random * 2 - 1
          if (x * x + y * y < 1) 1 else 0
      }
      .reduce(_ + _)

    println("Pi is roughly " + 4.0 * count / (n - 1))

    spark.stop()
  }
}
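The block comment inside main sketches the newer SparkSession entry point from Spark 2.x. For reference, a self-contained variant of the same job written against SparkSession might look like the sketch below (hypothetical object name AppWithSession; note that SparkSession lives in the spark-sql module, so the pom above would additionally need a spark-sql_2.11 / 2.1.1 dependency):

import scala.math.random
import org.apache.spark.sql.SparkSession

// Hypothetical SparkSession-based variant of App: same Monte Carlo Pi estimate.
object AppWithSession {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("Spark Pi")
      .master("local")
      .getOrCreate()

    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt

    // parallelize still lives on the underlying SparkContext
    val count = spark.sparkContext.parallelize(1 until n, slices)
      .map { _ =>
        val x = random * 2 - 1
        val y = random * 2 - 1
        if (x * x + y * y < 1) 1 else 0
      }
      .reduce(_ + _)

    println("Pi is roughly " + 4.0 * count / (n - 1))
    spark.stop()
  }
}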

  

3. SparkPi.scala

import scala.math.random

/**
  * Created by Administrator on 2017/5/25.
  */
object SparkPi {
  def main(args: Array[String]): Unit = {
    // Plain-Scala (no Spark) Monte Carlo estimate of Pi, for comparison with App.scala
    var count = 0
    for (i <- 1 to 100000) {
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) count += 1
    }
    println("Pi is roughly " + 4 * count / 100000.0)
  }
}
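The imperative loop above is the single-machine counterpart of what App.scala distributes with parallelize/map/reduce. Written functionally (a small sketch for the Scala REPL), the parallel is easier to see:

import scala.math.random

// Count how many of n random points fall inside the unit circle --
// the same computation App.scala spreads over RDD partitions.
val n = 100000
val inside = (1 to n).count { _ =>
  val x = random * 2 - 1
  val y = random * 2 - 1
  x * x + y * y < 1
}
println("Pi is roughly " + 4.0 * inside / n)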

  

 

Step 4: Run results

1. App.scala

(Output screenshot not reproduced; the job prints a line of the form "Pi is roughly <estimate>", with the estimate close to 3.14.)

2. SparkPi.scala

(Output screenshot not reproduced; it prints the same kind of "Pi is roughly ..." line, computed without Spark.)

posted @ 2017-05-25 20:21  tangyongathuse