Spark Cluster Setup

Note: this walkthrough covers a cluster installation. There are two commonly used run modes: standalone and on YARN.

The difference lies in the configuration used when writing a standalone program versus an on-YARN one; see the examples in the spark2 project for details.

How to submit:

standalone:
spark-submit --master spark://hadoop13:7077 --class testkmeans.KMeans_jie spark2-1.0-SNAPSHOT.jar

on YARN:
spark-submit --master yarn-cluster --class SaprkOnYarn spark2-1.0-SNAPSHOT.jar kmeans_data.txt kmeans_data_out.txt

(Without --master, spark-submit runs the job locally. spark://hadoop13:7077 points at the standalone master on its default port 7077; yarn-cluster hands the driver to YARN.)

 

Spark cluster setup:

Two servers:
hadoop13  master
hadoop14  slave

 


1. Install the Scala SDK

Download Scala 2.11.4 from: http://www.scala-lang.org/download/2.11.4.html


2. Extract and install:

Extract: tar -xvf scala-2.11.4.tgz
Install: mv scala-2.11.4 ~/usr/local/scala


3. Edit ~/.bash_profile and add the SCALA_HOME environment variable:

 

export JAVA_HOME=/home/spark/opt/java/jdk1.6.0_37
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/home/spark/opt/scala-2.11.4
export HADOOP_HOME=/home/spark/opt/hadoop-2.6.0
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${SCALA_HOME}/bin
Apply it immediately:
[spark@S1PA11 scala]$ source ~/.bash_profile

4. Verify Scala: scala -version

Scala code runner version 2.11.4 -- Copyright 2002-2013, LAMP/EPFL

 


5. Enter the Scala REPL:

scala
Welcome to Scala version 2.11.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_37).
Type in expressions to have them evaluated.
Type :help for more information.
scala> var str = "SB is"+"SB"
str: String = SB isSB 

scala> 
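Expressions like the one in the session above can also be compiled as a small standalone program instead of being typed into the REPL. A minimal plain-Scala sketch (the object name ReplWarmup and the sample strings are my own, not from the post):

```scala
// Minimal standalone program trying the same kind of expressions as the
// REPL session above (plain Scala, no Spark required).
object ReplWarmup {
  def main(args: Array[String]): Unit = {
    val str = "hello " + "spark"        // string concatenation, as in the session
    val words = str.split(" ").toList   // split the string into a list of words
    println(str)                        // prints: hello spark
    println(words.length)               // prints: 2
  }
}
```

Compile it with scalac and run it with scala ReplWarmup.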


 
6. Install Spark

Download Spark:
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.2.0-bin-hadoop2.4.tgz
Extract it: tar -zxvf spark-1.2.0-bin-hadoop2.4.tgz
mv /usr    /lo
and configure the environment variables.

7. Edit the configuration files

 


First: edit the slaves file, adding the two slave nodes S1PA11 and S1PA222.

Second: configure spark-env.sh.

First copy the template: cp spark-env.sh.template spark-env.sh
Then edit it (vi spark-env.sh) and append at the bottom:

export JAVA_HOME=/usr/local/java/jdk1.7.0_79 

export SCALA_HOME=/usr/local/scala/scala-2.11.4
export SPARK_MASTER_IP=192.168.122.213
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/zzy/hadoop-2.6.0/etc/hadoop
HADOOP_CONF_DIR is the Hadoop configuration directory, SPARK_MASTER_IP is the master host's IP address, and SPARK_WORKER_MEMORY is the maximum memory each worker may use.

Once configured, copy the Spark directory to the slave machine:
scp -r ~/zzy/spark-1.2.0-bin-hadoop2.4 hadoop14:/zzy/
8. Start the cluster: run ./start-all.sh from inside Spark's sbin directory (careful: Hadoop also ships a script with this name, so make sure you invoke Spark's copy).
9. Visit hadoop13:8080 to check the web UI (note that port 8080 is also used by Storm).
10. Load a remote file (loading a local file failed):

a.txt contains:
hello you
hello me

var file = sc.textFile("hdfs://hadoop11:9000/a.txt").collect
11. Word count:
  var file = sc.textFile("hdfs://hadoop11:9000/a.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect;
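Each RDD operation in the chain above has a direct analogue on ordinary Scala collections, which is a handy way to check the logic without a cluster. A sketch of the same word count on a local collection, with groupBy plus a sum standing in for reduceByKey (the object name and the hard-coded sample lines mirroring a.txt are my own):

```scala
// Local, Spark-free version of the word count above: the same
// flatMap/map pipeline, with groupBy + sum standing in for reduceByKey.
object WordCountLocal {
  def main(args: Array[String]): Unit = {
    val lines = Seq("hello you", "hello me")   // the contents of a.txt
    val counts = lines
      .flatMap(_.split(" "))                   // split each line into words
      .map((_, 1))                             // pair each word with a count of 1
      .groupBy(_._1)                           // group the pairs by word
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) } // sum the 1s
    println(counts)                            // word -> count map, e.g. hello -> 2
  }
}
```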
posted @ 2015-08-26 15:03 农民阿姨