Offline Installation and Deployment of an Apache Hadoop Cluster (Part 2): Installing Spark-2.1.0 on YARN

Part 1: Installing Hadoop (HDFS, YARN, MR): http://www.cnblogs.com/pojishou/p/6366542.html

Part 2: Installing Spark-2.1.0 on YARN: http://www.cnblogs.com/pojishou/p/6366570.html

Part 3: Installing HBase: http://www.cnblogs.com/pojishou/p/6366806.html

 

0. Preparing the Installation Files

Scala: http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz

Spark: http://www.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
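Since the cluster is offline, download both archives on a machine with internet access and copy them to the first node. A minimal sketch; the hostname node01 and the target path are assumptions:

wget http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz
wget http://www.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
scp scala-2.11.8.tgz spark-2.1.0-bin-hadoop2.7.tgz node01:/root/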

1. Installing Scala

1) Extract the archive

tar -zxvf scala-2.11.8.tgz -C /opt/program/
ln -s /opt/program/scala-2.11.8 /opt/scala

2) Set the environment variables

vi /etc/profile

export SCALA_HOME=/opt/scala
export PATH=$SCALA_HOME/bin:$JAVA_HOME/bin:$PATH

3) Apply the changes

source /etc/profile
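You can verify the installation afterwards; the comment shows the version line scala-2.11.8 prints:

scala -version
# Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL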

4) Copy everything to the other nodes with scp
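A minimal sketch, assuming the other nodes are node02 and node03 (adjust the hostnames to your cluster):

scp -r /opt/program/scala-2.11.8 node02:/opt/program/
scp /etc/profile node02:/etc/profile    # only if all nodes share the same /etc/profile
ssh node02 ln -s /opt/program/scala-2.11.8 /opt/scala
# then run source /etc/profile on node02 (or log in again); repeat for node03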

 

2. Installing Spark

1) Extract the archive

tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz -C /opt/program/
ln -s /opt/program/spark-2.1.0-bin-hadoop2.7 /opt/spark

2) Edit the configuration file

The file does not exist in a fresh extract; create it from the bundled template, then add the lines below:

cp /opt/spark/conf/spark-env.sh.template /opt/spark/conf/spark-env.sh
vi /opt/spark/conf/spark-env.sh

export JAVA_HOME=/opt/java
export SCALA_HOME=/opt/scala
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

3) Copy everything to the other nodes with scp
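Same pattern as for Scala (node02 and node03 are still assumed hostnames):

scp -r /opt/program/spark-2.1.0-bin-hadoop2.7 node02:/opt/program/
ssh node02 ln -s /opt/program/spark-2.1.0-bin-hadoop2.7 /opt/spark
# repeat for node03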

4) Test

/opt/hadoop/sbin/start-all.sh
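start-all.sh brings up both HDFS and YARN. Before submitting the test job, you can confirm on the master node that the daemons are running; the PIDs below are illustrative and the exact process list depends on each node's role in the setup from part one:

jps
# 2481 NameNode
# 2534 SecondaryNameNode
# 2703 ResourceManager
# 3120 Jps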

/opt/spark/bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode client \
    --driver-memory 1g \
    --executor-memory 1g \
    --executor-cores 2 \
    /opt/spark/examples/jars/spark-examples*.jar \
    10

If the job prints an estimate of Pi (a line like "Pi is roughly 3.14..."), the installation works.
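In client mode the result prints straight to the console. If you rerun the test with --deploy-mode cluster, the line goes to the driver's container log instead; you can retrieve it with yarn logs, replacing the placeholder with the application ID that spark-submit reports:

yarn logs -applicationId <applicationId> | grep "Pi is roughly"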

 

With Java 1.8, Spark on YARN can fail under the default configuration (typically YARN kills the containers for running beyond virtual memory limits). Either switch to Java 1.7 or see: http://www.cnblogs.com/pojishou/p/6358588.html
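A common workaround, assuming the error is indeed YARN's virtual-memory check (the usual symptom with Java 8), is to disable the check in yarn-site.xml on every node and restart YARN:

<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>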
