Spark Installation and Deployment

1. Download the spark-2.1.1-bin-hadoop2.7.tgz package and unpack it.
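For example (the install prefix below is an assumption, chosen to match the SPARK_HOME used in the spark-env.sh settings later in this guide):

```shell
# Unpack the tarball into the service directory; the target path is an
# assumption matching SPARK_HOME as configured further down
mkdir -p /opt/apps/ecm/service
tar -xzf spark-2.1.1-bin-hadoop2.7.tgz -C /opt/apps/ecm/service
mv /opt/apps/ecm/service/spark-2.1.1-bin-hadoop2.7 /opt/apps/ecm/service/spark211
```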

2. Download scala-2.12.4.rpm and install it. (Note: Spark 2.1.x is prebuilt against Scala 2.11, so a 2.11.x Scala avoids binary-compatibility surprises if you compile jobs against the system Scala.)

rpm -ivh scala-2.12.4.rpm
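A quick sanity check after the rpm install:

```shell
# Verify the package registered and the compiler is on PATH
rpm -q scala          # shows the installed package version
scala -version        # prints the Scala compiler version
```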

3. Edit spark-env.sh and append the following:

export SCALA_HOME=/usr/share/scala
export JAVA_HOME=/usr/lib/jvm/java-1.8.0
export HADOOP_HOME=/usr/lib/hadoop-current
export HADOOP_CONF_DIR=/etc/ecm/hadoop-conf
export SPARK_HOME=/opt/apps/ecm/service/spark211
export SPARK_MASTER_IP=192.168.1.86
export SPARK_EXECUTOR_MEMORY=2G
export SPARK_MASTER_WEBUI_PORT=8089
export SPARK_CONF_DIR=/opt/apps/ecm/service/spark211/conf
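Note that spark-env.sh does not exist in a fresh unpack; it is created from the bundled template first (paths here follow the SPARK_CONF_DIR set above):

```shell
# Create spark-env.sh from the shipped template, then append the
# export lines shown above
cd /opt/apps/ecm/service/spark211/conf
cp spark-env.sh.template spark-env.sh
```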

4. Edit the slaves file, listing one worker hostname per line:

Slave1

Slave2

Slave3
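The slaves file is likewise created from its template. The hostnames must be resolvable (e.g. via /etc/hosts), and the master needs passwordless ssh to each worker; a sketch of preparing the file and pushing the configured Spark directory out to the workers (the rsync step is an assumption, not part of the original steps):

```shell
cd /opt/apps/ecm/service/spark211/conf
cp slaves.template slaves

# Distribute the configured Spark directory to every worker listed above
for h in Slave1 Slave2 Slave3; do
  rsync -a /opt/apps/ecm/service/spark211/ "$h":/opt/apps/ecm/service/spark211/
done
```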

5. Edit spark-defaults.conf:

spark.master spark://emr-header-1:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://emr-header-1:8020/sparklogs
spark.executor.logs.rolling.strategy size
spark.executor.logs.rolling.maxSize 134217728
spark.executor.extraClassPath /data/libs/*
spark.driver.extraClassPath /data/libs/*
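With event logging enabled, the directory named by spark.eventLog.dir must already exist on HDFS before any application starts, or submissions will fail. Using the URI from the config above:

```shell
# Pre-create the event-log directory (URI and port taken from
# spark.eventLog.dir above)
hdfs dfs -mkdir -p hdfs://emr-header-1:8020/sparklogs
```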

6. Start the cluster:

./sbin/start-all.sh
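After start-all.sh returns, a few hedged checks to confirm the cluster is actually up (hostnames, IP, and port are the ones configured earlier in this guide):

```shell
# A Master JVM should be running here; run jps on each slave too,
# expecting a Worker process
jps | grep -E 'Master|Worker'

# The master web UI listens on the port set via SPARK_MASTER_WEBUI_PORT
curl -sI http://192.168.1.86:8089 | head -n 1

# Smoke test: run the bundled SparkPi example against the standalone master
./bin/run-example --master spark://emr-header-1:7077 SparkPi 10
```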

posted on 2018-06-25 11:04 by 海底死鱼
