11. Spark Standalone Cluster
1. Prerequisites
Servers: node180, node181, node182
spark-3.0.0-preview2-bin-hadoop3.2.tgz (http://spark.apache.org/downloads.html)
jdk-1.8.x
scala-2.12.11.tgz
2. Extract
tar zxvf /opt/software/spark-3.0.0-preview2-bin-hadoop3.2.tgz -C /opt/module/
tar zxvf /opt/software/scala-2.12.11.tgz -C /opt/module/
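The Spark tarball unpacks to spark-3.0.0-preview2-bin-hadoop3.2, while the later steps all refer to /opt/module/spark-3.0.0-hadoop3.2, so the directory is presumably renamed after extraction; a sketch under that assumption:

mv /opt/module/spark-3.0.0-preview2-bin-hadoop3.2 /opt/module/spark-3.0.0-hadoop3.2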
3. Configuration files
cd /opt/module/spark-3.0.0-hadoop3.2/conf
cp spark-env.sh.template spark-env.sh
Edit spark-env.sh:
# JDK, Scala and Hadoop locations (SCALA_HOME and HADOOP_HOME are inherited from /etc/profile)
export JAVA_HOME=/opt/module/jdk1.8.0_161
export SCALA_HOME=${SCALA_HOME}
export HADOOP_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
# Web UI ports for the master and the workers
SPARK_MASTER_WEBUI_PORT=9096
SPARK_WORKER_WEBUI_PORT=8096
# Master HA via ZooKeeper; recovery state is stored under /myspark
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node180:2181,node181:2181,node182:2181 -Dspark.deploy.zookeeper.dir=/myspark"
# Directories for config, logs and PID files
SPARK_CONF_DIR=/opt/module/spark-3.0.0-hadoop3.2/conf
SPARK_LOG_DIR=/root/spark/logs
SPARK_PID_DIR=/root/spark/logs
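The spark.deploy.recoveryMode=ZOOKEEPER setting assumes a ZooKeeper ensemble is already running on node180, node181 and node182 (port 2181). A quick sanity check before starting Spark, assuming the standard ZooKeeper scripts are on the PATH:

zkServer.sh status

Each node should report itself as leader or follower.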
Edit slaves:
node180
node181
node182
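If the slaves file does not exist yet, it can be created from the template shipped in the same conf directory first (Spark 3.0.x still names this file slaves; later releases rename it to workers):

cp slaves.template slaves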
4. System environment variables
vi /etc/profile
#spark
export SPARK_HOME=/opt/module/spark-3.0.0-hadoop3.2
export PATH=$SPARK_HOME/bin:$PATH
#scala
export SCALA_HOME=/opt/module/scala-2.12.11
export PATH=$SCALA_HOME/bin:$PATH
source /etc/profile
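A quick check that both tools are now on the PATH; the reported versions should match the tarballs from step 1:

scala -version
spark-submit --version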
5. Sync to the other nodes
scp -r /opt/module/spark-3.0.0-hadoop3.2/ root@node181:/opt/module/
scp -r /opt/module/spark-3.0.0-hadoop3.2/ root@node182:/opt/module/
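node181 and node182 also need Scala and the same environment variables. If they were not set up there separately, they can be copied over in the same way (a sketch following the pattern above; run source /etc/profile on each node afterwards):

scp -r /opt/module/scala-2.12.11/ root@node181:/opt/module/
scp -r /opt/module/scala-2.12.11/ root@node182:/opt/module/
scp /etc/profile root@node181:/etc/profile
scp /etc/profile root@node182:/etc/profile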
6. Start the cluster and verify
cd /opt/module/spark-3.0.0-hadoop3.2/sbin
./start-all.sh
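After start-all.sh, jps on node180 should list a Master process and every node in slaves a Worker; the master web UI is served at http://node180:9096 (the SPARK_MASTER_WEBUI_PORT set above). A smoke test against the cluster, a sketch assuming the stock examples jar that ships in the tarball:

cd /opt/module/spark-3.0.0-hadoop3.2
./bin/spark-submit --master spark://node180:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.12-*.jar 100

Because recoveryMode=ZOOKEEPER is configured, a standby master can additionally be started on node181 with ./sbin/start-master.sh; it takes over automatically if the master on node180 goes down.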