Spark Setup and Testing

Prerequisite: a working Hadoop environment is already in place.

Download: http://spark.apache.org/downloads.html

Pick the Spark build that matches the Hadoop version running on your machines.

Spark tidbit: Spark decides at which stage of processing data should be cached and which data is worth caching; sometimes it will not cache anything at all and simply runs a single filter pass over the data.

Key Spark concepts: RDDs and their transformations/actions, operators, lazy execution, broadcast variables, accumulators, RDD lineage, and resilience.

1. Cluster environment

 
hadoop-2.7.3(hdp_2.6.5.0-292)
zookeeper-3.4.6(hdp_2.6.5.0-292)
scala-2.11.8

2. Installation

(1) Upload the installation package to the master node and extract it

tar zxvf spark-2.2.0-bin-hadoop2.7.tgz 
mv spark-2.2.0-bin-hadoop2.7 spark-2.2.0
cd spark-2.2.0/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves
cp spark-defaults.conf.template spark-defaults.conf
Edit spark-env.sh:
export JAVA_HOME=/opt/jdk
export SCALA_HOME=/opt/scala-2.11.8
export SPARK_HOME=/home/hdfs/spark-2.2.0
export HADOOP_HOME=/usr/hdp/2.6.5.0-292/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_HOST=nname1
export SPARK_MASTER_PORT=7077
#export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=dnode1:2181,dnode2:2181,dnode3:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1

export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop

export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native/:$LD_LIBRARY_PATH

Edit the slaves file:
# the file originally contains only localhost
hadoop01
hadoop02
hadoop03
hadoop04
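The sbin start scripts log in to each host listed in slaves over SSH, so passwordless SSH from the master to every worker is normally needed first. A minimal sketch, run as the user that will start Spark (hadoop01 through hadoop04 are the hostnames from the slaves file above):

# generate a key pair if one does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# push the public key to each worker
for host in hadoop01 hadoop02 hadoop03 hadoop04; do
    ssh-copy-id $host
done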
Edit spark-defaults.conf:

spark.master spark://nname1:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://nname1:8020/user/spark/history
spark.serializer org.apache.spark.serializer.KryoSerializer
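Because spark.eventLog.dir points at HDFS, the event-log directory must exist before the first job writes to it. A quick sketch, assuming the HDFS client is on the PATH and the current user may create directories under /user:

hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history    # wide-open permissions, for testing only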

Distribute to the other nodes
scp -r /home/hdfs/spark-2.2.0 root@workerN:/home/hdfs/
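To push the directory to all four workers in one go, a simple loop works as well; this is only a sketch that assumes the worker hostnames from the slaves file and the same target directory as on the master:

for host in hadoop01 hadoop02 hadoop03 hadoop04; do
    scp -r /home/hdfs/spark-2.2.0 root@$host:/home/hdfs/
done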
 
Update the environment variables on every node in the cluster

export SPARK_HOME=/home/hdfs/spark-2.2.0
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
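Before submitting to the standalone master, start the cluster from the master node; start-all.sh brings up the master locally and one worker on every host listed in slaves. A sketch:

cd $SPARK_HOME
./sbin/start-all.sh     # master on this node, workers on all hosts in slaves
jps                     # should show Master here and Worker on the worker nodes
# the master web UI defaults to http://nname1:8080
# optional HA: since spark.deploy.recoveryMode=ZOOKEEPER is set, a standby
# master can be started on another node with ./sbin/start-master.sh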
Test Spark
# YARN client mode
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode client \
examples/jars/spark-examples*.jar \
10
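For comparison, a sketch of the same job in YARN cluster mode; only --deploy-mode changes, and because the driver then runs inside the YARN application master, the Pi result appears in the YARN logs rather than on the console:

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
examples/jars/spark-examples*.jar \
10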

# Local mode

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master local[*] \
examples/jars/spark-examples*.jar
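With event logging enabled in spark-defaults.conf, finished runs can be browsed through the history server. A sketch; spark.history.fs.logDirectory is not set anywhere above, so it is added here on the assumption that the history server should read the same HDFS directory:

# add to conf/spark-defaults.conf first:
# spark.history.fs.logDirectory hdfs://nname1:8020/user/spark/history
./sbin/start-history-server.sh
# the history server web UI defaults to port 18080, e.g. http://nname1:18080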

  


