Spark 0.9.0 startup script: bin/spark-class
1. Determine whether the script is running under Cygwin.
2. Set SCALA_VERSION.
3. Set SPARK_HOME (the directory above bin/).
4. Source conf/spark-env.sh if it exists.
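Steps 1-4 correspond roughly to the following fragment (a sketch from memory, not a verbatim copy of the 0.9.0 script):

cygwin=false
case "`uname`" in
  CYGWIN*) cygwin=true ;;
esac

SCALA_VERSION=2.10

# spark-class lives in bin/, so its parent directory becomes SPARK_HOME
FWDIR="$(cd "$(dirname "$0")/.." && pwd)"
export SPARK_HOME="$FWDIR"

# Pick up user settings (SPARK_JAVA_OPTS, SPARK_MEM, ...) if present
if [ -e "$FWDIR/conf/spark-env.sh" ]; then
  . "$FWDIR/conf/spark-env.sh"
fi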
5. If the class to run is org.apache.spark.deploy.master.Master or org.apache.spark.deploy.worker.Worker, set
OUR_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS"
otherwise, set
OUR_JAVA_OPTS="$SPARK_JAVA_OPTS"
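In shell form, step 5 is essentially the following (if memory serves, the actual script also sets a daemon memory default here, which this sketch omits):

if [ "$1" = "org.apache.spark.deploy.master.Master" -o \
     "$1" = "org.apache.spark.deploy.worker.Worker" ]; then
  # standalone daemons use the daemon-specific options as the base
  OUR_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS"
else
  OUR_JAVA_OPTS="$SPARK_JAVA_OPTS"
fi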
6. Append the class-specific Java options to OUR_JAVA_OPTS according to the class being launched (a combined sketch follows this list):
1) org.apache.spark.deploy.master.Master:
OUR_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS $SPARK_MASTER_OPTS"
2) org.apache.spark.deploy.worker.Worker:
OUR_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS $SPARK_WORKER_OPTS"
3) org.apache.spark.executor.CoarseGrainedExecutorBackend:
OUR_JAVA_OPTS="$SPARK_JAVA_OPTS $SPARK_EXECUTOR_OPTS"
4) org.apache.spark.executor.MesosExecutorBackend:
Same as above: OUR_JAVA_OPTS="$SPARK_JAVA_OPTS $SPARK_EXECUTOR_OPTS"
5) org.apache.spark.repl.Main:
OUR_JAVA_OPTS="$SPARK_JAVA_OPTS $SPARK_REPL_OPTS"
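Since step 5 already seeded OUR_JAVA_OPTS with either $SPARK_DAEMON_JAVA_OPTS or $SPARK_JAVA_OPTS, appending the per-class variable here yields exactly the combinations listed above. A sketch of the case statement:

case "$1" in
  'org.apache.spark.deploy.master.Master')
    OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_MASTER_OPTS" ;;
  'org.apache.spark.deploy.worker.Worker')
    OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_WORKER_OPTS" ;;
  'org.apache.spark.executor.CoarseGrainedExecutorBackend')
    OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_EXECUTOR_OPTS" ;;
  'org.apache.spark.executor.MesosExecutorBackend')
    OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_EXECUTOR_OPTS" ;;
  'org.apache.spark.repl.Main')
    OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_REPL_OPTS" ;;
esac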
7. Locate the java binary: prefer $JAVA_HOME/bin/java if JAVA_HOME is set, otherwise fall back to the java command on the PATH; if neither is found, print an error and exit.
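A sketch of the lookup order in step 7 (error message paraphrased):

if [ -n "$JAVA_HOME" ]; then
  RUNNER="$JAVA_HOME/bin/java"
elif command -v java >/dev/null 2>&1; then
  RUNNER="java"
else
  echo "JAVA_HOME is not set and no java found on PATH" >&2
  exit 1
fi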
8. SPARK_MEM=${SPARK_MEM:-512m}
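Step 8 only supplies a default heap size; the JAVA_OPTS used by the java command in step 14 is, as far as I recall, assembled from OUR_JAVA_OPTS, the native library path and this memory setting, roughly like this (the SPARK_LIBRARY_PATH part is from memory):

SPARK_MEM=${SPARK_MEM:-512m}
export SPARK_MEM

# JAVA_OPTS, consumed by the java command in step 14, combines the
# options collected above with the heap size
JAVA_OPTS="$OUR_JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"
export JAVA_OPTS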
9. If the RELEASE file does not exist (i.e. this is not a packaged release), look in assembly/target/scala-$SCALA_VERSION/ for spark-assembly.*hadoop.*.jar; if no matching jar or more than one matching jar is found, print a message and exit.
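The check in step 9 boils down to counting the matching jars (error messages paraphrased):

if [ ! -f "$FWDIR/RELEASE" ]; then
  num_jars=$(ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/ \
             | grep "spark-assembly.*hadoop.*.jar" | wc -l)
  if [ "$num_jars" -eq 0 ]; then
    echo "Failed to find Spark assembly; run 'sbt/sbt assembly' first." >&2
    exit 1
  fi
  if [ "$num_jars" -gt 1 ]; then
    echo "Found multiple Spark assembly jars; please remove all but one." >&2
    exit 1
  fi
fi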
10. Set TOOLS_DIR to the tools directory, then:
if tools/target/scala-$SCALA_VERSION/*assembly*[0-9Tg].jar exists, set SPARK_TOOLS_JAR to it;
if tools/target/spark-tools*[0-9Tg].jar exists, set SPARK_TOOLS_JAR to it, overriding the previous value.
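The second check runs after the first and unconditionally overwrites SPARK_TOOLS_JAR, so when both jars exist the spark-tools jar wins (the first pattern matches the sbt build output, the second the Maven build output, if memory serves). Sketch:

TOOLS_DIR="$FWDIR/tools"
SPARK_TOOLS_JAR=""
if [ -e "$TOOLS_DIR"/target/scala-$SCALA_VERSION/*assembly*[0-9Tg].jar ]; then
  # jar produced by the sbt build
  export SPARK_TOOLS_JAR=`ls "$TOOLS_DIR"/target/scala-$SCALA_VERSION/*assembly*[0-9Tg].jar`
fi
if [ -e "$TOOLS_DIR"/target/spark-tools*[0-9Tg].jar ]; then
  # jar produced by the Maven build; overrides the value set above
  export SPARK_TOOLS_JAR=`ls "$TOOLS_DIR"/target/spark-tools*[0-9Tg].jar`
fi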
11. Set CLASSPATH to the output of bin/compute-classpath.sh.
12. If the class being run is org.apache.spark.tools.JavaAPICompletenessChecker, also add SPARK_TOOLS_JAR to that CLASSPATH.
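Steps 11 and 12 combined, as a sketch; here SPARK_TOOLS_JAR is appended at the end of the classpath, though its exact position does not matter as long as it is present:

CLASSPATH=`"$FWDIR"/bin/compute-classpath.sh`

if [ "$1" = "org.apache.spark.tools.JavaAPICompletenessChecker" ]; then
  CLASSPATH="$CLASSPATH:$SPARK_TOOLS_JAR"
fi
export CLASSPATH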
13. If running under Cygwin, convert CLASSPATH (and, when used, SPARK_TOOLS_JAR) to Windows-style paths.
14. Finally, launch the JVM: java -cp $CLASSPATH $JAVA_OPTS $@
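Steps 13 and 14, sketched. Under Cygwin the JVM is a Windows program, so the classpath must be converted to Windows-style paths first; $RUNNER stands for the java binary located in step 7 (the real script may spell the final command slightly differently):

if $cygwin; then
  # convert the Unix-style path list into a Windows-style one
  CLASSPATH=`cygpath -wp "$CLASSPATH"`
fi

# "$@" is the class to run followed by its own arguments
exec "$RUNNER" -cp "$CLASSPATH" $JAVA_OPTS "$@"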