spark-shell --conf
spark-shell --conf -h
Usage: ./bin/spark-shell [options]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME           User to impersonate when submitting the application.

  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output
  --version,                  Print the version of current Spark

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 YARN-only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
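The help text only says "--conf PROP=VALUE: Arbitrary Spark configuration property", so here is a minimal sketch of how it is typically used. The flag may be repeated, one PROP=VALUE pair per occurrence; the property names below are standard Spark settings, but the values (and the file name my-spark.conf) are illustrative placeholders, not recommendations.

  # Pass configuration properties directly on the command line;
  # each --conf takes exactly one PROP=VALUE pair.
  spark-shell --master local[4] \
    --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
    --conf spark.executor.memory=2g

  # Alternatively, keep the same properties in a file (one
  # "property value" pair per line) and load it with --properties-file;
  # without this flag, conf/spark-defaults.conf is used if present.
  spark-shell --properties-file my-spark.conf

Note that some properties also have dedicated flags (e.g. --executor-memory covers spark.executor.memory); --conf is the general escape hatch for everything else.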