Spark RESTful job submission
Since Spark 1.4, starting the standalone master process also starts a REST server (listening on port 6066 by default) that accepts job-submission requests over HTTP.
Here is an example of submitting an application:
curl -X POST http://tssloginsight-spark:6066/v1/submissions/create \
  --header "Content-Type:application/json;charset=UTF-8" \
  --data '{
    "action" : "CreateSubmissionRequest",
    "appArgs" : [ "/data/sparkSink.conf" ],
    "appResource" : "file:/data/spark_es_sink-0.0.1-SNAPSHOT-driver.jar",
    "clientSparkVersion" : "2.1.0",
    "environmentVariables" : { "SPARK_ENV_LOADED" : "1" },
    "mainClass" : "com.student.MainSparkStreaming",
    "sparkProperties" : {
      "spark.jars" : "file:/myfilepath/spark-job-1.0.jar",
      "spark.driver.supervise" : "false",
      "spark.app.name" : "MyJob",
      "spark.eventLog.enabled" : "false",
      "spark.submit.deployMode" : "cluster",
      "spark.master" : "spark://tssloginsight-spark:6066"
    }
  }'
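If the submission is accepted, the response body contains a submissionId for the launched driver, and the same REST server can be polled for that driver's status. A minimal example, where the driver ID is a placeholder taken from a hypothetical response:

curl http://tssloginsight-spark:6066/v1/submissions/status/driver-20170101000000-0000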
The example Spark Streaming program reads data from Kafka and writes it to Elasticsearch; everything runs inside Docker containers. Running it hit the following error:
org.apache.spark.SparkException: Couldn't find leader offsets for Set([test,0])
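The source of the streaming job itself is not shown here; the following is only a rough sketch of what such a consumer typically looks like with the spark-streaming-kafka-0-8 connector, which is the integration that raises this leader-offset error. The broker address and batch interval are assumptions; the topic name "test" comes from the error message above.

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object MainSparkStreaming {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("MyJob"), Seconds(5))
    // The direct stream discovers partition leaders through this broker address;
    // if the broker advertises 0.0.0.0, the leader lookup fails with the error above.
    val kafkaParams = Map("metadata.broker.list" -> "tssloginsight-kafka:9092")
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("test"))
    stream.map(_._2).print()  // the real job would write each batch to Elasticsearch instead
    ssc.start()
    ssc.awaitTermination()
  }
}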
Here is the relevant part of docker-compose.yml:
tssloginsight-kafka:
  image: spotify/kafka
  environment:
    - ADVERTISED_HOST=0.0.0.0
    - ADVERTISED_PORT=9092
  ports:
    - 9092:9092
    - 2181:2181
The cause turned out to be the Kafka startup configuration: if the broker's advertised host is set to 0.0.0.0, this error appears; with a concrete IP (or resolvable hostname) it does not.
Clients discover partition leaders through the address the broker advertises, and 0.0.0.0 is not a reachable address, so applications like the Spark job above cannot work out where the broker actually is and the leader-offset lookup fails.
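For the containerized setup above, one way to apply the fix is to advertise an address the Spark containers can actually reach, for example the host machine's IP or the compose service name. The value below assumes the consumers resolve the service name on the same compose network:

tssloginsight-kafka:
  image: spotify/kafka
  environment:
    - ADVERTISED_HOST=tssloginsight-kafka
    - ADVERTISED_PORT=9092
  ports:
    - 9092:9092
    - 2181:2181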