Add HADOOP_CONF_DIR to spark-env.sh so that Spark reads and writes files on HDFS

I had just installed Spark and ran the WordCount program in local mode with spark-submit; the files it read and wrote were on the host machine, not on HDFS. After the spark-env.sh change described below, the same spark-submit command read and wrote HDFS files instead.
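
A sketch of the kind of local-mode submit in question; the jar name, main class, and input/output paths here are my own placeholders, not values from this post:

# hypothetical WordCount submit in local mode; jar, class, and paths are assumptions
spark-submit \
  --master local[2] \
  --class com.example.WordCount \
  wordcount.jar \
  README.MD wordcount-output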

Running spark-shell on YARN

spark-shell --master yarn-client

First error:

Exception in thread "main" org.apache.spark.SparkException: When running with master 'yarn-client' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
        at org.apache.spark.deploy.SparkSubmitArguments.error(SparkSubmitArguments.scala:657)
        at org.apache.spark.deploy.SparkSubmitArguments.validateSubmitArguments(SparkSubmitArguments.scala:290)
        at org.apache.spark.deploy.SparkSubmitArguments.validateArguments(SparkSubmitArguments.scala:251)
        at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:120)
        at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$1.<init>(SparkSubmit.scala:911)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:911)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:81)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Modify spark-env.sh

In /usr/local/spark-2.4.0-bin-hadoop2.7/conf/spark-env.sh, add:

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
Then restart the Hadoop cluster and the Spark cluster.
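
One possible restart sequence, assuming the stock sbin scripts and the Spark install path shown above:

# restart HDFS and YARN (standard Hadoop 2.7 sbin scripts)
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
# restart the Spark standalone cluster
/usr/local/spark-2.4.0-bin-hadoop2.7/sbin/stop-all.sh
/usr/local/spark-2.4.0-bin-hadoop2.7/sbin/start-all.sh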

Second error:

Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://master:9000/user/root/README.MD
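
To confirm what the error says, you can list the path it resolved to (taken from the message above); this is just a verification step, not part of the original workflow:

# check whether the input exists under the default HDFS user directory
hdfs dfs -ls hdfs://master:9000/user/root/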

Analysis and resolution

1. Originally the job read README.MD from the current directory and wrote its output to a directory on the local host. After modifying spark-env.sh and restarting the clusters, Spark reads files from HDFS instead.
2. Upload the local README.MD to /user/root/README.MD with the hdfs command (see the commands after this list); after that, the spark-submit command succeeds.
3. As a check, reverting the spark-env.sh change and restarting the clusters makes spark-submit run without errors again, reading local files as before.
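
One way to do the upload in step 2, assuming README.MD is in the current local directory:

# create the default HDFS user directory and upload the input file
hdfs dfs -mkdir -p /user/root
hdfs dfs -put README.MD /user/root/README.MD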

posted @ 2020-02-24 22:29  碧海潮心