Spark Error Summary 2

###################################################

// Read an HBase table as an RDD. Assumes `sc` is a SparkSession and `conf` is an HBase
// Configuration with TableInputFormat's input table already set.
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.storage.StorageLevel

val userDF = sc.sparkContext.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
  classOf[org.apache.hadoop.hbase.client.Result])
  .repartition(DEFAUT_PARTITIONS)

userDF.persist(StorageLevel.MEMORY_AND_DISK).count() // Without this persist + count, the job built on newAPIHadoopRDD simply hangs. Strange.

#########################################
The ALS implementation in Spark MLlib
https://blog.csdn.net/pearl8899/article/details/80336938

Spark Performance Optimization Guide: Advanced Part (very detailed)
https://blog.csdn.net/lukabruce/article/details/81504220

Spark SQL basic concepts and usage
https://www.cnblogs.com/swordfall/p/9006088.html

Spark MLlib machine learning
https://www.cnblogs.com/swordfall/p/9456222.html

A few recommended high-quality Java learning blogs
https://blog.csdn.net/wistbean/article/details/80964377

Similarity algorithms: cosine similarity (a small Scala sketch follows after this list)
https://blog.csdn.net/zz_dd_yy/article/details/51926305
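
To make the last link concrete, here is a minimal, dependency-free Scala sketch of cosine similarity (the function name and example vectors are mine, not taken from the linked article):

// Cosine similarity: dot(a, b) / (||a|| * ||b||); 1.0 means same direction, 0.0 means orthogonal.
def cosineSimilarity(a: Array[Double], b: Array[Double]): Double = {
  require(a.length == b.length, "vectors must have the same dimension")
  val dot   = a.zip(b).map { case (x, y) => x * y }.sum
  val normA = math.sqrt(a.map(x => x * x).sum)
  val normB = math.sqrt(b.map(y => y * y).sum)
  if (normA == 0.0 || normB == 0.0) 0.0 else dot / (normA * normB)
}

cosineSimilarity(Array(1.0, 2.0, 3.0), Array(2.0, 4.0, 6.0)) // ≈ 1.0, the two vectors are parallel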

###########################################################################

java.lang.IllegalArgumentException: System memory 259522560 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:216)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:198)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:330)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:174)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:257)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:432)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
at scala01_DataStructure.ColStats$.main(ColStats.scala:9)
at scala01_DataStructure.ColStats.main(ColStats.scala)
20/09/03 17:08:58 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.IllegalArgumentException: System memory 259522560 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.

This error comes up when running Spark locally for testing: the driver JVM's heap is below the 450 MB (471859200-byte) minimum that Spark's UnifiedMemoryManager requires. Solution:
set the IDE's VM options to -Xms256m -Xmx1024m

https://blog.csdn.net/weixin_43322685/article/details/82961748
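
If editing the VM options is not convenient, a hedged alternative is to raise the memory figure this check compares against. Note that spark.testing.memory is an internal Spark setting read by UnifiedMemoryManager, not a public configuration option, so the -Xmx fix above remains the cleaner one; this is only a sketch:

// Minimal sketch: satisfy UnifiedMemoryManager's 450 MB minimum when running in local mode.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("ColStats")
  .config("spark.testing.memory", (512 * 1024 * 1024).toString) // 512 MB, above the 471859200-byte floor
  .getOrCreate()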


############################################

Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.spark.SparkContext.withScope(SparkContext.scala:701)
at org.apache.spark.SparkContext.textFile(SparkContext.scala:819)
at scala01_DataStructure.ColStats$.main(ColStats.scala:15)
at scala01_DataStructure.ColStats.main(ColStats.scala)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.4.2
at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:56)
at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:19)
at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:550)
at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
... 4 more
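
The root cause is a dependency conflict: some other dependency on the classpath pulls in Jackson 2.4.x, which is older than what Spark's jackson-module-scala requires. A hedged pom.xml sketch of the usual fix is to pin Jackson to the version your Spark build ships (the 2.6.7.x numbers below are an assumption that matches Spark 2.x; adjust them to your Spark version):

<!-- Sketch only: force a Jackson version consistent with Spark so the old 2.4.2 jar no longer wins. -->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.6.7</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.module</groupId>
    <artifactId>jackson-module-scala_2.11</artifactId>
    <version>2.6.7.1</version>
</dependency>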


################################################################


[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project NewsFeed: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: -10000(Exit value: -10000) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

posted @ 2020-09-07 17:44  等木鱼的猫