Fix for "Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep"
14/03/26 23:10:04 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
14/03/26 23:10:05 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
14/03/26 23:10:06 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
14/03/26 23:10:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:10020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
This message appeared while using Sqoop to export a Hive table to MySQL, and the client kept retrying the connection. (Port 10020 is the default port of the MapReduce JobHistory Server, which the client contacts after the job runs; the 0.0.0.0 host suggests the client fell back to a default address.) After searching online, the cause turned out to be an imprecisely specified HDFS path.
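Before changing the command, it can help to confirm the fully qualified location that Hive itself records for the table. A minimal check, assuming the Hive CLI is available and the table is named weblog_2013_05_30, matching the warehouse directory in the commands below:

# Print the table's recorded location; the Location: line shows the
# complete URI, including the hdfs:// scheme and the nameservice
hive -e "DESCRIBE FORMATTED weblog_2013_05_30" | grep -i 'location'

That fully qualified URI is exactly the form the export directory should take.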
Failing command:
sqoop export --connect jdbc:mysql://c6h2:3306/log --username root --password 123 --table dailylog --fields-terminated-by '\001' --export-dir '/user/hive/warehouse/weblog_2013_05_30'
Working command:
sqoop export --connect jdbc:mysql://c6h2:3306/log --username root --password 123 --table dailylog --fields-terminated-by '\001' --export-dir 'hdfs://cluster1/user/hive/warehouse/weblog_2013_05_30'
The fix adds the hdfs:// scheme and the cluster (nameservice) name to the export directory; my setup is a Hadoop 2 HA cluster, where the nameservice is cluster1.
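To double-check the URI before rerunning the export, the HA nameservice name can be read from the client configuration and the fully qualified path can be listed directly. A short sketch, assuming the same cluster1 nameservice and warehouse path as above:

# Print the configured HA nameservice name (should output cluster1 here)
hdfs getconf -confKey dfs.nameservices

# Verify that the fully qualified export directory resolves and exists
hdfs dfs -ls hdfs://cluster1/user/hive/warehouse/weblog_2013_05_30

If both commands succeed, the sqoop export with the hdfs://cluster1/... path should no longer hang retrying the connection.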