HBase scan timeout problem
The exception is shown below:
2018-11-08 16:55:52,361 INFO [main] org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: recovered from org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions: Thu Nov 08 16:55:52 CST 2018, null, java.net.SocketTimeoutException: callTimeout=180000, callDuration=180111: row '120180358862' on table 'ubas:stats_job_user_analysis' at region=ubas:stats_job_user_analysis,1\x11201803\x1158862,1536809361468.aa5027e279ba39d6505d8b507c6aa3a0., hostname=foxlog02.engine.wx,16020,1538805121280, seqNum=61831
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:413)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:371)
    at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:210)
    at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:147)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:216)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.net.SocketTimeoutException: callTimeout=180000, callDuration=180111: row '120180358862' on table 'ubas:stats_job_user_analysis' at region=ubas:stats_job_user_analysis,1\x11201803\x1158862,1536809361468.aa5027e279ba39d6505d8b507c6aa3a0., hostname=foxlog02.engine.wx,16020,1538805121280, seqNum=61831
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Call to FoxLog02.engine.wx/192.168.202.2:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=2, waitTime=180001, operationTimeout=180000 expired.
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1271)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:220)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
    ... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=2, waitTime=180001, operationTimeout=180000 expired.
    at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1245)
    ... 13 more
2018-11-08 16:55:52,362 WARN [main] org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: We are restarting the first next() invocation, if your mapper has restarted a few other times like this then you should consider killing this job and investigate why it's taking so long.
2018-11-08 17:01:45,893 INFO [main] com.tracker.offline.business.job.user.JobAnalMonthTopMR: count :0
2018-11-08 17:01:45,893 INFO [main] com.tracker.offline.business.job.user.JobAnalMonthTopMR: tablename execute end :ubas:stats_job_user_analysis
2018-11-08 17:04:45,896 WARN [main] org.apache.hadoop.hbase.client.ScannerCallable: Ignore, probably already closed
The scan and job parameters set in code:
Scan scan = new Scan();
scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, tablename.getBytes());
scan.setCaching(2000);        // rows fetched per scanner RPC
scan.setCacheBlocks(false);   // don't pollute the block cache for a full MR scan
// OR-combine one PrefixFilter per (j, month) combination
FilterList filter = new FilterList(FilterList.Operator.MUST_PASS_ONE);
for (int i = 0; i < monthsList.size(); i++) {
    for (int j = 0; j < 3; j++) {
        filter.addFilter(new PrefixFilter(Bytes.toBytes(j + RowUtil.ROW_SPLIT + monthsList.get(i))));
    }
    scan.setFilter(filter);   // re-set on every month iteration; harmless, could sit after the loop
}
HbaseMapReduce mr_1 = new HbaseMapReduce(JobAnalMonthTopMR.class, JobMonthMapper.class,
        Text.class, Text.class, trackerconfig, outputPath + "/01");
mr_1.setReducerClass(JobMonthReducer.class)
    .setScan(scan)
    .setParameter("hbase.client.scanner.timeout.period", "180000")
    .setParameter("hbase.rpc.timeout", "180000")
    .setJarName("JobAnalMonthTopMR-statistic");
mr_1.waitForCompletion();
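For reference, with the stock HBase client API the same two timeout properties are usually raised on the job's HBase Configuration before the connection is created (HbaseMapReduce above is an in-house wrapper that presumably does the equivalent). A minimal sketch; the 300000 ms value is only an example, not the value used in the original job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Both values are milliseconds and must cover the slowest single scanner RPC,
// including the extra network hop now that MapReduce and HBase run on separate clusters.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.client.scanner.timeout.period", "300000"); // timeout for each scanner next() RPC
conf.set("hbase.rpc.timeout", "300000");                   // generic client RPC timeout
// conf is then handed to the MapReduce job / TableMapReduceUtil setup.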
Basic settings of the code and the scenario in which the error appears:
1- Timeout: 180 s;
2- Scanner caching: 2000 rows, each row has three columns whose values are Longs;
3- Size of the scanned table: 54 GB;
4- One filter, a prefix-based lookup (PrefixFilter);
5- The cluster was just migrated: MapReduce and HBase used to run on the same cluster, but after the migration they no longer do;
Troubleshooting:
1- Check whether the table is corrupted (see the hbase hbck command);
2- Timeout-related fixes: (1) increase the timeout; (2) reduce the scanner caching. The problem is not here, though: each row is tiny, so even the full batch of 2000 cached rows does not occupy much memory;
3- The prefix filter is the real cause: filter-based and prefix-based scans are slow, because every row in the scanned blocks has to be checked against the filter. When only a tiny fraction of the rows in those blocks qualify, the region server can hardly gather 2000 matching rows within the timeout window. The fix is to replace the prefix lookup with a range scan, as sketched below.
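Below is a minimal sketch of one way to turn the prefix filters into explicit row ranges. It assumes HBase 1.1+ where MultiRowRangeFilter is available: a plain Scan carries only a single start/stop row pair, so covering every (j, month) prefix needs either one Scan per prefix or a MultiRowRangeFilter, which lets the region server seek straight from the end of one range to the start of the next instead of testing every row. stopRowForPrefix and buildScan are illustrative names; RowUtil.ROW_SPLIT and monthsList are the ones from the job above.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixRangeScanSketch {

    // Exclusive stop row for a prefix: drop trailing 0xFF bytes, then increment the last byte.
    static byte[] stopRowForPrefix(byte[] prefix) {
        int i = prefix.length - 1;
        while (i >= 0 && prefix[i] == (byte) 0xFF) {
            i--;
        }
        if (i < 0) {
            return HConstants.EMPTY_END_ROW;   // prefix is all 0xFF: scan to the end of the table
        }
        byte[] stop = Bytes.copy(prefix, 0, i + 1);
        stop[i]++;
        return stop;
    }

    // One [start, stop) range per (j, month) prefix, all carried by a single Scan.
    static Scan buildScan(List<String> monthsList, String rowSplit) throws IOException {
        List<RowRange> ranges = new ArrayList<RowRange>();
        for (String month : monthsList) {
            for (int j = 0; j < 3; j++) {
                byte[] start = Bytes.toBytes(j + rowSplit + month);
                ranges.add(new RowRange(start, true, stopRowForPrefix(start), false));
            }
        }
        Scan scan = new Scan();
        scan.setCaching(2000);       // can be lowered further if a single RPC still approaches the timeout
        scan.setCacheBlocks(false);
        scan.setFilter(new MultiRowRangeFilter(ranges));
        return scan;
    }
}

The resulting Scan would be handed to setScan() exactly as before; because the server now skips the long stretches of non-matching rows in the 54 GB table instead of evaluating a PrefixFilter on each of them, every next() batch should be gathered well inside the 180 s window.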