Dissecting the HBase BucketAllocatorException

Recently, the following WARN log was observed in an HBase cluster:

2020-04-18 16:17:03,081 WARN [regionserver/xxx-BucketCacheWriter-1] bucket.BucketCache:Failed allocation for 604acc82edd349ca906939af14464bcb_175674734;
org.apache.hadoop.hbase.io.hfile.bucket.BucketAllocatorException: Allocation too big size=1114202; adjust BucketCache sizes hbase.bucketcache.bucket.sizes to accomodate if size seems reasonable and you want it cached.

Roughly, this means: the block is too large (size=1114202 bytes) for the BucketAllocator to allocate space for it; if the size seems reasonable and you want the block cached, you can adjust the hbase.bucketcache.bucket.sizes parameter.

By default, the largest block the HBase BucketCache can cache is 512 KB, i.e. hbase.bucketcache.bucket.sizes=5120,9216,17408,33792,41984,50176,58368,66560,99328,132096,197632,263168,394240,525312, which defines 14 bucket sizes. To cache larger blocks, we can change the parameter to hbase.bucketcache.bucket.sizes=5120,9216,17408,33792,41984,50176,58368,66560,99328,132096,197632,263168,394240,525312,1049600,2098176, which allows blocks of up to 2 MB.
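As an illustration only, here is a minimal sketch of supplying the extended value through the Hadoop Configuration API; in a real deployment the property is normally set in hbase-site.xml on the RegionServers and takes effect after a restart. The class name BucketSizesConfigExample is made up for this example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketSizesConfigExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Append two extra bucket sizes (1 MB + 1 KB and 2 MB + 1 KB) to the defaults
    // so that blocks of up to roughly 2 MB can be cached.
    conf.set("hbase.bucketcache.bucket.sizes",
        "5120,9216,17408,33792,41984,50176,58368,66560,99328,132096,"
            + "197632,263168,394240,525312,1049600,2098176");
    System.out.println(conf.get("hbase.bucketcache.bucket.sizes"));
  }
}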

Let's take a quick look at the relevant source code. The class involved is:
/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java
BucketAllocator organizes and manages the buckets and allocates memory space for blocks.

  /**
   * Allocate a block with specified size. Return the offset
   * @param blockSize size of block
   * @throws BucketAllocatorException
   * @throws CacheFullException
   * @return the offset in the IOEngine
   */
  public synchronized long allocateBlock(int blockSize) throws CacheFullException,
      BucketAllocatorException {
    assert blockSize > 0;
    BucketSizeInfo bsi = roundUpToBucketSizeInfo(blockSize);
    if (bsi == null) {
      throw new BucketAllocatorException("Allocation too big size=" + blockSize +
        "; adjust BucketCache sizes " + BlockCacheFactory.BUCKET_CACHE_BUCKETS_KEY +
        " to accomodate if size seems reasonable and you want it cached.");
    }
    long offset = bsi.allocateBlock();

    // Ask caller to free up space and try again!
    if (offset < 0)
      throw new CacheFullException(blockSize, bsi.sizeIndex());
    usedSize += bucketSizes[bsi.sizeIndex()];
    return offset;
  }

If the call to roundUpToBucketSizeInfo() returns null, a BucketAllocatorException is thrown. Let's look at the roundUpToBucketSizeInfo() method:

  /**
   * Round up the given block size to bucket size, and get the corresponding
   * BucketSizeInfo
   */
  public BucketSizeInfo roundUpToBucketSizeInfo(int blockSize) {
    for (int i = 0; i < bucketSizes.length; ++i)
      if (blockSize <= bucketSizes[i])
        return bucketSizeInfos[i];
    return null;
  }

This method compares the given blockSize against the bucketSizes array starting from index 0; as soon as blockSize is less than or equal to bucketSizes[i], the corresponding bucketSizeInfos[i] is returned and the block will be placed in a bucket of size bucketSizes[i]. If blockSize is larger than every entry in the array, the method returns null, which triggers the BucketAllocatorException above.
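To make the lookup concrete, here is a small standalone sketch (not HBase code; the class and method names are made up) that mimics the same rounding over the default bucket sizes and over the extended list from the beginning of this post:

import java.util.Arrays;

public class RoundUpDemo {
  // Same values as DEFAULT_BUCKET_SIZES, written out in bytes.
  static final int[] DEFAULT_SIZES = { 5120, 9216, 17408, 33792, 41984, 50176,
      58368, 66560, 99328, 132096, 197632, 263168, 394240, 525312 };

  // Mimics roundUpToBucketSizeInfo(): returns the first bucket size >= blockSize,
  // or -1 if the block is too big for every bucket (the exception case).
  static int roundUp(int[] bucketSizes, int blockSize) {
    for (int size : bucketSizes) {
      if (blockSize <= size) {
        return size;
      }
    }
    return -1;
  }

  public static void main(String[] args) {
    int blockSize = 1114202; // the size from the WARN log, roughly 1.06 MB

    // With the default sizes the block fits nowhere -> BucketAllocatorException.
    System.out.println(roundUp(DEFAULT_SIZES, blockSize));   // prints -1

    // With the extended configuration the block lands in the 2 MB bucket.
    int[] extended = Arrays.copyOf(DEFAULT_SIZES, DEFAULT_SIZES.length + 2);
    extended[DEFAULT_SIZES.length] = 1049600;       // 1 * 1024 * 1024 + 1024
    extended[DEFAULT_SIZES.length + 1] = 2098176;   // 2 * 1024 * 1024 + 1024
    System.out.println(roundUp(extended, blockSize));        // prints 2098176
  }
}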

Now let's look at how the bucketSizes array is initialized:

private static final int DEFAULT_BUCKET_SIZES[] = { 4 * 1024 + 1024, 8 * 1024 + 1024,
  16 * 1024 + 1024, 32 * 1024 + 1024, 40 * 1024 + 1024, 48 * 1024 + 1024,
  56 * 1024 + 1024, 64 * 1024 + 1024, 96 * 1024 + 1024, 128 * 1024 + 1024,
  192 * 1024 + 1024, 256 * 1024 + 1024, 384 * 1024 + 1024,
  512 * 1024 + 1024 };

private final int[] bucketSizes;
BucketAllocator(long availableSpace, int[] bucketSizes)
  throws BucketAllocatorException {
  this.bucketSizes = bucketSizes == null ? DEFAULT_BUCKET_SIZES : bucketSizes;
  Arrays.sort(this.bucketSizes);
  ...
}

As we can see, when bucketSizes == null the DEFAULT_BUCKET_SIZES array is used, and the resulting array is then sorted. This sort is what guarantees that the ascending scan in roundUpToBucketSizeInfo() finds the smallest bucket that can hold the block.

The values in DEFAULT_BUCKET_SIZES are exactly the default value of the hbase.bucketcache.bucket.sizes parameter: each entry is a block size in KB plus an extra 1 KB (for example 512 * 1024 + 1024 = 525312), which is why 512 KB is the largest cacheable block size by default.
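As a quick sanity check (illustrative only; the class name is made up), the expressions in DEFAULT_BUCKET_SIZES evaluate to exactly the default string shown earlier:

public class DefaultBucketSizes {
  public static void main(String[] args) {
    int[] kb = { 4, 8, 16, 32, 40, 48, 56, 64, 96, 128, 192, 256, 384, 512 };
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < kb.length; i++) {
      if (i > 0) sb.append(',');
      sb.append(kb[i] * 1024 + 1024);   // each bucket is the block size plus 1 KB
    }
    // Prints: 5120,9216,17408,33792,41984,50176,58368,66560,99328,132096,197632,263168,394240,525312
    System.out.println(sb);
  }
}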


Please credit the source when reposting! You are welcome to follow my WeChat official account 【HBase工作笔记】.
