Netty 4 Memory Management (Part 1)

https://blog.twitter.com/2013/netty-4-at-twitter-reduced-gc-overhead

http://www.infoq.com/news/2013/11/netty4-twitter

Reducing GC pressure and memory bandwidth consumption

A problem was Netty 3’s reliance on the JVM’s memory management for buffer allocations. Netty 3 creates a new heap buffer whenever a new message is received or a user sends a message to a remote peer. This means a ‘new byte[capacity]’ for each new buffer. These buffers caused GC pressure and consumed memory bandwidth: allocating a new byte array consumes memory bandwidth to fill the array with zeros for safety. However, the zero-filled byte array is very likely to be filled with the actual data, consuming the same amount of memory bandwidth. We could have reduced the consumption of memory bandwidth to 50% if the Java Virtual Machine (JVM) provided a way to create a new byte array which is not necessarily filled with zeros, but there’s no such way at this moment.

To address this issue, we made the following changes for Netty 4.

In Netty 3, every message received or sent creates a new heap buffer; at high message rates this causes GC pressure and wastes memory bandwidth (the zero-filled byte array consumes the same bandwidth as writing the actual data into it).
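
A minimal sketch of that double pass over memory, written with plain NIO; the class and variable names here are illustrative only, not taken from the original post:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class ZeroFillCost {
  // Roughly what allocating a fresh heap buffer per message costs:
  static int readOneMessage(SocketChannel ch) throws IOException {
    byte[] backing = new byte[8192];            // the JVM zero-fills the new array: first pass over the memory
    ByteBuffer buf = ByteBuffer.wrap(backing);
    return ch.read(buf);                        // the received bytes overwrite the zeros: second pass
  }
}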

1. Netty 4 removes event objects (removal of event objects): instead of a single handler method that dispatches on the event type, version 4 defines a separate handler method for each event type.

Netty3

class Before implements ChannelUpstreamHandler {
  void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e) {
    if (e instanceof MessageEvent) { ... }
    else if (e instanceof ChannelStateEvent) { ... }
    ...
  }
}

Netty4

class After implements ChannelInboundHandler {
  void channelActive(ChannelHandlerContext ctx) { ... }
  void channelInactive(ChannelHandlerContext ctx) { ... }
  void channelRead(ChannelHandlerContext ctx, Object msg) { ... }
  void userEventTriggered(ChannelHandlerContext ctx, Object evt) { ... }
  ...
}

2. Netty 4 introduces a new interface, ByteBufAllocator, and provides a buffer pool implementation behind it. The pooled allocator is a pure-Java variant of jemalloc (see also Netty 4 Memory Management (Part 2), on jemalloc and "Scalable memory allocation using jemalloc"), which implements buddy memory allocation and slab allocation. With its own allocator, Netty no longer needs to zero-fill buffers, so it stops wasting memory bandwidth on them. However, this approach opens another can of worms: reference counting. Because GC can no longer be relied on to return unused buffers to the pool, we have to be very careful about leaks. Even a single handler that forgets to release a buffer can make the server's memory usage grow without bound.
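
A minimal sketch of the release discipline this requires, using Netty 4's pooled allocator. PooledByteBufAllocator, ReferenceCountUtil and ChannelInboundHandlerAdapter are real Netty 4 classes, but the EchoHandler shown here is a hypothetical example, not code from the original post:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

class EchoHandler extends ChannelInboundHandlerAdapter {
  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf in = (ByteBuf) msg;
    try {
      // Allocate the outbound buffer from the pool instead of 'new byte[capacity]';
      // pooled buffers are reused, so they are not zero-filled on every allocation.
      ByteBuf out = PooledByteBufAllocator.DEFAULT.buffer(in.readableBytes());
      out.writeBytes(in);
      ctx.writeAndFlush(out);            // Netty releases 'out' once it has been written to the socket
    } finally {
      ReferenceCountUtil.release(in);    // forgetting this leaks the pooled inbound buffer
    }
  }
}

Inside a handler, ctx.alloc() can be used instead of PooledByteBufAllocator.DEFAULT; it returns whichever allocator the channel was configured with.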