Netty User Guide
The Netty project is an effort to provide an asynchronous event-driven network application framework and tooling for the rapid development of maintainable high-performance high-scalability protocol servers and clients.
Software requirements: JDK 1.7 and Netty 4.0.
The simplest possible protocol: the server discards any data it receives. Netty processes received data in an event-driven way. The ChannelInboundHandler interface defines the callback methods for handling inbound data; channelRead() is invoked for each message msg received from the client. ChannelHandlerContext is the context of a ChannelHandler: it wraps the handler and holds a reference to the next handler in the chain. Every Channel has a ChannelPipeline; as data flows through the Channel it is processed by each ChannelHandler in the pipeline in turn, which is how decoding and writing of data are accomplished.
Netty provides its own buffer type, ByteBuf, in place of NIO's ByteBuffer; it keeps two separate indices, one for the current read position and one for the current write position.
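As a quick illustration of the two indices, here is a minimal sketch (not part of the original guide):
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
public class ByteBufIndexDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(8);       // readerIndex = 0, writerIndex = 0
        buf.writeInt(42);                       // writing advances only the writerIndex (now 4)
        System.out.println(buf.readerIndex());  // 0
        System.out.println(buf.writerIndex());  // 4
        System.out.println(buf.readInt());      // 42; reading advances only the readerIndex
        buf.release();                          // ByteBuf is reference-counted, so release it
    }
}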
There are a few simple examples under the example package in the source tree. The comment section of the original guide contains plenty of complaints about inaccuracies in the article and about it not being as good as the MINA documentation.
package io.netty.example.discard;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
/**
* Handles a server-side channel.
*/
public class DiscardServerHandler extends ChannelInboundHandlerAdapter { // (1)
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) { // (2)
// Discard the received data silently.
((ByteBuf) msg).release(); // (3)
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
// Close the connection when an exception is raised.
cause.printStackTrace();
ctx.close();
}
}
1. DiscardServerHandler extends ChannelInboundHandlerAdapter, which is an implementation of the ChannelInboundHandler interface. ChannelInboundHandler defines a variety of event handler methods for connection and data events, so for now it is enough to extend ChannelInboundHandlerAdapter rather than implement the interface yourself.
2. The channelRead() method is called whenever a message is received from the client. Here, the type of the received message is ByteBuf.
3. ByteBuf is a reference-counted object and must be released via release(). Keep in mind that it is the handler's responsibility to release any reference-counted object passed to it. channelRead() is usually implemented as follows:
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // Do something with msg
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }
4. The exceptionCaught() event handler method is called with a Throwable when an exception is raised by Netty due to an I/O error, or by a handler implementation due to an exception thrown while processing events. In most cases the caught exception should be logged and its associated channel closed here, although the implementation of this method can differ depending on how you want to deal with an exceptional situation. For example, you might want to send a response message with an error code before closing the connection, as sketched below.
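A hedged sketch of an exceptionCaught() that sends a simple error message before closing; the "ERROR\n" payload is just an illustrative assumption, not part of the guide:
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
    cause.printStackTrace();
    // Write an illustrative error message, then close once it has been flushed.
    ByteBuf reply = ctx.alloc().buffer();
    reply.writeBytes("ERROR\n".getBytes(io.netty.util.CharsetUtil.US_ASCII));
    ctx.writeAndFlush(reply).addListener(io.netty.channel.ChannelFutureListener.CLOSE);
}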
Bootstrapping the Server
package io.netty.example.discard;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
/**
* Discards any incoming data.
*/
public class DiscardServer {
private int port;
public DiscardServer(int port) {
this.port = port;
}
public void run() throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup(); // (1) event loop group that accepts incoming connections
EventLoopGroup workerGroup = new NioEventLoopGroup(); // event loop group that handles the traffic of accepted connections
try {
ServerBootstrap b = new ServerBootstrap(); // (2) helper class that sets up a server
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class) // (3) use NioServerSocketChannel to instantiate a Channel that accepts connections
.childHandler(new ChannelInitializer<SocketChannel>() { // (4)
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new DiscardServerHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128) // (5)
.childOption(ChannelOption.SO_KEEPALIVE, true); // (6)
// Bind and start to accept incoming connections.
ChannelFuture f = b.bind(port).sync(); // (7)
// Wait until the server socket is closed.
// In this example, this does not happen, but you can do that to gracefully
// shut down your server.
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
}
}
public static void main(String[] args) throws Exception {
int port;
if (args.length > 0) {
port = Integer.parseInt(args[0]);
} else {
port = 8080;
}
new DiscardServer(port).run();
}
}
1. NioEventLoopGroup is a multithreaded event loop that handles I/O operations. Netty provides various EventLoopGroup implementations for different kinds of transports. We are implementing a server-side application in this example, and therefore two NioEventLoopGroups will be used. The first one, often called 'boss', accepts incoming connections. The second one, often called 'worker', handles the traffic of an accepted connection once the boss accepts the connection and registers it with the worker. How many threads are used and how they are mapped to the created Channels depends on the EventLoopGroup implementation and may even be configurable via a constructor.
2. ServerBootstrap is a helper class that sets up a server. You can set up the server using a Channel directly; however, please note that this is a tedious process, and you do not need to do that in most cases.
3. Here, we specify the NioServerSocketChannel class, which is used to instantiate a new Channel to accept incoming connections.
4. The handler specified here will always be evaluated by a newly accepted Channel. ChannelInitializer is a special handler that is purposed to help a user configure a new Channel. It is most likely that you will want to configure the ChannelPipeline of the new Channel by adding handlers such as DiscardServerHandler to implement your network application; received data is processed by the chain of handlers you add to the ChannelPipeline. As the application gets complicated, it is likely that you will add more handlers to the pipeline and eventually extract this anonymous class into a top-level class.
5. You can also set parameters which are specific to the Channel implementation. We are writing a TCP/IP server, so we are allowed to set socket options such as tcpNoDelay and keepAlive. Please refer to the apidocs of ChannelOption and the specific ChannelConfig implementations for an overview of the supported ChannelOptions.
6. Did you notice option() and childOption()? option() is for the NioServerSocketChannel that accepts incoming connections, while childOption() is for the Channels accepted by the parent ServerChannel, which is NioServerSocketChannel in this case.
7. We are ready to go now. What's left is to bind to the port and to start the server. Here, we bind to port 8080 of all NICs (network interface cards) in the machine. You can call the bind() method as many times as you want (with different bind addresses), as the sketch after this list shows.
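For example, a minimal sketch (an assumption, not part of the example) of binding the same ServerBootstrap to two ports:
// Each bind() call creates and binds a new server Channel.
ChannelFuture f1 = b.bind(8080).sync();
ChannelFuture f2 = b.bind(8081).sync();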
Looking into the Received Data
Run telnet localhost 8080 on the command line to test the connection to the server.
To make the server print the data it receives, modify the channelRead() method of DiscardServerHandler:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf in = (ByteBuf) msg;
try {
while (in.isReadable()) { // (1)
System.out.print((char) in.readByte());
System.out.flush();
}
} finally {
ReferenceCountUtil.release(msg); // (2)
}
}
1. This inefficient loop can actually be simplified to: System.out.println(in.toString(io.netty.util.CharsetUtil.US_ASCII))
2. Alternatively, you could do in.release() here (see the combined sketch below).
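Putting the two notes together, channelRead() could be written as follows (a sketch, not the guide's own listing):
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf in = (ByteBuf) msg;
    try {
        // Decode the readable bytes as US-ASCII and print them in one call.
        System.out.print(in.toString(io.netty.util.CharsetUtil.US_ASCII));
    } finally {
        in.release(); // equivalent to ReferenceCountUtil.release(msg) here
    }
}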
The full source code of the discard server is located in the io.netty.example.discard package of the distribution.
Writing an Echo Server
To implement the ECHO protocol, which writes the received message back to the client, modify the channelRead() method again:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ctx.write(msg); // (1)
ctx.flush(); // (2)
}
1. A ChannelHandlerContext object provides various operations that enable you to trigger various I/O events and operations. Here, we invoke write(Object) to write the received message back verbatim. Please note that we did not release the received message, unlike in the DISCARD example, because Netty releases it for you when it is written out to the wire.
2. ctx.write(Object) does not make the message written out to the wire. It is buffered internally and then flushed out to the wire by ctx.flush(). Alternatively, you could call ctx.writeAndFlush(msg) for brevity (see the sketch below).
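The same echo handler using the shortcut (a minimal sketch):
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // Write and flush in a single call; Netty releases msg once it is written out.
    ctx.writeAndFlush(msg);
}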
Writing a Time Server
Next, let us implement the TIME protocol: after a connection is established, the server sends a 32-bit integer representing the current time and closes the connection once the message has been sent. The connection-establishment event is handled by overriding the channelActive() method:
package io.netty.example.time;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
public class TimeServerHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelActive(final ChannelHandlerContext ctx) { // (1)
final ByteBuf time = ctx.alloc().buffer(4); // (2) allocate a 4-byte buffer to hold the outgoing message
time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L)); // write the time into the buffer
final ChannelFuture f = ctx.writeAndFlush(time); // (3)
f.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) {
assert f == future;
ctx.close();
}
}); // (4)
}
}
1. As explained, the channelActive() method will be invoked when a connection is established and ready to generate traffic.
2. To send a new message, we need to allocate a new buffer which will contain the message. We are going to write a 32-bit integer, and therefore we need a ByteBuf whose capacity is at least 4 bytes. Get the current ByteBufAllocator via ChannelHandlerContext.alloc() and allocate a new buffer.
3. As usual, we write the constructed message.
But wait, where's the flip? Didn't we used to call java.nio.ByteBuffer.flip() before sending a message in NIO? ByteBuf does not have such a method because it has two pointers: one for read operations and one for write operations. The writer index increases when you write something to a ByteBuf while the reader index does not change; likewise, the reader index tracks the current read position. The reader index and the writer index represent where the message starts and ends, respectively.
In contrast, an NIO buffer does not provide a clean way to figure out where the message content starts and ends without calling the flip method. You will be in trouble when you forget to flip the buffer because nothing, or incorrect data, will be sent. Such an error does not happen in Netty because we have a different pointer for each operation type. You will find it makes your life much easier as you get used to it -- a life without flipping out!
Another point to note is that the ChannelHandlerContext.write() (and writeAndFlush()) method returns a ChannelFuture. A ChannelFuture represents an I/O operation which has not yet occurred: because all operations in Netty are asynchronous, any requested operation might not have been performed yet. For example, the following code might close the connection even before the message is sent:
Channel ch = ...;
ch.writeAndFlush(message);
ch.close();
Therefore, you need to call the close() method after the ChannelFuture returned by the write() method is complete; the future notifies its listeners when the write operation has been done. Please note that close() also might not close the connection immediately, and it too returns a ChannelFuture.
4. How do we get notified when a write request is finished, then? This is as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here, we created a new anonymous ChannelFutureListener which closes the Channel when the operation is done.
Alternatively, you could simplify the code using the pre-defined listener that closes a connection (a complete channelActive() using it is sketched after this list):
f.addListener(ChannelFutureListener.CLOSE);
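With the pre-defined listener, channelActive() shrinks to the following (a sketch based on the listing above):
@Override
public void channelActive(final ChannelHandlerContext ctx) {
    ByteBuf time = ctx.alloc().buffer(4);
    time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));
    // Close the connection as soon as the write completes.
    ctx.writeAndFlush(time).addListener(ChannelFutureListener.CLOSE);
}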
To test if our time server works as expected, you can use the UNIX rdate command:
$ rdate -o <port> -p <host>
where <port> is the port number you specified in the main() method and <host> is usually localhost.
Writing a Time Client
The TIME protocol requires the client to parse the received bytes to obtain the current time. The main differences between the client and the server are the Bootstrap and Channel implementations that are used.
package io.netty.example.time;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
public class TimeClient {
public static void main(String[] args) throws Exception {
String host = args[0];
int port = Integer.parseInt(args[1]);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap(); // (1)
b.group(workerGroup); // (2)
b.channel(NioSocketChannel.class); // (3)
b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
b.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new TimeClientHandler());
}
});
// Start the client.
ChannelFuture f = b.connect(host, port).sync(); // (5)
// Wait until the connection is closed.
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
}
}
}
1. Bootstrap is similar to ServerBootstrap except that it's for non-server channels such as a client-side or connectionless channel.
2. If you specify only one EventLoopGroup, it will be used both as a boss group and as a worker group; the boss group is not used on the client side, though (a minimal server-side sketch follows this list).
3. Instead of NioServerSocketChannel, NioSocketChannel is used to create a client-side Channel.
4. Note that we do not use childOption() here, unlike we did with ServerBootstrap, because the client-side SocketChannel does not have a parent.
5. We call the connect() method instead of the bind() method.
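As a side note on item 2, a minimal sketch (an assumption, not from the guide) of passing a single EventLoopGroup on the server side:
EventLoopGroup group = new NioEventLoopGroup();
ServerBootstrap b = new ServerBootstrap();
// Passing one group makes it act as both the boss group and the worker group.
b.group(group);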
The client-side ChannelHandler receives the 32-bit integer, converts it into a Date, and then closes the connection:
package io.netty.example.time;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.Date;
public class TimeClientHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf m = (ByteBuf) msg; // (1)
try {
long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
System.out.println(new Date(currentTimeMillis));
ctx.close();
} finally {
m.release();
}
}
}
1. In TCP/IP, Netty reads the data sent from a peer into a ByteBuf.
It looks very simple and does not look any different from the server-side example. However, this handler will sometimes refuse to work, raising an IndexOutOfBoundsException. We discuss why this happens in the next section.
Dealing with a Stream-based Transport
One Small Caveat of Socket Buffer
In a stream-based transport such as TCP/IP, received data is stored in a socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. This means that even if you send two messages as two independent packets, the operating system will not treat them as two messages but as just a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what your remote peer wrote. For example, suppose the TCP/IP stack of an operating system has received three packets: because a stream-based transport carries a sequence of bytes rather than a sequence of packets, the data may be re-split in transit, so the receiver might not get the same three packets the peer sent; it could, for instance, arrive as four fragments.
Because of this general property of a stream-based protocol, there is a high chance of reading the data in such a fragmented form in your application.
Therefore, the receiving side, regardless of whether it is the server or the client, should defragment the received data into one or more meaningful frames that can be easily understood by the application logic; in the example above, the received bytes have to be reassembled back into the original three messages before they can be parsed correctly.
The First Solution: create a fixed-size cumulative buffer on the receiving side; once the buffer is full, a complete message has been received and can be parsed.
Now let us get back to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is not likely to be fragmented often. However, the problem is that it can be fragmented, and the possibility of fragmentation will increase as the traffic increases.
The simplistic solution is to create an internal cumulative buffer and wait until all 4 bytes are received into the internal buffer. The following is the modified TimeClientHandler implementation that fixes the problem:
package io.netty.example.time;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.Date;
public class TimeClientHandler extends ChannelInboundHandlerAdapter {
private ByteBuf buf;
@Override
public void handlerAdded(ChannelHandlerContext ctx) {
buf = ctx.alloc().buffer(4); // (1)
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) {
buf.release(); // (1)
buf = null;
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
ByteBuf m = (ByteBuf) msg;
buf.writeBytes(m); // (2)
m.release();
if (buf.readableBytes() >= 4) { // (3)
long currentTimeMillis = (buf.readInt() - 2208988800L) * 1000L;
System.out.println(new Date(currentTimeMillis));
ctx.close();
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
ctx.close();
}
}
1. A ChannelHandler has two life-cycle listener methods: handlerAdded() and handlerRemoved(). You can perform arbitrary (de)initialization tasks in them as long as they do not block for a long time.
2. First, all received data should be cumulated into buf.
3. Then the handler must check whether buf has enough data, 4 bytes in this example, and proceed to the actual business logic. Otherwise, Netty will call the channelRead() method again when more data arrives, and eventually all 4 bytes will be cumulated.
The Second Solution: split the processing across multiple handlers; many protocol implementations are built this way.
Although the first solution has resolved the problem with the TIME client, the modified handler does not look that clean. Imagine a more complicated protocol which is composed of multiple fields, such as a variable-length field: your ChannelInboundHandler implementation will become unmaintainable very quickly.
As you may have noticed, you can add more than one ChannelHandler to a ChannelPipeline, and therefore you can split one monolithic ChannelHandler into multiple modular ones to reduce the complexity of your application. For example, you could split TimeClientHandler into two handlers:
- TimeDecoder, which deals with the fragmentation issue, and
- the initial simple version of TimeClientHandler.
Fortunately, Netty provides an extensible class which helps you write the first one out of the box:
package io.netty.example.time;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;
public class TimeDecoder extends ByteToMessageDecoder { // (1)
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { // (2)
if (in.readableBytes() < 4) {
return; // (3)
}
out.add(in.readBytes(4)); // (4)
}
}
1. ByteToMessageDecoder is an implementation of ChannelInboundHandler which makes it easy to deal with the fragmentation issue.
2. Whenever new data is received, ByteToMessageDecoder calls the decode() method, passing in an internally maintained cumulative buffer.
3. decode() can decide to add nothing to out when there is not enough data in the cumulative buffer; ByteToMessageDecoder will call decode() again when more data arrives.
4. If decode() adds an object to out, it means the decoder decoded a message successfully, and ByteToMessageDecoder will discard the read part of the cumulative buffer. Remember that you don't need to decode multiple messages yourself: ByteToMessageDecoder keeps calling decode() until it adds nothing to out.
Now that we have another handler to insert into the ChannelPipeline, we should modify the ChannelInitializer implementation in the TimeClient:
b.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ch.pipeline().addLast(new TimeDecoder(), new TimeClientHandler());
}
});
If you are an adventurous person, you might want to try the ReplayingDecoder, which simplifies the decoder even more. You will need to consult the API reference for more information, though.
public class TimeDecoder extends ReplayingDecoder<Void> {
@Override
protected void decode(
ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
out.add(in.readBytes(4));
}
}
Additionally, Netty provides out-of-the-box decoders which enable you to implement most protocols very easily and help you avoid ending up with a monolithic, unmaintainable handler implementation. Please refer to the following packages for more detailed examples:
- io.netty.example.factorial for a binary protocol, and
- io.netty.example.telnet for a text line-based protocol (a short pipeline sketch follows).
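For instance, a hedged sketch of a pipeline for a text line-based protocol, along the lines of io.netty.example.telnet; MyLineHandler is a hypothetical application handler:
ch.pipeline().addLast(
        new LineBasedFrameDecoder(8192),       // split the byte stream on line endings
        new StringDecoder(CharsetUtil.UTF_8),  // turn each frame into a String
        new MyLineHandler());                  // hypothetical handler that consumes lines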
Speaking in POJO instead of ByteBuf
All the examples we have reviewed so far used a ByteBuf as the primary data structure of a protocol message. In this section, we will improve the TIME protocol client and server example to use a POJO instead of a ByteBuf.
The advantage of using a POJO in your ChannelHandlers is obvious: your handler becomes more maintainable and reusable by separating the code which extracts information from ByteBuf out of the handler. In the TIME client and server examples, we read only one 32-bit integer, and it is not a major issue to use ByteBuf directly. However, you will find it necessary to make the separation as you implement a real-world protocol.
First, let us define a new type called UnixTime.
package io.netty.example.time;
import java.util.Date;
public class UnixTime {
private final int value;
public UnixTime() {
this((int) (System.currentTimeMillis() / 1000L + 2208988800L));
}
public UnixTime(int value) {
this.value = value;
}
public int value() {
return value;
}
@Override
public String toString() {
return new Date((value() - 2208988800L) * 1000L).toString();
}
}
We can now revise the TimeDecoder to produce a UnixTime instead of a ByteBuf.
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
if (in.readableBytes() < 4) {
return;
}
out.add(new UnixTime(in.readInt()));
}
With the updated decoder, the TimeClientHandler does not use ByteBuf anymore:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
UnixTime m = (UnixTime) msg;
System.out.println(m);
ctx.close();
}
Much simpler and elegant, right? The same technique can be applied on the server side. Let us update the TimeServerHandler first this time:
@Override
public void channelActive(ChannelHandlerContext ctx) {
ChannelFuture f = ctx.writeAndFlush(new UnixTime());
f.addListener(ChannelFutureListener.CLOSE);
}
Now, the only missing piece is an encoder, which is an implementation of ChannelOutboundHandler that translates a UnixTime back into a ByteBuf. It's much simpler than writing a decoder because there's no need to deal with packet fragmentation and assembly when encoding a message.
package io.netty.example.time;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
public class TimeEncoder extends ChannelOutboundHandlerAdapter {
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
UnixTime m = (UnixTime) msg;
ByteBuf encoded = ctx.alloc().buffer(4);
encoded.writeInt(m.value());
ctx.write(encoded, promise); // (1)
}
}
1. There are quite a few important things to notice in this single line.
First, we pass the original ChannelPromise as-is so that Netty marks it as success or failure when the encoded data is actually written out to the wire.
Second, we did not call ctx.flush(). There is a separate handler method, void flush(ChannelHandlerContext ctx), which is purposed to override the flush() operation (a minimal sketch follows below).
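A minimal sketch of overriding the flush operation in an outbound handler (an assumption about how you might use it, not part of the example):
@Override
public void flush(ChannelHandlerContext ctx) {
    // A place to coalesce or count writes before they actually hit the wire.
    ctx.flush();
}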
To simplify even further, you can make use of MessageToByteEncoder
:
public class TimeEncoder extends MessageToByteEncoder<UnixTime> {
@Override
protected void encode(ChannelHandlerContext ctx, UnixTime msg, ByteBuf out) {
out.writeInt(msg.value());
}
}
The last task left is to insert a TimeEncoder into the ChannelPipeline on the server side, and it is left as a trivial exercise.
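One possible solution to that exercise (a sketch, not the guide's own listing) is to change the server's initChannel() to:
ch.pipeline().addLast(new TimeEncoder(), new TimeServerHandler());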
Shutting Down Your Application
Shutting down a Netty application is usually as simple as shutting down all the EventLoopGroups you created via shutdownGracefully(). It returns a Future that notifies you when the EventLoopGroup has been terminated completely and all Channels that belong to the group have been closed.
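For example, a minimal sketch of waiting for the shutdown to finish, assuming the two groups from the DiscardServer example:
// shutdownGracefully() returns a Future; sync on it to block until termination.
bossGroup.shutdownGracefully().syncUninterruptibly();
workerGroup.shutdownGracefully().syncUninterruptibly();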
Summary
In this chapter, we had a quick tour of Netty with a demonstration on how to write a fully working network application on top of Netty.
There is more detailed information about Netty in the upcoming chapters. We also encourage you to review the Netty examples in the io.netty.example package.