Fixing the oversized-POST problem when using Netty as a web server

Original article:

http://my.oschina.net/momohuang/blog/114552

 

First, some background: I originally planned to write my own socket code and stand up a simple web service to receive data. After writing it, I found there were all kinds of cases I had not considered, so it was likely to break, and it was far too much hassle. So I used Netty to run the web service instead. I would also say Netty is about the simplest way to build a web service; if you have other recommendations, please leave a comment.

 

1. The server

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
import org.jboss.netty.handler.codec.http.HttpRequestDecoder;
import org.jboss.netty.handler.codec.http.HttpResponseEncoder;

// AdminServerHandler and ServiceLocator are the application's own classes.
public class AdminServer {
    protected static final Log log = LogFactory.getLog(AdminServer.class);

    public static void main(String[] args) {
        log.info("start app");
        start(8088);
    }

    public static void start(int port) {
        // Configure the server - cached thread pools for the boss and worker threads.
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));
        // Set the pipeline factory.
        bootstrap.setPipelineFactory(new ServerPipelineFactory());
        // Bind the port.
        bootstrap.bind(new InetSocketAddress(port));
        System.out.println("admin start on " + port);
        ServiceLocator.initServiceLocator();
    }

    private static class ServerPipelineFactory implements ChannelPipelineFactory {
        public ChannelPipeline getPipeline() throws Exception {
            // Create a default pipeline implementation.
            ChannelPipeline pipeline = Channels.pipeline();
            pipeline.addLast("decoder", new HttpRequestDecoder());
            pipeline.addLast("encoder", new HttpResponseEncoder());
            // Business handler for the HTTP requests.
            pipeline.addLast("handler", new AdminServerHandler());
            return pipeline;
        }
    }
}

 

 
 

This starts the service and binds it to port 8088.
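To reproduce the problem described next, a plain-JDK client can POST a body larger than 50 KB to the server above. This is a minimal sketch (the URL and port assume the server from section 1 is running locally; the class name is made up here, and the network call is wrapped so the sketch still runs without a server):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;

public class LargePostClient {
    public static void main(String[] args) throws Exception {
        // A 60 KB body - over the ~50 KB threshold where the server
        // starts seeing empty POST content.
        byte[] body = new byte[60 * 1024];
        Arrays.fill(body, (byte) 'a');
        System.out.println("body bytes: " + body.length);

        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://localhost:8088/").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            System.out.println("HTTP " + conn.getResponseCode());
        } catch (java.io.IOException ex) {
            // No server listening on port 8088 - fine for a dry run.
            System.out.println("server not reachable: " + ex.getMessage());
        }
    }
}
```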

 

2. When a client POSTs data to the server, if the body exceeds roughly 50 KB, the POST content the server receives is empty. This is because the body exceeds the server's default maximum POST length: the decoder delivers the oversized body as separate chunks rather than as part of the request itself.

The HTTP protocol itself places no limit on POST length, but limits are usually imposed at both the system level and the server level, which also helps with security.

 

3. The fix in Netty

    private static class ServerPipelineFactory implements ChannelPipelineFactory {
        public ChannelPipeline getPipeline() throws Exception {
            // Create a default pipeline implementation.
            ChannelPipeline pipeline = Channels.pipeline();
//          pipeline.addFirst("frameDecoder", new LengthFieldBasedFrameDecoder(100000000, 0, 4, 0, 4));
            pipeline.addLast("decoder", new HttpRequestDecoder());
            pipeline.addLast("encoder", new HttpResponseEncoder());
//          pipeline.addLast("streamer", new ChunkedWriteHandler());
            // Merge the chunks of a request; the argument is the maximum
            // size in bytes of the aggregated content.
            pipeline.addLast("aggregator", new HttpChunkAggregator(65536));
            // Business handler for the HTTP requests.
            pipeline.addLast("handler", new AdminServerHandler());
            return pipeline;
        }
    }

 

 

 
 

After adding

pipeline.addLast("aggregator", new HttpChunkAggregator(65536));

the aggregator merges the incoming chunks back into a single request, with the maximum set here to 65536, so the server can accept POST content of up to 65536 bytes.

The downside is that this size is hard to tune: set it too large and memory is wasted. In addition, blank characters can appear at the end of the received string when the POST content is shorter than the ChannelBuffer array inside the chunk, because the buffer is padded out to its full size.
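What the aggregator does with that limit can be sketched in plain JDK Java. The class and method names below are hypothetical; the size check mirrors the one HttpChunkAggregator performs before appending each chunk:

```java
import java.io.ByteArrayOutputStream;

public class AggregatorSketch {
    static final int MAX_CONTENT_LENGTH = 65536;
    private final ByteArrayOutputStream content = new ByteArrayOutputStream();

    void offerChunk(byte[] chunk) {
        // Same overflow check as HttpChunkAggregator: reject the chunk if
        // appending it would push the merged content past the limit.
        if (content.size() > MAX_CONTENT_LENGTH - chunk.length) {
            throw new RuntimeException("HTTP content length exceeded "
                    + MAX_CONTENT_LENGTH + " bytes.");
        }
        content.write(chunk, 0, chunk.length);
    }

    public static void main(String[] args) {
        AggregatorSketch agg = new AggregatorSketch();
        byte[] chunk = new byte[8192];
        // Eight 8 KB chunks make exactly 65536 bytes - still accepted.
        for (int i = 0; i < 8; i++) {
            agg.offerChunk(chunk);
        }
        System.out.println("aggregated: " + agg.content.size());
        try {
            agg.offerChunk(chunk); // one more chunk exceeds the limit
        } catch (RuntimeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```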

4. Reading the chunks yourself

To receive the message chunk by chunk instead, leave the HttpChunkAggregator out of the pipeline: the HttpRequestDecoder then passes each HttpChunk to the handler as it arrives. (The commented-out

pipeline.addLast("streamer", new ChunkedWriteHandler());

line is an outbound handler used for writing chunked responses; it does not affect how the request body is read.) The chunks can then be merged in the handler:

    // Fields assumed on the enclosing handler:
    //   private HttpMessage currentMessage;   // the message being assembled
    //   private final int maxContentLength;   // upper bound on the merged body
    public boolean excuteChunk(ChannelHandlerContext ctx, MessageEvent e)
            throws TooLongFrameException {
        if (e.getMessage() instanceof HttpMessage) {
            HttpMessage m = (HttpMessage) e.getMessage();
            if (m.isChunked()) {
                // A chunked message - remove 'Transfer-Encoding' header,
                // initialize the cumulative buffer, and wait for incoming
                // chunks.
                List<String> encodings =
                        m.getHeaders(HttpHeaders.Names.TRANSFER_ENCODING);
                encodings.remove(HttpHeaders.Values.CHUNKED);
                if (encodings.isEmpty()) {
                    m.removeHeader(HttpHeaders.Names.TRANSFER_ENCODING);
                }
                m.setContent(ChannelBuffers.dynamicBuffer(
                        e.getChannel().getConfig().getBufferFactory()));
                this.currentMessage = m;
            } else {
                // Not a chunked message - pass through.
                this.currentMessage = null;
            }
            return false;
        } else if (e.getMessage() instanceof HttpChunk) {
            // Sanity check.
            if (currentMessage == null) {
                throw new IllegalStateException("received "
                        + HttpChunk.class.getSimpleName() + " without "
                        + HttpMessage.class.getSimpleName());
            }

            // Merge the received chunk into the content of the current message.
            HttpChunk chunk = (HttpChunk) e.getMessage();
            ChannelBuffer content = currentMessage.getContent();

            if (content.readableBytes() > maxContentLength
                    - chunk.getContent().readableBytes()) {
                throw new TooLongFrameException("HTTP content length exceeded "
                        + maxContentLength + " bytes.");
            }

            content.writeBytes(chunk.getContent());
            if (chunk.isLast()) {
                // Set Content-Length *before* clearing currentMessage; the
                // reverse order would throw a NullPointerException here.
                currentMessage.setHeader(HttpHeaders.Names.CONTENT_LENGTH,
                        String.valueOf(content.readableBytes()));
                this.currentMessage = null;
                return true;
            }
            // More chunks to come - the message is not complete yet.
            return false;
        }
        return true;
    }
 

In the handler, do the processing yourself: receive the whole POSTed body chunk by chunk, merge the pieces together, and you are done.
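The merge-and-finish logic can be simulated without Netty. This hypothetical plain-JDK sketch shows the ordering that matters on the last chunk - compute Content-Length from the accumulated bytes before clearing the per-message state:

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;

public class ManualChunkReader {
    private ByteArrayOutputStream current;
    private final Map<String, String> headers = new HashMap<>();

    // Returns true once the last chunk has been merged and the body is complete.
    boolean onChunk(byte[] data, boolean last) {
        if (current == null) {
            current = new ByteArrayOutputStream();
        }
        current.write(data, 0, data.length);
        if (last) {
            // Record Content-Length from the merged bytes, then clear state.
            headers.put("Content-Length", String.valueOf(current.size()));
            current = null;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ManualChunkReader r = new ManualChunkReader();
        r.onChunk(new byte[10000], false);           // first chunk - not done yet
        boolean done = r.onChunk(new byte[2345], true); // last chunk - body complete
        System.out.println("complete=" + done
                + " Content-Length=" + r.headers.get("Content-Length"));
    }
}
```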

posted @ 2021-05-25 14:57  牧之丨