HTTP long connections and short connections
HTTP is stateless. The biggest difference between HTTP/1.1 and HTTP/1.0 is that HTTP/1.1 adds support for persistent connections (apparently later HTTP/1.0 implementations can also explicitly specify keep-alive), but the protocol is still stateless; in other words, the connection cannot be relied upon. If the browser or the server adds this line to its headers:

Connection: keep-alive

then the TCP connection stays open after the response is sent, and the browser can continue to send requests over the same connection. Keeping the connection open saves the time needed to set up a new connection for every request and also saves bandwidth. Persistent connections require support on both the client and the server.

If the web server sees the value "Keep-Alive" here, or sees that the request uses HTTP 1.1 (HTTP 1.1 uses persistent connections by default), it can take advantage of persistent connections; when a page contains many elements (applets, images, and so on), this noticeably reduces download time. To make this work, the web server needs to send a Content-Length header (the length of the response body) in the HTTP headers it returns to the client. The simplest way to do that is to write the content into a ByteArrayOutputStream first and compute its size before actually writing it out.

Whichever of the client browser (Internet Explorer) or the web server has the lower keep-alive timeout, that value is the limiting factor. For example, if the client's timeout is two minutes and the web server's is one minute, the effective maximum timeout is one minute. Either the client or the server can be the limiting factor.

Add to the header: Connection: keep-alive

HTTP Keep-Alive seems to be massively misunderstood. Here's a short description of how it works, under both 1.0 and 1.1.

HTTP/1.0
Under HTTP 1.0, there is no official specification for how keepalive operates. It was, in essence, tacked on to an existing protocol. If the browser supports keep-alive, it adds an additional header to the request: Connection: Keep-Alive. Then, when the server receives this request and generates a response, it also adds a header to the response: Connection: Keep-Alive. Following this, the connection is NOT dropped, but is instead kept open. When the client sends another request, it uses the same connection. This will continue until either the client or the server decides that the conversation is over, and one of them drops the connection.

HTTP/1.1
Under HTTP 1.1, the official keepalive method is different. All connections are kept alive, unless stated otherwise with the following header: Connection: close. The Connection: Keep-Alive header no longer has any meaning because of this. Additionally, an optional Keep-Alive: header is described, but is so underspecified as to be meaningless. Avoid it.

Not reliable
HTTP is a stateless protocol - this means that every request is independent of every other. Keep-alive doesn't change that. Additionally, there is no guarantee that the client or the server will keep the connection open. Even in 1.1, all that is promised is that you will probably get a notice that the connection is being closed. So keepalive is something you should not write your application to rely upon.

Keep-Alive and POST
The HTTP 1.1 spec states that following the body of a POST, there are to be no additional characters. It also states that "certain" browsers may not follow this spec, putting a CRLF after the body of the POST. Mmm-hmm. As near as I can tell, most browsers follow a POSTed body with a CRLF. There are two ways of dealing with this: disallow keepalive in the context of a POST request, or ignore CRLF on a line by itself. Most servers deal with this in the latter way, but there's no way to know how a server will handle it without testing.

Java usage: the client can use Apache commons-httpclient to execute methods. Commonly used servers such as Apache, Resin, and Tomcat all have configuration options for keep-alive support. In Tomcat this is configured on the Connector (the maxKeepAliveRequests attribute): "The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100."
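As an illustration of the ByteArrayOutputStream approach described above, here is a minimal servlet sketch. The class name BufferedPageServlet and the page content are made up for the example; only the buffer-then-set-Content-Length pattern is the point:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that buffers its output so the exact body size is
// known before anything is sent, allowing Content-Length to be set and the
// connection to be kept alive.
public class BufferedPageServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(buffer, "UTF-8");
        w.write("<html><body>Hello, keep-alive</body></html>");
        w.flush();

        res.setContentType("text/html; charset=UTF-8");
        // With Content-Length set, the client knows where this response ends
        // and can reuse the same connection for the next request.
        res.setContentLength(buffer.size());
        buffer.writeTo(res.getOutputStream());
    }
}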
Explanation 1: A long connection means that once the socket connection is established it is kept open whether or not it is being used; the downside is weaker security.
Explanation 2: A long connection means that in TCP-based communication the connection is kept open the whole time, regardless of whether data is currently being sent or received.
Explanation 3: The terms long connection and short connection seem to appear mostly in China Mobile's CMPP protocol; I have not seen them used much elsewhere.
Explanation 4: A short connection is, for example, HTTP's connect, request, close cycle, which takes little time; the server can close the connection if it receives no request for a while.

Recently I have been looking at "server push" technology. In a B/S architecture, some kind of magic lets the client get the latest information from the server (for example stock prices) without polling, which can save a lot of bandwidth. Traditional polling puts a very heavy load on the server and wastes a great deal of bandwidth. Switching to Ajax polling reduces the bandwidth load (because the server no longer returns a complete page), but it does not noticeably reduce the load on the server.
Push technology can improve this situation. However, because of the nature of HTTP connections (short-lived and always initiated by the client), push is relatively difficult to implement; a common approach is to implement push by extending the lifetime of the HTTP connection.
The next question, naturally, is how to extend the lifetime of the HTTP connection. The simplest approach is the endless-loop method:
[Servlet code snippet]
public void doGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    PrintWriter out = res.getWriter();
    // ...
    // render the normal page output here
    // ...
    out.flush();
    while (true) {
        out.print("updated content");   // push the newest data to the client
        out.flush();
        try {
            Thread.sleep(3000);         // wait 3 seconds before the next update
        } catch (InterruptedException e) {
            break;                      // stop pushing if the thread is interrupted
        }
    }
}
If the observer pattern is used instead of a fixed sleep interval, performance can be improved further, as sketched below.
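A rough sketch of what the observer-style variant could look like, assuming a hypothetical PriceFeed object that the servlet thread waits on and that calls notifyAll() whenever a new quote arrives; the PriceFeed class and its methods are invented purely for illustration:

public void doGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    PrintWriter out = res.getWriter();
    // ... render the normal page output ...
    out.flush();
    PriceFeed feed = PriceFeed.getInstance();   // hypothetical observable source
    try {
        while (true) {
            String quote;
            synchronized (feed) {
                feed.wait();                    // block until the feed calls notifyAll()
                quote = feed.latestQuote();     // hypothetical accessor for the new data
            }
            out.print(quote);
            out.flush();
        }
    } catch (InterruptedException e) {
        // stop pushing when the thread is interrupted
    }
}

The thread now sleeps until there really is something to send, instead of waking up every three seconds whether or not anything changed.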
However, the drawback of this kind of approach is that once a client has requested this servlet, the web server starts a thread to run the servlet code, and because the servlet never finishes, that thread can never be released. In other words, it is one thread per client, and as the number of clients grows the server still carries a very heavy load.
Fixing this at the root is fairly complicated. The current trend is to work inside the web server itself: rewrite the request/response implementation with NIO (the java.nio package introduced in JDK 1.4) and then use a thread pool to improve the server's resource utilization. Servers that currently support this non-official-J2EE technique include Glassfish and Jetty (the latter I have only heard about, not used).
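To make the idea concrete, here is a minimal, self-contained sketch of a selector-based accept/read loop using java.nio. It is not how Glassfish or Jetty is actually implemented; it only illustrates how a single thread can watch many connections, so that a thread no longer has to be tied up per client:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(8080));
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                        // one thread waits on all connections
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {             // new client: register it, no new thread
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {        // data ready: a real server would hand this to a worker pool
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) == -1) {
                        client.close();
                    }
                }
            }
        }
    }
}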
There are also some frameworks/tools that can help you implement push, for example pushlets, but I have not studied them in depth.
Over the next couple of days I plan to look into Glassfish's support for Comet (the name someone gave to server push technology).