nginx + Tomcat optimization and performance monitoring
A few days ago I set up the server with nginx + Tomcat 6. It worked fine at first, but as the number of users grew the site became slower and slower. I first suspected the application code, so I downloaded JRockit to monitor the JVM, but found no memory leak or similar problem. I then enabled monitoring for both nginx and Tomcat and found the culprit: Tomcat's concurrency limit was 200 threads, and once that ceiling was reached the site stopped responding. Searching online, many people said Tomcat 6 performs worse than Tomcat 7, especially under concurrency, so I switched to Tomcat 7. It hit exactly the same ceiling, so the thread limit itself had to be raised. After raising it, performance improved a little but was still not satisfactory. More searching revealed that Tomcat's default HTTP connector uses BIO (blocking I/O), which dedicates a new thread to every request, so no wonder the pool filled up so quickly. After switching the connector from BIO to NIO (non-blocking I/O), things got noticeably faster. The concrete steps follow.
I. For nginx + Tomcat performance monitoring, see http://www.cnblogs.com/wanghaosoft/archive/2013/02/04/2892099.html
II. For monitoring Java memory with JRockit, see http://www.linuxidc.com/Linux/2011-04/34615.htm
III. nginx + Tomcat optimization
1) nginx optimization
Only a simple nginx optimization is covered here: let nginx serve static content (HTML, images, CSS, JS and other non-dynamic files) and hand JSP requests over to Tomcat. This takes load off Tomcat, and serving static files is nginx's strength rather than Tomcat's.
Add the following to nginx.conf:
location ~ .*\.(gif|jpg|jpeg|png|bmp|ico)$ {
root /www/; # root path where the image files live
expires 30d;
}
location ~ .*\.(js|css)$ {
root /www/; # root path where the js/css files live
expires 10h;
}
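A quick way to confirm nginx is serving these files itself rather than proxying them to Tomcat (a sketch; the host name and image path are placeholders, substitute a file that actually exists under /www):
curl -I http://your-server/images/logo.png
# With "expires 30d" in effect the response should carry caching headers, e.g.
# Expires: <now + 30 days>
# Cache-Control: max-age=2592000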
You can also set worker_processes to a multiple of the number of CPU cores, for example cores * 2 (here worker_processes 2;). The complete nginx configuration file is attached at the end for reference.
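To choose the value, check how many cores the machine has first (a minimal sketch; newer nginx releases also accept worker_processes auto, check your version):
grep -c ^processor /proc/cpuinfo   # number of CPU cores on Linux
# e.g. on a 2-core machine: worker_processes 2;  (or 4 for cores * 2)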
2) Tomcat optimization
1. Raise Tomcat's thread limit and switch the default connector from BIO to NIO
Taking Tomcat 7 as the example, change the Connector element in tomcat/conf/server.xml to:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="1000" minSpareThreads="25" maxSpareThreads="250"
enableLookups="false" redirectPort="8443" acceptCount="300" connectionTimeout="20000" disableUploadTimeout="true"/>
2. A quick look at Tomcat's Connector variants
Performance comparison of four HTTP-based Connector configurations:
<Connector port="8081" protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000" redirectPort="8443"/>
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000"
redirectPort="8443"/>
<Connector executor="tomcatThreadPool"
port="8081" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<Connector executor="tomcatThreadPool"
port="8081" protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000"
redirectPort="8443" />
For convenience, call these four Connectors, in order, NIO, HTTP, POOL and NIOP (the shared Executor that POOL and NIOP reference is sketched right below).
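The POOL and NIOP entries point at a thread pool named tomcatThreadPool that is not shown above. It has to be declared as an <Executor> inside the <Service> element of server.xml, before the Connectors; a minimal sketch, with illustrative rather than tuned values:
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="200" minSpareThreads="25"/>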
To keep other factors from skewing the results, the test page is a trivial JSP that only prints Hello World; assume its URL is http://tomcat1/test.jsp.
Each Connector was tested in turn, with the client running ab on a separate machine: ab -c 900 -n 2000 http://tomcat1/test.jsp. The results are shown below (average requests handled per second):
NIO    HTTP   POOL   NIOP
281    65     208    365
666    66     110    398
692    65     66     263
256    63     94     459
440    67     145    363
A few things stand out from these five runs. HTTP is the most stable but also the slowest, and it is Tomcat's default configuration. NIO fluctuates a lot but never dropped below 250 requests per second. NIOP adds a thread pool on top of NIO; the extra bookkeeping means it is not necessarily faster than plain NIO. POOL also swings widely and, like HTTP, stalled from time to time during the test.
Because the Linux kernel's default limit on open file descriptors is 1024, the concurrency for this test was kept at 900.
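If you want to test (or run a server) at higher concurrency, raise the per-process file-descriptor limit first; a sketch for Linux, where 65535 is just an example value:
ulimit -n          # show the current limit
ulimit -n 65535    # raise it for the current session, before starting the service
# To make it permanent, add to /etc/security/limits.conf:
# *  soft  nofile  65535
# *  hard  nofile  65535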
In a real site the gap may be much smaller, because other factors such as database performance usually dominate, but the comparison is still a useful reference when deploying an application.
Finally, here is my nginx configuration file. It is provided only as a reference; if you have suggestions, please leave a comment.
For an explanation of the individual nginx directives, see http://www.cnblogs.com/wanghaosoft/archive/2013/01/16/2863265.html
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
use epoll;
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
#new 1 start---------------
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32K;
client_max_body_size 8m;
#new 1 end -----------
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#tomcat add start<<
tcp_nodelay on;
client_body_buffer_size 512k;
proxy_connect_timeout 5;
proxy_read_timeout 60;
proxy_send_timeout 5;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 128k;
#tomcat add end>>
gzip on;
#news2 start --
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
#tomcat add start<<
upstream tomcat_server {
server 127.0.0.1:8080;
#server 127.0.0.1:9080;
}
#tomcat add end>>
#proxy_temp_path /www/cache/images_temp;
#proxy_cache_path /www/cache/images_cache levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=10g;
server {
listen 80;
server_name _;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root /www;
index index.html index.htm index.jsp default.jsp index.do default.do index.php default.php;
#nagios
#auth_basic "nagios admin";
#auth_basic_user_file /usr/local/nagios/etc/nagiosAdmin.net;
#cache set start
#proxy_cache cache_one;
#proxy_cache_valid 200 304 12h;
#proxy_cache_key $host$uri$is_args$args;
#proxy_set_header Host $host;
#proxy_set_header X-Forwarded-For $remote_addr;
#proxy_pass http://127.0.0.1:8080;
#log_format cache '***$time_local ' '***$upstream_cache_status ' '***Cache-Control: $upstream_http_cache_control ' '***Expires: $upstream_http_expires ' '***"$request" ($status) ' '***"$http_user_agent" ';
#access_log /usr/local/nginx-0.8.32/logs/cache.log cache;
#expires 1d;
}
#tomcat add start<<
if (-d $request_filename)
{
rewrite ^/(.*)([^/])$ http://$host/$1$2/ permanent;
}
location ~ \.(jsp|jspx|do)$ {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass http://tomcat_server;
}
#tomcat add end>>
error_page 404 /404.html;
location = /404.html {
root /www;
}
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /www;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
# location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
root /www;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /www$fastcgi_script_name;
include fastcgi_params;
#nagios
#auth_basic "nagios admin";
#auth_basic_user_file /usr/local/nagios/etc/nagiosAdmin.net;
}
# location ~* /dwr/?
# {
# proxy_pass http://localhost:8080;
# }
location ~ .*\.(gif|jpg|jpeg|png|bmp|ico)$ {
root /www/;
expires 30d;
}
location ~ .*\.(js|css)$ {
root /www/;
expires 10h;
}
location /status {
stub_status on;
access_log off;
}
location /nagios {
alias /www/nagios;
auth_basic "nagios admin";
auth_basic_user_file /usr/local/nagios/etc/nagiosAdmin.net;
}
location /cgi-bin {
alias /usr/local/nagios/sbin;
}
location ~ .*\.cgi$ {
root /usr/local/nagios/sbin;
rewrite ^/nagios/cgi-bin/(.*)\.cgi /$1.cgi break;
fastcgi_pass unix:/var/run/fcgiwrap.socket;
fastcgi_index index.cgi;
fastcgi_param SCRIPT_FILENAME /usr/local/nagios/sbin$fastcgi_script_name;
include fastcgi_params;
auth_basic "nagios admin";
auth_basic_user_file /usr/local/nagios/etc/nagiosAdmin.net;
}
location ~ .*\.pl$ {
fastcgi_pass unix:/var/run/fcgiwrap.socket;
fastcgi_index index.pl;
fastcgi_param SCRIPT_FILENAME /usr/local/nagios/sbin$fastcgi_script_name;
include fastcgi_params;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Load config files from the /etc/nginx/conf.d directory
#news 2 -----------
include /etc/nginx/conf.d/*.conf;
# include /etc/nginx/sites-enabled/*;
}
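After editing the configuration, validate it and reload nginx gracefully instead of restarting it (binary and log paths may differ depending on how nginx was installed):
nginx -t          # check the configuration for syntax errors
nginx -s reload   # reload worker processes without dropping existing connections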