Nginx + Tomcat Cluster Deployment

To get better performance, we often deploy Tomcat as a cluster. The walkthrough below builds a Tomcat cluster behind an Nginx reverse proxy and uses the nginx-upstream-jvm-route module to keep sessions sticky.

  Environment:

  server1 runs nginx + tomcat01

  server2 runs only tomcat02

  server1 IP address: 192.168.1.88

  server2 IP address: 192.168.1.89

 

  Installation steps:

  1) Install and configure nginx + nginx_upstream_jvm_route on server1

  shell $> wget -c http://sysoev.ru/nginx/nginx-*.tar.gz

  shell $> svn checkout http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/ nginx-upstream-jvm-route-read-only

  shell $> tar zxvf nginx-*

  shell $> cd nginx-*

  shell $> patch -p0 < ../nginx-upstream-jvm-route-read-only/jvm_route.patch

  shell $> useradd www

  shell $> ./configure --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --add-module=/root/nginx-upstream-jvm-route-read-only

  shell $> make

  shell $> make install
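
  After make install, a quick way to confirm that the jvm_route module was actually compiled in (assuming the /usr/local/nginx prefix used above) is to print the binary's configure arguments and look for the --add-module entry:

  shell $> /usr/local/nginx/sbin/nginx -V 2>&1 | grep -o 'add-module=[^ ]*'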

 

  2) Install Tomcat and Java on both machines

  Edit Tomcat's server.xml. In the Tomcat configuration file on each server, find:

  <Engine name="Catalina" defaultHost="localhost" >

  and change it, respectively, to:

  Tomcat01:

  <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">

  Tomcat02:

  <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm2">

  Then start both Tomcat instances.
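
  A minimal sketch for starting each instance, assuming Tomcat is unpacked under /usr/local/tomcat (adjust the path to your installation):

  shell $> cd /usr/local/tomcat/bin

  shell $> ./startup.sh

  shell $> tail -f ../logs/catalina.out    # wait for "Server startup in ... ms"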

 

  3) Configure nginx

  shell $> cd /usr/local/nginx/conf

  shell $> mv nginx.conf nginx.bak

  shell $> vi nginx.conf

  Example configuration:

 

user  www www;
worker_processes  4;

error_log  logs/nginx_error.log crit;
pid        /usr/local/nginx/nginx.pid;

# Maximum number of file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;

events {
    use epoll;
    worker_connections 2048;
}

http {
    upstream backend {
        server 192.168.1.88:8080 srun_id=jvm1;
        server 192.168.1.89:8080 srun_id=jvm2;
        jvm_route $cookie_JSESSIONID|sessionid reverse;
    }

    include       mime.types;
    default_type  application/octet-stream;

    #charset gb2312;
    charset UTF-8;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 20m;
    limit_rate 1024k;

    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;
    keepalive_timeout 60;

    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    gzip on;
    #gzip_min_length 1k;
    gzip_buffers      4 16k;
    gzip_http_version 1.0;
    gzip_comp_level   2;
    gzip_types        text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    #limit_zone crawler $binary_remote_addr 10m;

    # log_format is only allowed in the http context, so it is defined here
    # rather than inside the server block.
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    server {
        listen      80;
        server_name 192.168.1.88;

        ssi on;
        ssi_silent_errors on;
        ssi_types text/shtml;

        index index.html index.htm index.jsp;
        root  /var/www;

        #location ~ .*\.jsp$
        location /app/ {
            proxy_pass http://backend;
            proxy_redirect    off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 30d;
        }

        location ~ .*\.(js|css)?$ {
            expires 1h;
        }

        location /Nginxstatus {
            stub_status on;
            access_log  off;
        }

        # access_log off;
    }
}

 

  4) Test

  Run nginx -t to check that the configuration is valid.

  To take one backend out of rotation without downtime (for example, to redeploy it), comment out its server line in the upstream block:

  server 192.168.1.88:8080 srun_id=jvm1;

  #server 192.168.1.89:8080 srun_id=jvm2;

  Then run nginx -s reload to apply the change without restarting nginx.
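
  To check that sticky sessions actually work, request a page that creates a session through nginx and inspect the JSESSIONID cookie that comes back; with the jvmRoute values configured above it should end in .jvm1 or .jvm2, and the jvm_route directive routes every later request carrying that cookie to the matching backend. A rough check, assuming an application under /app/ that creates a session (for example a JSP page):

  shell $> curl -is http://192.168.1.88/app/ | grep -i 'Set-Cookie: JSESSIONID'

  Running this several times without sending a cookie should show sessions issued by both backends (some suffixed .jvm1, some .jvm2), while replaying a returned cookie should keep hitting the backend named in its suffix.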
