
Configuring an Nginx-Tomcat load-balancing cluster

2013-10-03 15:00:01 Source: IT技术网

This is an nginx load-balancing test performed on Windows.

The nginx configuration file is as follows:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    upstream localhost {
        server 127.0.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 127.0.0.1:8081 weight=1 max_fails=2 fail_timeout=30s;
    }

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://localhost;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 500 502 503 504 /50x.html;

        location = /50x.html {
            root html;
        }
    }
}
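For readers unfamiliar with the server parameters in the upstream block, the annotated snippet below simply restates the configuration above with comments explaining nginx's documented behaviour; it adds nothing new:

upstream localhost {
    # weight=1: both backends get an equal share of requests
    # max_fails=2: after 2 failed attempts within fail_timeout, the backend
    #   is treated as unavailable
    # fail_timeout=30s: an unavailable backend is skipped for 30 seconds
    #   before nginx tries it again
    server 127.0.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 127.0.0.1:8081 weight=1 max_fails=2 fail_timeout=30s;
}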

When I ran into the problem below, I searched the web and found that others had hit exactly the same thing. The problem, as described there:

With nginx load balancing in place and both Tomcat instances running normally, access to http://localhost is very fast, and a test program confirms that requests really are being balanced between the two instances. But when we experimentally shut down one of the Tomcat instances (server localhost:8080) and hit http://localhost again, about half of the requests respond quickly while the other half respond extremely slowly, although in the end every request returns the correct result.

Then I brought the stopped Tomcat instance back up, and accessing http://localhost was fast again and load balancing worked normally. Frustrating!

My suspicion was that nginx was still sending roughly half of the requests to the dead Tomcat instance; since the dead instance never responds, nginx then re-dispatches those requests to the other instance.

But that takes far too long: with one instance down, requests to http://localhost sometimes took around 30 seconds to get a response, which was very annoying.
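The re-dispatch behaviour itself comes from nginx's proxy_next_upstream directive, whose default setting already covers this case; the sketch below only makes that default explicit and is not part of the original configuration:

    # Default behaviour, written out explicitly: if an attempt on one upstream
    # server ends in a connection error or a timeout, retry the request on the
    # next server in the upstream group.
    proxy_next_upstream error timeout;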

The fix is to add these parameters:

proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;

That solved the problem. The key parameter is proxy_connect_timeout, the connection timeout: in the final configuration below I set it to 1, meaning that if a connection is not established within 1 second, nginx gives up and sends the request to the other server.
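In context, the relevant part of the location block looks like the sketch below (time values without a unit are seconds; this mirrors the final configuration further down, with comments on what each directive controls):

location / {
    proxy_pass http://localhost;
    # Give up on a backend that does not accept the TCP connection within 1 s,
    # so the request can be retried on the other upstream server.
    proxy_connect_timeout 1;
    # Also abort if sending the request to, or reading the response from,
    # the backend stalls for more than 1 s.
    proxy_send_timeout 1;
    proxy_read_timeout 1;
}

Note that a healthy backend that pauses for more than 1 second while generating a response will also be cut off by proxy_read_timeout, so values this aggressive only suit very fast backends.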

Here is the full nginx configuration, for anyone who needs it later:

#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    upstream localhost {
        #ip_hash;
        server 127.0.0.1:8081;
        server 127.0.0.1:8080;
    }

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://localhost;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_connect_timeout 1;
            proxy_read_timeout 1;
            proxy_send_timeout 1;
        }

        #charset koi8-r;

        #access_log logs/host.access.log main;

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;

        location = /50x.html {
            root html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen 8000;
    #    listen somename:8080;
    #    server_name somename alias another.alias;

    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen 443;
    #    server_name localhost;

    #    ssl on;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;

    #    ssl_session_timeout 5m;

    #    ssl_protocols SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    #    ssl_prefer_server_ciphers on;

    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}
}
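A possible refinement, not part of the configuration above: combine the short proxy_connect_timeout with the max_fails/fail_timeout parameters used in the first configuration, so that a backend that has failed repeatedly is taken out of rotation for a while instead of being probed on every request. A minimal sketch of the upstream block under that assumption:

upstream localhost {
    #ip_hash;
    # After 2 failed attempts, skip this backend for 30 s before retrying it
    server 127.0.0.1:8081 max_fails=2 fail_timeout=30s;
    server 127.0.0.1:8080 max_fails=2 fail_timeout=30s;
}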