Trying to figure out why I get some timeouts still

These machines are on their own VLAN (just HTTP and MySQL traffic),
and it’s all gigabit. However, I still get occasional timeouts, for
example:

2008/04/20 02:49:16 [error] 21638#0: *55619 upstream timed out (110:
Connection timed out) while reading response header from upstream,
client: 1.2.3.4, server: loadbalanced.server.com, request: "GET
/gallery/videos/image.jpg HTTP/1.1", upstream:
"http://10.13.5.14:80/gallery/videos/image.jpg", host: "somehost.com",
referrer: "http://somehost.com/gallery/videos.php"

I’m wondering whether I should tweak the upstream machines or the
proxy machine; perhaps one benefits from buffer and timeout tuning and
the other doesn’t? Or is there some other odd network bottleneck I
should be looking for? It comes and goes: there will be a handful of
timeouts to a specific server, then it works again without issue. I
can’t figure out why; nothing running on these machines should spike
the CPU at all.
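
For reference, here is a sketch of the proxy-side timeout and buffer
directives that could be tuned on the balancer. These directives
should all be available in nginx 0.6.x; the values are placeholders to
illustrate the knobs, not recommendations:

# Illustrative values only -- tune against real traffic.
location / {
    proxy_pass http://webs;

    # how long to wait to connect to a backend, to send it the
    # request, and to read the response back
    proxy_connect_timeout 10;
    proxy_send_timeout 30;
    proxy_read_timeout 30;

    # buffering of the upstream response on the proxy side
    proxy_buffer_size 8k;
    proxy_buffers 8 16k;
    proxy_busy_buffers_size 32k;
}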

Proxy machine is nginx 0.6.29

Here’s the config:

user www-data www-data;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
working_directory /var/run;
error_log /var/log/nginx.error.log error;
pid /var/run/nginx.pid;

events {
worker_connections 1024;
}

http {
upstream webs {
server web01:80;
server web02:80;
server web03:80;
}
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_max_body_size 100m;
client_header_buffer_size 8k;
large_client_header_buffers 12 6k;
keepalive_timeout 5;
gzip on;
gzip_static on;
gzip_proxied any;
gzip_min_length 1100;
#gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain text/html text/css application/x-javascript
text/xml application/xml application/xml+rss text/javascript;
gzip_disable "MSIE [1-6]\.";
gzip_vary on;
server_names_hash_max_size 4096;
server_names_hash_bucket_size 128;
server {
listen 80;
access_log off;
location / {
proxy_pass http://webs;
proxy_next_upstream error timeout http_500 http_503
http_404 invalid_header;
proxy_read_timeout 30;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Proxied-IP $remote_addr;
}
location ~ /\.ht {
deny all;
}
}
}
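
One way to see which backend is actually timing out would be an
upstream-aware access log on the proxy, instead of access_log off. A
sketch using the standard $upstream_addr and $upstream_response_time
variables provided by the proxy module:

# Sketch: log which upstream served each request and how long it took.
log_format upstream_timing '$remote_addr [$time_local] "$request" '
                           '$status upstream=$upstream_addr '
                           'utime=$upstream_response_time';

# then, inside the server {} block, instead of "access_log off;":
access_log /var/log/nginx.upstream.log upstream_timing;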

Here’s a config from one of the upstream machines, also nginx 0.6.29:

user www-data www-data;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
working_directory /var/run;
error_log /var/log/nginx.error.log debug;
pid /var/run/nginx.pid;

events {
worker_connections 1024;
}

http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
client_max_body_size 100m;
client_header_buffer_size 8k;
large_client_header_buffers 12 6k;
keepalive_timeout 5;
server_tokens off;
gzip off;
gzip_static off;
server_names_hash_max_size 4096;
server_names_hash_bucket_size 128;

then there are tons of server {} blocks, like this one for example:

server {
listen 80;
server_name domain.com www.domain.com;
index index.php index.html;
root /home/user/web/domain.com/;
location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html)$ {
expires 30d;
}
location ~ \.php {
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param REDIRECT_STATUS 200;
fastcgi_pass 127.0.0.1:11003;
fastcgi_index index.php;
}

}

(Note: I’ve put the common elements into include files, but for the
sake of this email I’ve combined it all into one.)
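
Since the error on the proxy is “timed out while reading response
header from upstream”, the nginx-to-PHP leg on each backend is worth
checking as well. A sketch of the FastCGI-side timeout and buffer
directives (illustrative values only, assuming the same
127.0.0.1:11003 pool as above; the fastcgi_param lines are omitted for
brevity):

# Illustrative values only -- how long the backend nginx waits on PHP.
location ~ \.php {
    fastcgi_pass 127.0.0.1:11003;
    fastcgi_index index.php;

    fastcgi_connect_timeout 10;
    fastcgi_send_timeout 30;
    fastcgi_read_timeout 30;

    # buffering of the FastCGI response
    fastcgi_buffer_size 8k;
    fastcgi_buffers 8 16k;
}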

Why are you proxying things twice, Mike?

From what I can see, your setup is

web -> nginx1 -> nginx2 -> php

Why don’t you just do

web -> nginx -> php
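
A minimal sketch of that single-tier layout, assuming the PHP FastCGI
processes on web01-web03 listen on port 11003 (as in the backend
config above) and are reachable from the balancer:

# Sketch: the balancer speaks FastCGI straight to the PHP pools,
# skipping the second nginx hop. Hostnames and port are assumed
# from the configs earlier in the thread.
upstream php_pool {
    server web01:11003;
    server web02:11003;
    server web03:11003;
}

server {
    listen 80;
    # this path has to make sense on the PHP machines, since
    # SCRIPT_FILENAME is built from it
    root /home/user/web/domain.com/;

    location ~ \.php {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_pool;
        fastcgi_index index.php;
    }
}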

Cheers

Dave

Or try DNS whack-a-mole if you don’t have access to a hardware
solution (we rent space on our host’s Alteons).

I have a quad core machine for mail/load balancing/etc. I don’t want
to pay my ISP extra for load balancing I can do on my own :)

I could go back to ipvsadm, but I was getting odd timeouts there as
well. At least nginx is smart enough to try each upstream until it
finds a working one.
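
On that point, the server entries in the upstream block also take
max_fails/fail_timeout parameters that control when nginx marks a
backend as down; a sketch with illustrative values:

# Sketch: after 3 failed attempts within 30s, a backend is considered
# down for 30s and requests go to the remaining servers.
upstream webs {
    server web01:80 max_fails=3 fail_timeout=30s;
    server web02:80 max_fails=3 fail_timeout=30s;
    server web03:80 max_fails=3 fail_timeout=30s;
}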

I have a load balancing server (which used LVS before); I just changed
it over to nginx today.

I figure it will scale well enough for my needs and provide the layer
7 features, SSL, and gzip that I want.

I could use nginx and make the 3 backend servers purely FastCGI pools,
but then I wouldn’t have any load balancing in front of the
webservers. I’d have to do DNS round robin or something.

Yeah, I could… but right now I still have an issue that seems to be
there regardless of what I’m using to proxy and what I’m using to
serve. Something is making things freeze up every so often.

On Sun 20.04.2008 03:35, mike wrote:

Or you can give haproxy (http://haproxy.1wt.eu/) a chance ;)

Cheers

Aleks
