499 errors, FastCGI sockets require restart (Python and custom proxy)

Hi folks,

I posted in the paid help about this too… here is the issue:

We currently have a setup where multiple app servers sit behind NGINX.
We want to provide HMAC-based authentication, comparing a hash of the
request's content or headers against an authenticator computed with a
known key (which varies depending on which URL you are trying to reach).

The request flow is:

1. We receive on port 80.
2. Fifty or so FastCGI sockets are set up as a backend, feeding into a
   Python script (gatekeeper.py).
3. gatekeeper.py has the logic to authenticate the request given
   parameters in the headers, etc.
4. It then makes a request to port :81.
5. :81 is set up in nginx to route to our application servers.
6. The app servers return to nginx, nginx returns to gatekeeper.py, and
   gatekeeper.py returns back to nginx.
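For illustration, the authentication step in gatekeeper.py might look
something like the following minimal sketch. The key table, the header
name, and the use of SHA-256 are my assumptions, not details from the
post; the point is only the shape of an HMAC check with a per-URL key:

```python
import hashlib
import hmac

# Hypothetical per-URL key table; the real keys and paths are unknown.
SECRET_KEYS = {"/search": b"per-url-secret"}

def is_authentic(path, body, client_mac_hex):
    """Recompute the HMAC over the request body with the key for this
    URL and compare it to the authenticator sent by the client
    (e.g. in an X-Auth header -- header name is an assumption)."""
    key = SECRET_KEYS.get(path)
    if key is None:
        return False
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    # Constant-time comparison, to avoid leaking the digest via timing.
    return hmac.compare_digest(expected, client_mac_hex)
```

A gatekeeper along these lines would reject the request early (e.g. with
a 403) when `is_authentic` returns False, and only proxy to :81 on success.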

It has been mostly successful, and fast enough, but we're having issues
with FastCGI threads timing out and having to be recreated every so
often. An integrated solution would be ideal.

Problem is, we keep getting these in the log:

2009/04/04 20:59:28 29828#0: *754537 upstream timed out (110:
Connection timed out) while reading response header from upstream,
client:, server: balancer1.search.xxxxxxxx.net, request:
HTTP/1.0", upstream: "fastcgi://unix:/tmp/fcgi-gate-46.socket:", host:

And we get lots of them. While requests typically make it through after
trying 4 or 5 threads, it gets worse and worse until I delete the
sockets and recreate them; then it runs absolutely fine for hours under
the same traffic.
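The "delete the sockets and recreate them" workaround could be scripted
as something like the sketch below. The glob pattern follows the
`/tmp/fcgi-gate-46.socket` path from the log line; everything else
(stopping the FCGI workers first, respawning them afterwards) is assumed
to happen around this step:

```python
import glob
import os

def remove_stale_sockets(pattern="/tmp/fcgi-gate-*.socket"):
    """Unlink leftover FastCGI unix-socket files matching `pattern`
    so that fresh workers can bind new sockets at the same paths.
    Returns the list of paths removed. The workers using these
    sockets must already be stopped, or connections will be lost."""
    removed = []
    for path in glob.glob(pattern):
        os.unlink(path)
        removed.append(path)
    return removed
```

This only automates the manual reset; it does not fix the underlying
cause of the upstream timeouts.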

Any ideas? This one is really killing me. Totally open to paying
someone to help with this too.


Posted at Nginx Forum: http://forum.nginx.org/read.php?2,808,808#msg-808
