We’re working on a server-to-server integration effort, using nginx as
our front end. The guys on the other side are using ab (ApacheBench) to
perform initial testing. ApacheBench includes a switch to turn on
keepalive request support, but it only sends HTTP/1.0 requests.
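The invocation is along these lines (the host and request counts here are placeholders, not their actual test parameters):

    # -k asks for keep-alive, but ab still sends HTTP/1.0 requests
    ab -k -n 1000 -c 10 http://backend.example.com/some/path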
I don’t recommend using it. “ab” is slow and buggy.
Thanks for your response. We’ve asked the other company if they might
try something other than ab for their testing, but so far they seem to
want to keep using it. As a result, we’re stuck with supporting it.
From the testing that the other company has done, and from our own
attempts to reproduce their results, it does not appear that nginx
supports keepalives on HTTP/1.0 requests. We see the request header go
out with keep-alive requested, but the response header contains
Connection: close.
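Roughly, the exchange looks like this (trimmed to the relevant headers; host and path are placeholders):

    GET /some/path HTTP/1.0
    Host: backend.example.com
    Connection: Keep-Alive

    HTTP/1.1 200 OK
    Content-Type: text/html
    Connection: close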
If nginx supports this by default, that implies we have an incompatible
config option somewhere. keepalive_timeout is 600 for this virtual
server (we also tried 30, with the same result). We aren’t using any
other keepalive-related directives.
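For reference, the relevant part of the server block is minimal (names are placeholders, and everything unrelated is elided):

    server {
        listen 80;
        server_name backend.example.com;  # placeholder
        keepalive_timeout 600;            # also tried 30 - same result
        ...
    }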
I can repeat your results with static files - keepalives work with
HTTP/1.0 in that scenario. However, for responses served via FastCGI, a
connection close is consistently sent. I found a post on the Russian
forum (thank you, Google Translate) which implies that the
Content-Length header needs to be set on the application side. I had
assumed that nginx would calculate the length and add the header
itself, but this does not appear to be the case.
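For anyone else who hits this: HTTP/1.0 has no chunked encoding, so without a declared length the only way nginx can mark the end of the body is to close the connection. Setting the header on the application side should resolve it. A minimal sketch of the idea, assuming a Python/flup FastCGI backend (our real application is different, and the names here are illustrative):

    from flup.server.fcgi import WSGIServer

    def app(environ, start_response):
        body = b"hello, world\n"
        # The explicit Content-Length is what allows nginx to keep the
        # HTTP/1.0 connection open instead of sending Connection: close.
        start_response('200 OK', [
            ('Content-Type', 'text/plain'),
            ('Content-Length', str(len(body))),
        ])
        return [body]

    # Bind on the address that nginx's fastcgi_pass points at.
    WSGIServer(app, bindAddress=('127.0.0.1', 9000)).run()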
This would delay the response until the entire body had been buffered
by nginx.
The body is small, so the buffering isn’t an issue. But because nginx
buffers the full response, I had expected nginx to calculate the
response size and pass that back to the client.
I’m sure there’s an excellent reason that it doesn’t do this; it just
ended up being somewhat non-intuitive for me.