Currently I'm running nginx 1.2.4 with a uwsgi backend. To my
understanding, $upstream_response_time should represent the time taken
to deliver content by the upstream (in my case the uwsgi backend). It
looks like that's not the case for me.
On the backend I'm running uWSGI with 1 worker only. The backend app
(PSGI) generates 900kB of output, then waits 10s and finishes the response:
my $app = sub {
    my $env = shift;
    return sub {
        my $respond = shift;
        my $writer = $respond->([200, ['Content-Type', 'text/html']]);
        for (1..900) {
            my $dt = localtime;
            $writer->write("[ $dt ]: " . "x" x 1024 . "\n");
        }
        sleep 10;
        $writer->close();
    };
};
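A setup like this can be run, for example, via uWSGI's PSGI plugin; the
flags below are only illustrative (they assume uWSGI was built with the
PSGI plugin and the app is saved as app.psgi):

    # sketch only: plugin availability depends on the uWSGI build
    uwsgi --plugins psgi --psgi app.psgi \
          --socket 127.0.0.1:3031 \
          --processes 1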
When I use a "slow" client connecting to nginx (e.g. socat
TCP:127.0.0.1:80,rcvbuf=128 STDIO) I can see the following happening:
the backend server gets busy only for ~10s (this is what I expect). If I
issue 2 concurrent requests, one is served immediately and the 2nd one
after ~10s. This behaviour would indicate that the backend was able to
deliver the content in ~10s (the whole response was buffered, as the
buffer size is big enough to accommodate the full response and we have
only 1 worker at the backend). Unfortunately the access log disagrees,
as it shows $upstream_response_time almost equal to $request_time
(e.g. ~1000s instead of the expected ~10s). Is this expected behaviour?
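For completeness, the timings above come from logging both variables
side by side; a log_format along these lines (the format name is
arbitrary) shows the discrepancy:

    log_format upstream_timing '$remote_addr "$request" '
                               'request_time=$request_time '
                               'upstream_response_time=$upstream_response_time';

    access_log /var/log/nginx/access.log upstream_timing;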
Regards,
On Tue, Oct 02, 2012 at 03:44:38PM +0200, Marcin D. wrote:
> [...]
>
> uwsgi_buffering off;
>
> [...]
>
> When I use a "slow" client connecting to nginx (e.g. socat
> TCP:127.0.0.1:80,rcvbuf=128 STDIO) I can see the following happening:
> the backend server gets busy only for ~10s (this is what I expect). If I
> issue 2 concurrent requests, one is served immediately and the 2nd one
> after ~10s. This behaviour would indicate that the backend was able to
> deliver the content in ~10s (the whole response was buffered, as the
> buffer size is big enough to accommodate the full response and we have
> only 1 worker at the backend). Unfortunately the access log disagrees,
> as it shows $upstream_response_time almost equal to $request_time
> (e.g. ~1000s instead of the expected ~10s). Is this expected behaviour?
You asked nginx to work in unbuffered mode, and in this mode nginx
doesn't pay much attention to what happens with the backend connection
if it isn't able to write the data it already has to a client. In
particular it won't detect a connection close by the backend (and
won't stop counting $upstream_response_time).
This could probably be somewhat enhanced, but if you care about
$upstream_response_time it most likely means you don't need
"uwsgi_buffering off", and vice versa.
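That is, with the default buffered mode and buffers sized to hold the
response, nginx reads the whole answer from the backend at once; the
sizes below are just an illustration for a ~900k response, not a
recommendation:

    uwsgi_buffering on;
    uwsgi_buffer_size 64k;
    uwsgi_buffers 16 64k;
    uwsgi_max_temp_file_size 2m;  # spill to disk if the buffers fill up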
> You asked nginx to work in unbuffered mode, and in this mode nginx
> doesn't pay much attention to what happens with the backend connection
> if it isn't able to write the data it already has to a client. In
> particular it won't detect a connection close by the backend (and
> won't stop counting $upstream_response_time).
I see.
> This could probably be somewhat enhanced, but if you care about
> $upstream_response_time it most likely means you don't need
> "uwsgi_buffering off", and vice versa.
Well, I don't completely agree. As was explained in "Re: How do
proxy_module response buffering options work?", some sort of buffering
still takes place even though we work in unbuffered mode. The only
reason we went for unbuffered mode is latency.
Regards,
Marcin