Forum: NGINX bugreport - connection broke on slow clients in proxy mode

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Tomáš Hála (Guest)
on 2009-02-06 15:24
(Received via mailing list)
Hello,
we use nginx as a reverse proxy in front of an Apache server, and if we
use the proxy_max_temp_file_size directive to limit the size of file
buffering, downloading larger files over a slow connection always breaks
and it is necessary to start the download again.

For example, to reproduce the problem:
Use "proxy_max_temp_file_size 10M" in the proxy configuration, generate
a roughly 40MB binary file in the document root of the proxied Apache
(or possibly another web server as well) and try to download it with the
speed limited to 100k/sec. For example with wget:
wget -t 1 --limit-rate=100k http://server/file
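For reference, a minimal proxy configuration along these lines might look like the fragment below (the backend address and location are placeholders, assuming the proxied Apache listens locally on another port):

```nginx
# fragment of nginx.conf (inside a server block); backend address is a placeholder
location / {
    proxy_pass http://127.0.0.1:8080;   # the proxied Apache on a different TCP port
    proxy_max_temp_file_size 10m;       # cap on-disk response buffering at 10 MB
}
```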

Downloading will fail at approximately 12MB. If you download at full
speed (10M in my case), there is no problem. If you download it
directly from the Apache server running on a different TCP port, there
is also no problem. The problem appears on the latest stable (0.6.35)
version as well as on the latest development version (0.7.33).
Feel free to ask me for more details.
Best Regards Tomas Hala
Maxim Dounin (Guest)
on 2009-02-06 16:02
(Received via mailing list)
Hello!

On Fri, Feb 06, 2009 at 03:15:26PM +0100, Tomáš Hála wrote:

> 100k/sec. For example with wget:
> wget -t 1 --limit-rate=100k http://server/file
>
> Downloading will fail at approximately 12MB. If you download at full
> speed (10M in my case), there is no problem. If you download it
> directly from the Apache server running on a different TCP port, there
> is also no problem. The problem appears on the latest stable (0.6.35)
> version as well as on the latest development version (0.7.33).
> Feel free to ask me for more details.

I've seen a similar problem caused by client timeouts in Apache,
since from Apache's point of view the client downloads about 10M (plus
nginx's in-memory proxy buffers) and then stops downloading for a
relatively long time (the time needed for the client to download at
least one memory buffer from nginx).
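(If this hypothesis is right, the relevant knob on the Apache side would be its Timeout directive, which among other things limits how long Apache waits for TCP activity while transmitting a response. The value below is only the historical shipped default, shown for illustration:)

```apache
# httpd.conf -- seconds Apache waits for network activity before dropping
# the connection; 300 was the shipped default in Apache 2.x of that era
Timeout 300
```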

Maxim Dounin
Tomáš Hála (Guest)
on 2009-02-06 16:46
(Received via mailing list)
Maxim Dounin wrote:
>> For example, how to replicate the problem:
>> version as well as on latest development (0.7.33).
>
Hello,
that makes sense. It is probably a problem with our understanding of the
meaning of the proxy_max_temp_file_size directive. Based on the
documentation (wiki), we thought that if the file is larger than the
limit, it would be served synchronously. When I strace the Apache
process serving this file, it seems that nginx first downloads an amount
corresponding to proxy_max_temp_file_size, and only after the client
reaches this point does the transfer become synchronous. So the
documentation is a little misleading. But I think I understand why it is
implemented like this: it is easier to wait until the buffer fills up
than to detect the size of the served file beforehand.
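The two-phase behaviour described above can be sketched as a toy model (an illustration only, not nginx's actual implementation): the proxy eagerly takes up to proxy_max_temp_file_size of the response off the backend's hands, and the remainder is relayed only at the client's pace.

```python
MB = 1024 * 1024

def proxy_transfer(file_size, temp_limit):
    """Split a response into the part buffered eagerly (up to the
    temp-file limit) and the part relayed synchronously at the
    client's pace."""
    buffered = min(file_size, temp_limit)
    synchronous = file_size - buffered
    return buffered, synchronous

# 40 MB file, proxy_max_temp_file_size 10M: roughly the first 10 MB is
# taken from Apache immediately; the remaining ~30 MB is streamed only
# as fast as the rate-limited client reads it.
buffered, synchronous = proxy_transfer(40 * MB, 10 * MB)
```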
Thanks for your hint.
BR Tomas Hala