Temp disk space usage

I’m running nginx 0.5.26 on OpenBSD 4.0 as a load-balancing reverse
proxy to two backend web servers. Recently we tried serving several
moderately large static files (around 100-200MB each), with numerous
concurrent connections downloading them at once.

When this started, the disk space usage on the machine running nginx
began climbing rapidly, and it soon ran out of space. There are only a few
GB that nginx can use; we did not anticipate that this server would need
much disk space, as it’s not intended to be a caching proxy. Anyway, once
the disk was full, nginx understandably started throwing “writev() failed
(28: No space left on device) while reading upstream” errors, but when
nginx was restarted, everything was fine (and we regained several GB of
space).

The only nginx.conf directive we have that touches on
buffering/caching/temp files is large_client_header_buffers 4 8K;, which
isn’t relevant here. Everything else related to buffers and temp files
should be at its default.

Is nginx buffering the entire file from the backend server on disk
while serving it to the client? Are there any options to tune this, or
limit disk usage by temporary files?

On Wednesday 17 October 2007, Andrew Deason wrote:

Is nginx buffering the entire file from the backend server on disk
while serving it to the client? Are there any options to tune this, or
limit disk usage by temporary files?

“proxy_max_temp_file_size 1M”
default is 1G
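
For example (purely illustrative; the location and the upstream name
“backend” are placeholders, assumed to be defined elsewhere):

 location / {
     proxy_pass http://backend;

     # Cap how much of the upstream reply nginx may spool to disk.
     # A value of 0 disables temporary files entirely.
     proxy_max_temp_file_size 1m;
 }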

On Wed, 17 Oct 2007 23:19:14 +0200
Roxis [email protected] wrote:

On Wednesday 17 October 2007, Andrew Deason wrote:

Is nginx buffering the entire file from the backend server on disk
while serving it to the client? Are there any options to tune this,
or limit disk usage by temporary files?

“proxy_max_temp_file_size 1M”
default is 1G

Thank you. It would be nice if someone were to describe it on
http://wiki.codemongers.com/NginxHttpProxyModule#proxy_max_temp_file_size,
though.

Hello!

On Wed, 17 Oct 2007, Andrew Deason wrote:

Is nginx buffering the entire file from the backend server on disk
while serving it to the client? Are there any options to tune this, or
limit disk usage by temporary files?
By default, nginx will buffer the reply from upstream into memory buffers
(specified by proxy_buffer_size/proxy_buffers) and then to disk, into the
directory specified by proxy_temp_path.

You may avoid disk buffering by setting proxy_buffering to off.

Detailed docs in English can be found here:

http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffering
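
A minimal sketch (the upstream name “backend” and the temp path below are
just placeholders, not anything from this thread):

 location / {
     proxy_pass http://backend;

     # Pass the response to the client as it arrives from the upstream,
     # without buffering it in memory or spooling it to disk first.
     proxy_buffering off;
 }

 # If buffering stays on, the spill directory can be relocated, e.g.:
 #   proxy_temp_path /var/tmp/nginx_proxy;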

Note: it’s a really good idea to serve static files directly from nginx,
not from the backend servers.
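
E.g. (paths here are illustrative only) the large downloads could be
served straight off local disk instead of being proxied:

 location /downloads/ {
     # Serve these files from the nginx box itself; nothing is proxied
     # or buffered, so no temporary files are involved.
     root /var/www;
 }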

Maxim D.

Hello!

On Tue, 30 Oct 2007, Igor C. wrote:

So ideally I’d like to do something like:

 fastcgi_param SERVER_NAME $server_name;
 if ($http_x_forwarded_host) {
     fastcgi_param SERVER_NAME $http_x_forwarded_host;
 }

but: firstly “fastcgi_param” is not supported inside “if”, and secondly I
don’t know how or if this “overriding” would work.

What’s the best way to do this, please?

You may try something like this:

 set $blah $server_name;
 # Override the default only when the external proxy supplied the header.
 if ($http_x_forwarded_host) {
     set $blah $http_x_forwarded_host;
 }

 fastcgi_param SERVER_NAME $blah;

Maxim D.

Hi Igor and nginx people,

We use nginx as a front end on various development machines in our
studio to route to installations of Apache 1, Apache 2, PHP/FCGI, and
Ruby/Mongrel on each machine, as appropriate depending on SERVER_NAME
conventions, using our private DNS domain.

We have some circumstances under which we need to grant external
access to machines with this setup, and we do this using Apache
reverse proxy on the firewall. (Eventually this will be replaced by
nginx, but it’s got a lot of legacy stuff which will need some time
set aside for converting.) Hence http://outside.dns.name/index.php is
proxied through to http://inside.dns.name/index.php.
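
For reference, once that box is converted to nginx, the forwarding side
might look roughly like this (same placeholder names as above; this is a
sketch, not our real configuration):

 server {
     server_name outside.dns.name;
     location / {
         proxy_pass http://inside.dns.name;
         # Pass the externally requested hostname on to the inside server.
         proxy_set_header X-Forwarded-Host $host;
     }
 }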

This means that the SERVER_NAME value seen by the web application
(PHP script in this case) is inside.dns.name.

We often configure web applications on a per-host basis, so that e.g.
database configuration information is kept in a hash keyed by
SERVER_NAME values. This means we need to have SERVER_NAME contain
the outside.dns.name.

We’ve achieved this straightforwardly by setting

fastcgi_param SERVER_NAME $http_x_forwarded_host;

which works nicely, but means that all applications responding to
different DNS names (not viewed externally via the Apache reverse proxy)
fail, because those requests don’t carry an X-Forwarded-Host header and so
fall back to the local machine hostname, which is not in the configuration
hash.

We can get round this by creating separate server {} configurations
for applications which need to be served behind a remote reverse
proxy, but that defeats the object of our generic per-host
configuration based on hostnames.

So ideally I’d like to do something like:

 fastcgi_param SERVER_NAME $server_name;
 if ($http_x_forwarded_host) {
     fastcgi_param SERVER_NAME $http_x_forwarded_host;
 }

but: firstly “fastcgi_param” is not supported inside “if”, and
secondly I don’t know how or if this “overriding” would work.

What’s the best way to do this, please?

Thanks very much
Igor


Igor C. // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com

Hi Maxim,

On 30 Oct 2007, at 16:50, Maxim D. wrote:

You may try something like this:

[...]

 fastcgi_param SERVER_NAME $blah;

Maxim D.

That works perfectly, thank you very much!

All the best,
Igor


Igor C. // POKE // 10 Redchurch Street // E2 7DD // +44 (0)20 7749 5355 // www.pokelondon.com