Upstream sent too big header while reading response header from upstream

After I added some CORS headers to my API, one of the users of my
nginx-based system complained about occasional errors with:

upstream sent too big header while reading response header from upstream

He also reported having worked around the issue using:

proxy_buffers 8 512k;
proxy_buffer_size 2024k;
proxy_busy_buffers_size 2024k;
proxy_read_timeout 3000;

Unfortunately, however, I was unable to reproduce this problem myself. I
also had a hard time figuring out what the exact problem is.

Some questions:

  • What exactly does this error mean? Does it mean that response
    contained too many headers? How many is too many?
  • Is it wise to increase the buffer sizes as the user reported? What
    would be sensible defaults?

Hello!

On Wed, Feb 05, 2014 at 03:48:50PM -0800, Jeroen O. wrote:

proxy_read_timeout 3000;

Unfortunately, however, I was unable to reproduce this problem myself. I
also had a hard time figuring out what the exact problem is.

Some questions:

  • What exactly does this error mean? Does it mean that response
    contained too many headers? How many is too many?

Response headers should fit into proxy_buffer_size, see
http://nginx.org/r/proxy_buffer_size. If they don’t, the error
is reported.

  • Is it wise to increase the buffer sizes as the user reported? What
    would be sensible defaults?

Certainly not. In most cases the defaults (4k on most platforms)
are appropriate. If big cookies are expected to be returned by a
proxied server, something like 32k or 64k will be good enough. If
larger values are needed, it indicates a backend problem.
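For illustration, this is where the directive lives; the upstream name and values here are made up:

```nginx
location /api/ {
    proxy_pass http://backend;   # hypothetical upstream
    # The default proxy_buffer_size is one memory page (4k or 8k);
    # raise it only if the backend is known to send large headers,
    # e.g. big cookies:
    proxy_buffer_size 32k;
}
```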


Maxim D.
http://nginx.org/

On Thu, Feb 6, 2014 at 4:18 AM, Maxim D. [email protected] wrote:

Response headers should fit into proxy_buffer_size, see
http://nginx.org/r/proxy_buffer_size. If they don’t, the error
is reported.

Does the “size” here refer to the number of characters that appear up
to the blank line that separates the headers from the body in the
response? That looks like it would be around 4k in my case, so perhaps
I’ll increase proxy_buffer_size to 8k.
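(For reference: the size that must fit into proxy_buffer_size is the whole header block as sent by the upstream, i.e. the status line plus all header lines, up to and including the blank line. A quick way to count those bytes for a raw response; the example data below is made up, not from the actual server:)

```python
# Count the bytes of an HTTP response's header block: everything up to
# and including the CRLF CRLF that separates headers from the body.
raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html\r\n"
    b"Set-Cookie: session=abc123\r\n"
    b"\r\n"
    b"<html>...</html>"
)
header_block, _, body = raw.partition(b"\r\n\r\n")
header_size = len(header_block) + 4  # include the terminating CRLF CRLF
print(header_size)  # 72 bytes here; this is what counts against proxy_buffer_size
```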

Is it also necessary to modify proxy_busy_buffers_size and
proxy_buffers to deal with responses with many headers?

Hello!

On Thu, Feb 06, 2014 at 09:11:31AM -0800, Jeroen O. wrote:

increase proxy_buffer_size to 8k.

Yes.

Is it also necessary to modify proxy_busy_buffers_size and
proxy_buffers to deal with responses with many headers?

No.


Maxim D.
http://nginx.org/

Hi Maxim & Jeroen,

I’m the user Jeroen mentioned. I’m sorry for only being able to produce
sporadic errors earlier; I have now made a test case which reliably
produces the error, both on our server and on Jeroen’s server (so it’s
hopefully not just my amateur status with nginx).

Of course, the ridiculously large proxy settings were only chosen in
desperation, but I can now report that no increase of the proxy buffer
settings solves the problem at all (or even shifts the threshold at
which the error messages occur). So I think it’s safe to say the error
message “upstream sent too big header while reading response header
from upstream” is misleading (unless the CORS headers that Jeroen
added are somehow inflated through the POST request; I wouldn’t know
why that would be).

I can also now rule out that it’s due to the headers sent or received.

I receive only the following headers when the 502s occur:

HTTP/1.1 100 Continue

HTTP/1.1 502 Bad Gateway
Server: nginx/1.4.4
Date: Mon, 10 Feb 2014 10:46:57 GMT
Content-Type: text/html
Content-Length: 172
Connection: keep-alive

And I sent only these:

POST /ocpu/library/base/R/identity HTTP/1.1
Host: ourhost.xx
Accept: */*
Content-Length: 3545
Content-Type: application/x-www-form-urlencoded
Expect: 100-continue

I do send a large request, 2241 characters (inlined JSON-ified data),
but that is not near the upper limit of POST requests as I know them.

Here’s the issue on the OpenCPU GitHub, where I uploaded my test cases
(sorry for it being a bit primitively done; I’m in a bit of a tight
spot time-wise here and just an amateur):
https://github.com/jeroenooms/opencpu/issues/76

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,247230,247327#msg-247327

Hi,

after some further testing I discovered that I had gotten the order in
which the various nginx config files are applied wrong. Because
location {} blocks are not merged but overridden, my directives never
took effect.

Setting proxy_buffer_size 8k; kept the errors from occurring.
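In case it helps others, this is the pitfall in sketch form (paths and upstream name invented): nginx selects a single location block per request and does not merge directives from other location blocks into it, so the directive has to be repeated, or set at server/http level:

```nginx
server {
    # Setting it here would apply to all locations below ...
    # proxy_buffer_size 8k;

    location / {
        proxy_pass http://backend;
        proxy_buffer_size 8k;   # applies only to requests matched here
    }

    location /ocpu/ {
        proxy_pass http://backend;
        proxy_buffer_size 8k;   # ... otherwise it must be repeated here
    }
}
```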

As I wrote on GitHub (https://github.com/jeroenooms/opencpu/issues/76),
it still seems like the error message was misleading here, because the
headers sent were identical (except for the exact Content-Length) and
the headers received were pretty much the same as well. There are no
cookies at all involved here.

Thanks for the advice and nginx!

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,247230,247334#msg-247334

Hello,
try this
http://stackoverflow.com/questions/13894386/upstream-too-big-nginx-codeigniter

Regards,
Basti

Hi Basti,

thanks, I found the SO post myself. I had not set up the directives
properly, so I thought the fix didn’t work. It does now. I also think
they described a different problem: in my case no cookies were sent,
the headers were fairly small, and two requests with pretty much
identical headers sent/received had different results (only one of
them 502ed).

I’m very interested in learning what exactly may have caused the
message in my case, so as to know the boundaries within which my
requests will work. I have now set the buffer_size to 8k and I don’t
see failures with Content-Lengths that go far beyond that (though
maybe compression has to be considered?).

Best regards,

Ruben

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,247230,247335#msg-247335

Yes, that’s quite probably it!
Then I guess it’s on Jeroen to weigh in; I don’t know the exact
reasoning for doing so. I would have thought it would be more
appropriate to hash the request body in the cache key, but maybe
that’s not possible using nginx?
OpenCPU allows for request bodies up to 500M by default; what kind of
settings would be necessary to make that (or something more reasonable
like 50M) play well with the buffer? I’m guessing the buffer doesn’t
have to be as large as the maximal request, right?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,247230,247352#msg-247352

On Mon, Feb 10, 2014 at 5:15 AM, Maxim D. [email protected]
wrote:

it is likely the cause, as the config includes the following lines:

proxy_cache_methods POST;
proxy_cache_key "$request_method$request_uri$request_body";

Yikes, I was not aware that the cache key gets stored into the buffers
as well. Is this mentioned anywhere in the manual?

So we need to set proxy_buffer_size to a value greater than the sum of
client_body_buffer_size + header size? Or, alternatively, is there a
way to use a fixed-length hash of the request body in proxy_cache_key?

Hello!

On Mon, Feb 10, 2014 at 07:56:22AM -0500, rubenarslan wrote:

set the buffer_size to 8K and I don’t see failures with Content-Lengths that
go far beyond that (though maybe compression has to be considered?).

Another possible cause may be the use of $request_body in
proxy_cache_key. The cache header, including the cache key, is placed
into the proxy buffer if caching is enabled, and it effectively
reduces the proxy_buffer_size available for reading response headers.

Assuming you are using configs like this one:

https://github.com/jeroenooms/opencpu-deb/blob/master/opencpu-cache/nginx/opencpu-ocpu.conf

it is likely the cause, as the config includes the following lines:

proxy_cache_methods POST;
proxy_cache_key "$request_method$request_uri$request_body";


Maxim D.
http://nginx.org/

Hello!

On Mon, Feb 10, 2014 at 09:45:50AM -0800, Jeroen O. wrote:

On Mon, Feb 10, 2014 at 5:15 AM, Maxim D. [email protected] wrote:

it is likely the cause, as the config includes the following lines:

proxy_cache_methods POST;
proxy_cache_key "$request_method$request_uri$request_body";

Yikes, I was not aware that the cache key gets stored into the buffers
as well. Is this mentioned anywhere in the manual?

Likely not. It’s mostly a proxy cache implementation detail: nginx
needs a way to write a cache header (which includes the key) to a
cache file, and it places it in the buffer just before the response
headers from the upstream.

So we need to set proxy_buffer_size to a value greater than the sum of
client_body_buffer_size + header size?

If you use $request_body in the cache key, yes. And don’t forget to
account for the other variables in proxy_cache_key as well.
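To make that concrete, a sketch with illustrative numbers only; the right values depend on the actual body and header sizes:

```nginx
# If $request_body appears in the cache key, the whole key (method +
# URI + body) must fit into proxy_buffer_size together with the
# response headers read from the upstream.
client_body_buffer_size 16k;  # bodies up to 16k are kept in memory
proxy_buffer_size       32k;  # ~16k body in the key + headers + margin
proxy_cache_key "$request_method$request_uri$request_body";
```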

Or, alternatively, is there a way to use a fixed-length hash of the
request body in proxy_cache_key?

As of now, hashes may be calculated using, e.g., the embedded perl
module or third-party modules.
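For example, here is an untested sketch using the embedded perl module. It assumes nginx was built with ngx_http_perl_module, and note that $r->request_body only returns the body when it has not been spooled to a temporary file (see client_body_buffer_size):

```nginx
perl_set $request_body_md5 'sub {
    use Digest::MD5 qw(md5_hex);
    my $r = shift;
    return md5_hex($r->request_body // "");
}';

# A fixed-length 32-character hash instead of the raw body:
proxy_cache_key "$request_method$request_uri$request_body_md5";
```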


Maxim D.
http://nginx.org/
