On the client side there is an upload control. I tried to upload a
file of 3.7MB. The client request's content type is
“multipart/form-data”, and it carries an “Expect: 100-continue” header.
Through tcpdump, I could see nginx immediately return an “HTTP/1.1 100
Continue” response and start reading data. After buffering the
uploaded data, nginx started to send it to jetty. However, this time
no “Expect: 100-continue” header was proxied, because HTTP/1.0 is
used.
After sending part of the data, nginx stopped proxying the rest,
but the connection was kept open. After 30s, jetty reported a timeout
exception and returned a response. Nginx finally proxied this response
back to the client.
I simply merged all the TCP segments sent from nginx to jetty,
and found that only about 400KB were proxied.
None of the proxy buffer settings were explicitly set, so the default
values applied. I tried “proxy_buffering off;”, re-did the experiment
above, and found the result was the same.
I also tried to inspect the temp file written by nginx, but it is
automatically removed when everything is done. Is there any way to keep it?
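Perhaps “client_body_in_file_only” could help here? If I read the docs
right, with “on” the temp files are not removed after request
processing. A sketch (the paths are just illustrative):

```nginx
# Buffer every request body to a file and keep it after processing,
# so the buffered upload can be inspected afterwards.
client_body_temp_path /opt/data/tmp/nginx/client;
client_body_in_file_only on;   # "on" keeps temp files; "clean"/"off" remove them
```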
Therefore, I'm wondering: is this expected? Did I make a mistake in
configuring the proxy buffers? Do I have to use the third-party “upload”
module (http://www.grid.net.ru/nginx/upload.en.html) to make it work?
On Tue, Jun 05, 2012 at 03:33:24AM -0400, speedfirst wrote:
Through tcpdump, I could see nginx immediately return an “HTTP/1.1 100
Continue” response and start reading data. After buffering the
uploaded data, nginx started to send it to jetty. However, this time
no “Expect: 100-continue” header was proxied, because HTTP/1.0 is
used.
So far this is expected behaviour.
After sending part of the data, nginx stopped proxying the rest,
but the connection was kept open. After 30s, jetty reported a timeout
exception and returned a response. Nginx finally proxied this response
back to the client.
I simply merged all the TCP segments sent from nginx to jetty,
and found that only about 400KB were proxied.
This is obviously not expected.
Anything in the error log? Could you please provide the tcpdump and debug
log? It would also be cool to see which version of nginx you are
using, i.e. please provide “nginx -V” output, and a full config.
The config is missing at least “client_max_body_size”, as by default a
3.5MB upload will just be rejected.
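Something like the following is needed before a body of that size will
even be accepted (the value here is arbitrary; “0” disables the check
entirely):

```nginx
# Default is 1m, so larger uploads are rejected with
# 413 Request Entity Too Large before any proxying happens.
client_max_body_size 10m;
```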
None of the proxy buffer settings were explicitly set, so the default
values applied. I tried “proxy_buffering off;”, re-did the experiment
above, and found the result was the same.
Proxy buffers, as well as proxy_buffering, don't matter here, as they
only affect sending the response from an upstream to a client.
I also tried to inspect the temp file written by nginx, but it is
automatically removed when everything is done. Is there any way to keep it?
It's sad you skipped them all and only did a debug_http log. With a
full debug log it would be clearly visible whether sending of the
request goes on (i.e. how many bytes are sent).
2012/06/06 01:28:22 [debug] 15621#0: *5 http upstream process header
2012/06/06 01:28:22 [debug] 15621#0: *5 http proxy status 200 “200 OK”
On the other hand, it looks like sending of the request is still
in progress, and the upstream server replies before the request is
completely sent. It might indicate that it just doesn't wait long
enough, and that the problem is in the backend (and slow connectivity
to the backend).
I don't see the pause in request sending you claimed in your
initial message.
On the other hand, here is the ~30s pause you probably talked
about. It might indicate that the upstream tries to send headers
before “receiving and interpreting a request message” (per HTTP
RFC 2616 it should do so “after”), which confuses nginx and makes
it think further body bytes aren't needed.
You may want to dig further into what goes on on the backend to
understand the real problem.
I retried the test with “client_max_body_size 0;”.
The size of the tmp file is as expected, about 3.7M:
root@zm-dev03:/opt/data/tmp/nginx/client# ll 0000000001
-rw------- 1 speedfirst speedfirst 3914486 2012-06-06 01:27 0000000001
On the other hand, here is the ~30s pause you probably talked
about. It might indicate that the upstream tries to send headers
before “receiving and interpreting a request message” (per HTTP
RFC 2616 it should do so “after”), which confuses nginx and makes
it think further body bytes aren't needed.
You may want to dig further into what goes on on the backend to
understand the real problem.
Yes, I agree, and I also see where the real problem is. I just created a
fake backend (which simply receives the uploaded data and writes it to
disk), and nginx correctly passed all the data to it.
Let me hack the backend code to see what's wrong. I will update if I
find something new.
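For reference, the fake backend was roughly along these lines (a
minimal stand-in sketch, not the exact code; the port, output path, and
handler name are arbitrary):

```python
# Minimal stand-in backend: accept a POST, write the raw body to disk,
# and reply 200, so nginx's proxying can be tested without jetty.
from http.server import BaseHTTPRequestHandler, HTTPServer

OUT_PATH = "/tmp/upload.bin"   # arbitrary output path for this sketch

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)        # read the full request body
        with open(OUT_PATH, "wb") as f:
            f.write(body)                     # dump it to disk unmodified
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):             # keep the console quiet
        pass

def run(port=8081):
    HTTPServer(("127.0.0.1", port), UploadHandler).serve_forever()
```

Comparing the size of OUT_PATH with the original file shows whether the
whole body arrived at the backend.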
Thanks for your inspiring comments.