Slow performance when sending a large file upload request via proxy_pass

I’m trying to diagnose some strange behavior in my web app, and at the
moment it seems like nginx may be at fault, though I’d be happy to learn
otherwise.

On the client side, I’m using flow.js
(https://github.com/flowjs/flow.js) to upload a file to the server.
This library should allow me to upload very large files by splitting
them up into (by default) 1MB chunks, and sending each chunk as a
standard file form upload request.

On the server, I am connecting to a Python WSGI server (gunicorn) via
try_files / proxy_pass. The configuration is very standard:

location / {
    root   /var/www;
    index  index.html index.htm;
    try_files $uri $uri/ @proxy_to_app;
}

location @proxy_to_app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    proxy_pass   http://app_server;
}
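
One diagnostic I would consider adding here: a custom log format that
records $upstream_response_time next to $request_time (both are standard
variables from nginx’s log and upstream modules). If $request_time is large
but $upstream_response_time stays around 135ms, the extra time is being
spent receiving the body from the client or buffering it inside nginx,
not waiting on gunicorn. A sketch (format name is arbitrary):

```nginx
# Compare total time nginx spends on the request ($request_time)
# against time spent waiting on the upstream ($upstream_response_time).
log_format upstream_timing '$remote_addr [$time_local] "$request" '
                           '$status req_time=$request_time '
                           'up_time=$upstream_response_time';

access_log /var/log/nginx/timing.log upstream_timing;
```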

The Python code is pretty simple, mainly just opening the file and
writing the data. According to the gunicorn access log, each request
takes around 135ms:

127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 - … 0.135206
127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 - … 0.136749
127.0.0.1 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.0" 200 - … 0.137314

But in the nginx access log, the $request_time varies wildly and is
usually very large:
10.0.0.0 - - [17/Jul/2016:05:07:06 +0000] "POST /files HTTP/1.1" 200 0.956
10.0.0.0 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.1" 200 0.553
10.0.0.0 - - [17/Jul/2016:05:07:07 +0000] "POST /files HTTP/1.1" 200 0.888

At first I thought it might be the network itself taking a long time to
send the data, but the network logs in the browser don’t seem to bear
this out. Once the socket connection is established, Chrome says the
request time is often as low as 8ms, with the extra ~0.5s–1s spent
waiting for a response.

So the question is: what is nginx doing during all that extra time? On
normal (small) requests, the times in the two logs are identical, but
even dialing the flow.js chunk size down to 128KB or 64KB results in a
delay in nginx, and it’s making these files take far too long to upload.
(I can’t just set the chunk size to something tiny like 4KB, because the
overhead of making so many requests makes the uploads slower.)

I’ve tried messing with various configuration options including
proxy_buffer_size and proxy_request_buffering, to no effect.

Any ideas on next steps for how I could begin to diagnose this?

Extra info:

CentOS 7, running on AWS

nginx version: nginx/1.10.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib64/nginx/modules
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx
--group=nginx
--with-http_ssl_module --with-http_realip_module
--with-http_addition_module
--with-http_sub_module --with-http_dav_module --with-http_flv_module
--with-http_mp4_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_random_index_module
--with-http_secure_link_module --with-http_stub_status_module
--with-http_auth_request_module --with-http_xslt_module=dynamic
--with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic
--with-http_perl_module=dynamic
--add-dynamic-module=njs-1c50334fbea6/nginx
--with-threads --with-stream --with-stream_ssl_module
--with-http_slice_module --with-mail --with-mail_ssl_module
--with-file-aio
--with-ipv6 --with-http_v2_module --with-cc-opt='-O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'

Posted at Nginx Forum: