Nginx 1.8 proxying to Netty - timeout from upstream

I have set up Nginx as a proxy to a Netty server. I am seeing a timeout from
upstream, i.e. Netty. The consequence of this timeout is that the JSON
payload response is truncated (as seen in the browser developer tools).

2015/07/21 05:08:56 [error] 6#0: *19 upstream prematurely closed connection
while reading upstream, client: 198.147.191.15, server: sbox-wus-ui.cloudapp.net,
request: "GET /api/v1/entities/DEVICE HTTP/1.1",
upstream: "http://10.0.3.4:8080/api/v1/entities/DEVICE",
host: "sbox-wus-ui.cloudapp.net",
referrer: "https://sbox-wus-ui.cloudapp.net/home.html"

So, yes, I initially thought that this was a Netty issue. However, when I
make the same API call against Netty directly, I am able to retrieve the
full JSON payload.

The JSON response message size is about 13K. The JSON response I see on the
Nginx side is 10K. After spending some time reading up on the Nginx
configuration parameters, I added client_body_temp_path and proxy_temp_path,
but to no avail. Any help is really appreciated.
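For what it's worth, the direct-vs-proxied size comparison can be scripted; the IP, hostname, and path below are taken from the error log line in this thread, so they are assumptions about the environment and should be adjusted:

```shell
#!/bin/sh
# Compare the payload size fetched directly from Netty vs through Nginx.
# Host/IP and path come from the error log quoted above (adjust as needed).
direct=$(curl -s  "http://10.0.3.4:8080/api/v1/entities/DEVICE" | wc -c)
proxied=$(curl -sk "https://sbox-wus-ui.cloudapp.net/api/v1/entities/DEVICE" | wc -c)
echo "direct=$direct proxied=$proxied"
```

If the two byte counts differ (here roughly 13K vs 10K), the truncation is happening somewhere between nginx reading the upstream and writing to the client.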

Nginx details:

nginx version: nginx/1.8.0
built by gcc 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx
--with-http_ssl_module --with-http_realip_module --with-http_addition_module
--with-http_sub_module --with-http_dav_module --with-http_flv_module
--with-http_mp4_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_random_index_module
--with-http_secure_link_module --with-http_stub_status_module
--with-http_auth_request_module --with-mail --with-mail_ssl_module
--with-file-aio --with-ipv6 --with-http_spdy_module
--with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'


# For more information on configuration, see:
#   * Official English Documentation: nginx documentation
#   * Official Russian Documentation: nginx documentation (Russian)

user nginx;
worker_processes 1;
daemon off;

error_log {{logDir}}/error.log;

pid /run/nginx.pid;

events {
worker_connections 1024;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

access_log  {{logDir}}/access.log  main;

sendfile        on;
#tcp_nopush     on;

#keepalive_timeout  0;
keepalive_timeout  65;

chunked_transfer_encoding off;

# Disable constraints on potentially large uploads resulting in HTTP 413
client_max_body_size 0;

#gzip  on;

index   index.html index.htm;

upstream netty {
    {% for netty in servers %}
    server {{netty}};
    {% endfor %}
}

# Load modular configuration files from the /etc/nginx/conf.d directory.
# See Core functionality for more information.
include /etc/nginx/conf.d/*.conf;

server {
    listen      80;
    server_name {{serverName}};
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen       443 ssl;
    server_name  {{serverName}};

    ssl_certificate     /data/nginx/cert/{{crtFile}};
    ssl_certificate_key /data/nginx/cert/{{keyFile}};

    root         /usr/share/nginx/html;

    #charset koi8-r;

    #access_log  /var/log/nginx/host.access.log  main;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    # The only resource available to check health
    location /health {
        root  /apps/nginx/f2;
        index index.html;
    }

    location / {
        client_body_buffer_size    128k;
        client_body_temp_path      /apps/nginx/client_body_temp;

        proxy_connect_timeout      90;
        proxy_send_timeout         90;
        proxy_read_timeout         90;

        proxy_buffer_size          4k;
        proxy_buffers              4 32k;
        proxy_busy_buffers_size    64k;
        proxy_temp_file_write_size 64k;
        proxy_temp_path            /apps/nginx/proxy_temp;

        root  /apps/nginx/f2;
        index index.html;
        {% if basicAuth == "true" %}
        auth_basic            "Restricted";
        auth_basic_user_file  /data/nginx/cert/htpasswd;
        {% endif %}
    }

    location /ui/ {
        proxy_pass http://netty;
        {% if basicAuth == "true" %}
        auth_basic            "Restricted";
        auth_basic_user_file  /data/nginx/cert/htpasswd;
        {% endif %}
    }

    location /api/ {
        proxy_pass http://netty;
    }

    location /sales/ {
        root  /apps/nginx/f2;
        index index.html;
        {% if basicAuth == "true" %}
        auth_basic            "Restricted";
        auth_basic_user_file  /data/nginx/cert/htpasswd;
        {% endif %}
    }


    # redirect server error pages to the static page /40x.html
    #
    error_page  404              /404.html;
    location = /40x.html {
    }

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
    }
}

}

Posted at Nginx Forum:

Hello!

On Tue, Jul 21, 2015 at 03:33:13PM -0400, smuthali wrote:

> So, yes, I initially thought that this was a Netty issue. However, when I
> make the same API call against Netty directly, I am able to retrieve the
> full JSON payload.
>
> The JSON response message size is about 13K. The JSON response I see on
> the Nginx side is 10K. After spending some time reading up on the Nginx
> configuration parameters, I added client_body_temp_path and
> proxy_temp_path, but to no avail. Any help is really appreciated.

The message suggests this is a backend problem. If you don't see
the problem when talking directly to the backend, it may be
because the problem only appears under some specific conditions
triggered by nginx, e.g., only when HTTP/1.0 is used.
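If HTTP/1.0 to the upstream does turn out to be the trigger, one thing worth trying (an assumption, not a confirmed fix for this thread) is forcing HTTP/1.1 on the proxied connection; these directives exist in nginx 1.8:

```nginx
location /api/ {
    proxy_pass         http://netty;
    # Speak HTTP/1.1 to the upstream instead of the default HTTP/1.0
    proxy_http_version 1.1;
    # Clear the Connection header so upstream keepalive is possible
    proxy_set_header   Connection "";
}
```

Some backends handle chunked responses and connection teardown differently under HTTP/1.0, which can surface exactly as a prematurely closed upstream connection.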

If in doubt, try looking in nginx debug log and/or tcpdump to find
out what’s going on on the wire.
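Note that the debug log requires an nginx binary compiled with --with-debug, which does not appear in the configure arguments quoted above, so a debug-enabled build may be needed first. The logging itself is a one-line change:

```nginx
# Requires an nginx binary built with --with-debug
error_log /var/log/nginx/debug.log debug;
```

On the wire side, a capture filtered on the upstream address from the error log (interface name is an assumption), e.g. `tcpdump -i eth0 -s 0 -w upstream.pcap host 10.0.3.4 and port 8080`, records the nginx-to-Netty traffic for later inspection.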


Maxim D.
http://nginx.org/

Maxim, many thanks for the reply. I ran a tcpdump on the Nginx side for
port 8080 with Netty as the destination. I did not see anything out of the
ordinary in the packet capture. Essentially, I see an HTTP 200 OK response,
and the connections are torn down as expected. I will run another packet
capture just in case I overlooked something.

Thanks again.

Satish
