Optimal nginx settings for websockets sending images

Hello guys

The recent nginx 1.3.13 websocket support is fantastic! Big thanks to the
nginx devs, it works like a charm.

I only have performance issues. Sending images through websockets turns
out to be difficult and slow. I have a website sending 5 images per
second to the server.

Sometimes I get warnings like “an upstream response is buffered to a
temporary file”, and sometimes it lags and the server isn’t that fast.

I’m not sure my settings are optimal for this scenario. Below you will
find extracts of my nginx conf files. Maybe you can spot some mistakes or
have suggestions?

Thanks,
Michael

nginx.conf:

user www-data;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 1536;
}

http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:8m
                     max_size=1400m inactive=500m;
    proxy_temp_path /var/tmp;

    proxy_buffers 8 2m;
    proxy_buffer_size 10m;
    proxy_busy_buffers_size 10m;

    proxy_cache one;
    proxy_cache_key "$request_uri|$request_body";

    # Sendfile copies data between one FD and another from within the
    # kernel. More efficient than read() + write(), which require
    # transferring data to and from user space.
    sendfile on;

    # tcp_nopush causes nginx to attempt to send its HTTP response head
    # in one packet, instead of using partial frames. This is useful for
    # prepending headers before calling sendfile, or for throughput
    # optimization.
    tcp_nopush on;

    # on = don't buffer data sends (disable Nagle's algorithm). Good for
    # sending frequent small bursts of data in real time.
    # Set to off here because of large bursts of data.
    tcp_nodelay off;

    # Timeout for keep-alive connections. The server will close
    # connections after this time.
    keepalive_timeout 30;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log notice;

    gzip on;
    gzip_min_length 10240;
    gzip_disable "MSIE [1-6]\.";

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json
               application/x-javascript text/xml application/xml
               application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

/sites-available/other_site.conf:

upstream other_site_upstream {
    server 127.0.0.1:4443;
}

server {

    location / {
        proxy_next_upstream error timeout http_502;
        proxy_pass https://other_site_upstream;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_http_version 1.1;
        proxy_redirect off;
    }
}

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,236601,236601#msg-236601

Hello!

On Mon, Feb 25, 2013 at 10:15:55PM -0500, michael.heuberger wrote:

The recent nginx 1.3.13 websocket support is fantastic! Big thanks to the
nginx devs, it works like a charm.

Good to hear. :slight_smile:

I only have performance issues. Sending images through websockets turns out
to be difficult and slow. I have a website sending 5 images per second to
the server.

Sometimes I have warnings like “an upstream response is buffered to a
temporary file”, then sometimes it’s lagging and the server isn’t that
fast.

These messages are not related to websocket connections, as
websocket connections don’t do any disk buffering and only use
in-memory buffers. (More specifically, a connection uses two
in-memory buffers of proxy_buffer_size each: one for
backend-to-client data, and one for client-to-backend data.)

Given the above, I would suppose that you actually have
performance problems unrelated to websockets.
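As an illustration of that point, only proxy_buffer_size matters for the
websocket connections themselves; a minimal sketch (the location path and
size here are assumptions, not from the thread):

    # Sketch: after the Upgrade handshake, nginx uses two in-memory
    # buffers of proxy_buffer_size per connection; proxy_buffers and
    # disk buffering do not apply to websocket traffic.
    location /socket/ {
        proxy_pass http://other_site_upstream;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffer_size 64k;  # per direction, per connection
    }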

I’m not sure if my settings for this scenario are optimal. Below you will
find extracts of my nginx conf files. Maybe you spot some mistakes or have
suggestions?

[…]

    proxy_buffers 8 2m;
    proxy_buffer_size 10m;
    proxy_busy_buffers_size 10m;

The buffers used look huge, make sure you have enough memory.

    proxy_cache one;
    proxy_cache_key "$request_uri|$request_body";

Using the request body as a cache key isn’t really a good idea unless
all request bodies are known to be small.

[…]


Maxim D.
http://nginx.com/support.html

PS: happy to send the whole code by email if that’s better?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,236601,236753#msg-236753

Thanks man :slight_smile:

    proxy_buffers 8 2m;
    proxy_buffer_size 10m;
    proxy_busy_buffers_size 10m;

The buffers used look huge, make sure you have enough memory.

Mmmhhh, do you think I should remove these and trust nginx’s default
values for these buffers?

    proxy_cache one;
    proxy_cache_key "$request_uri|$request_body";

Using the request body as a cache key isn’t really a good idea unless
all request bodies are known to be small.

Ok, I changed that to:
proxy_cache_key “$scheme$host$request_uri”;

I also made a few additions under location /:

proxy_cache_valid 200 302 304 10m;
proxy_cache_valid 301 1h;
proxy_cache_valid any 1m;

proxy_next_upstream error timeout invalid_header http_500 http_502
                    http_503 http_504 http_404;

Do you think these are good and justified?

Unfortunately I’m seeing these warnings now:
“an upstream response is buffered to a temporary file”

Any hints why? Help is much appreciated

Cheers
Michael

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,236601,236752#msg-236752

Hello!

On Thu, Feb 28, 2013 at 11:06:05PM -0500, michael.heuberger wrote:

Thanks man :slight_smile:

    proxy_buffers 8 2m;
    proxy_buffer_size 10m;
    proxy_busy_buffers_size 10m;

The buffers used look huge, make sure you have enough memory.

Mmmhhh, do you think I should remove these and trust nginx’s default values
for these buffers?

You should make sure you have enough memory for the buffers
configured. If your system starts swapping, there will be an
obvious performance degradation compared to smaller buffers.

The default buffers indeed might be better unless you have good
reasons for the buffer sizes you set; or you might start with the
default sizes and tune them until you are happy with the result.
The exact optimal sizes depend on the particular use case.

proxy_cache_valid 200 302 304 10m;
proxy_cache_valid 301 1h;
proxy_cache_valid any 1m;

proxy_next_upstream error timeout invalid_header http_500 http_502
                    http_503 http_504 http_404;

Do you think these are good and justified?

This depends on what and how long you want to cache and how you
would like to handle upstream errors.

Unfortunately I’m seeing these warnings now:
“an upstream response is buffered to a temporary file”

Any hints why? Help is much appreciated

The message indicates that an (uncacheable) response was buffered to
a temporary file. It doesn’t indicate a problem per se, but
might be useful for tracking down sources of I/O problems. It might
also appear as a side effect of other problems - e.g. if you have
network issues and clients just can’t download the requested files
fast enough.

If such messages are rare enough, they may be just fine. Otherwise
they might indicate that you should configure bigger buffers (if you
have enough memory), or consider disabling disk buffering.

Try reading here for more information:

http://nginx.org/r/proxy_buffering
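If disk buffering turns out to be the culprit, a minimal sketch of the
relevant directives (values illustrative only, not recommendations):

    location / {
        proxy_pass http://other_site_upstream;
        # Cap temp-file buffering; 0 disables temp files entirely, so
        # nginx only uses the configured in-memory buffers.
        proxy_max_temp_file_size 0;
        # Or turn off response buffering altogether:
        # proxy_buffering off;
    }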


Maxim D.
http://nginx.org/en/donation.html

Hello!

On Fri, Mar 01, 2013 at 11:54:06PM -0500, michael.heuberger wrote:

how can i find out how far i should increase the buffers until these
warnings are gone? how would you do this?

As long as the files are big, it probably doesn’t make sense to even
try to eliminate the warnings by increasing buffers. Instead, you
have to derive buffer sizes from the amount of memory you want to use
for buffering, keeping in mind that the maximum memory used will be
(proxy_buffer_size + proxy_buffers) * worker_processes *
worker_connections.

Note: as long as you have other activity on the host in question,
including nginx with cache or even with just disk buffering, you
likely want to keep at least part of the memory free - e.g., for VM
cache.
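For example, plugging the settings posted earlier into that formula gives
the worst case (assuming every connection actually fills its buffers):

    # proxy_buffer_size 10m + proxy_buffers 8 x 2m = 26m per connection
    # 26m * worker_processes 2 * worker_connections 1536 ≈ 78 GB worst case
    # => far beyond typical RAM, so much smaller values (or the defaults)
    #    are the safer starting point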

second problem:
when i made changes in the css file, uploaded that, then the nginx server
was still serving the older version. because of “proxy_cache_valid 200 302
304 10m;” - how can i tell nginx to refresh cache asap when a new file was
uploaded?

While in theory you may remove/refresh a file in the nginx cache, it
won’t really work in real life, as the same file might be cached
at various other layers, including the client browser cache.

The correct approach to the problem is to use unique links, e.g. with
a version number in them. Something like “/css/file.css?42”, where
“42” is a number you bump each time you make significant changes to
the file, will do the trick.
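On the nginx side, such versioned URLs can then be cached aggressively; a
sketch (the location pattern and expiry are assumptions, not from the
thread):

    # Hypothetical: once every deploy changes the URL, static assets
    # can safely carry a far-future expiry.
    location ~* \.(css|js)$ {
        expires max;
        add_header Cache-Control "public";
    }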


Maxim D.
http://nginx.org/en/donation.html

Great response Maxim, you’re absolutely right here. Will do all that.

No further questions :slight_smile:

Cheers
Michael

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,236601,236885#msg-236885

thank you so much maxim

i have read the documentation at
http://nginx.org/en/docs/http/ngx_http_proxy_module.html and am trying to
understand all that. it’s not easy …

i’m serving video files (mp4 or webm). that’s where these “an upstream
response is buffered to a temporary file” warnings occur.

how can i find out how far i should increase the buffers until these
warnings are gone? how would you do this?

second problem:
when i made changes in the css file, uploaded that, then the nginx server
was still serving the older version, because of “proxy_cache_valid 200 302
304 10m;” - how can i tell nginx to refresh the cache asap when a new file
was uploaded?

cheers
michael

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,236601,236812#msg-236812
