Forum: NGINX - Nginx Performance as a Reverse Proxy

Niall Gallagher - Yieldbroker (Guest)
on 2012-10-26 06:37
(Received via mailing list)
Hi,

We have been doing some testing with Nginx as a reverse proxy. We have
been comparing it to a number of solutions which it easily beats, like
IIS and Apache with mod_proxy. However, as an experiment we have
been comparing it to an adapted NIO server written in Java, which seems
to be outperforming Nginx in the reverse proxy role by a factor of 3.
We are convinced our configuration is wrong. Both run on the same
box (at different times) with the same sysctl settings (see below). We
also saw latency spikes, up to 3 seconds per request at times, and some at
10 seconds, over a 1 million request test with 1000 concurrent clients.

We are using a fairly straightforward configuration for Nginx. Since we
have two processors on the box we tried worker_processes of 4 with
worker_connections of 6000, then we tried worker_processes of 40 with
worker_connections of 5000. No change. We need to be able to support
responsive Ajax requests with strategies like HTTP streaming and long
polling in our setup.

Any ideas what we can do to boost our throughput and latency?

[root@dc1dmzngx02 apachebench]# uname -a
Linux dc1dmzngx02 2.6.32-220.13.1.el6.x86_64 #1 SMP Tue Apr 17 23:56:34
BST 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@dc1dmzngx02 apachebench]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.


# High perf config
net.core.somaxconn = 12048
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 20000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_orphans = 131072

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

Thanks,
Niall
Sergey Budnevitch (Guest)
on 2012-10-26 14:05
(Received via mailing list)
On 26 Oct 2012, at 08:36, Niall Gallagher - Yieldbroker
<Niall.Gallagher@yieldbroker.com> wrote:

> Hi,
>
> We have been doing some testing with Nginx as a reverse proxy. We have been
> comparing it to a number of solutions which it easily beats, like IIS and Apache
> with mod_proxy. However, as an experiment we have been comparing it to an
> adapted NIO server written in Java, which seems to be outperforming Nginx in the
> reverse proxy role by a factor of 3. We are convinced our configuration is
> wrong. Both run on the same box (at different times) with the same sysctl
> settings (see below). We also saw latency spikes, up to 3 seconds per request at
> times, and some at 10 seconds, over a 1 million request test with 1000
> concurrent clients.
>
> We are using a fairly straightforward configuration for Nginx. Since we have
> two processors on the box we tried worker_processes of 4 with worker_connections
> of 6000, then we tried worker_processes of 40 with worker_connections of 5000.
> No change. We need to be able to support responsive Ajax requests with
> strategies like HTTP streaming and long polling in our setup.
>
> Any ideas what we can do to boost our throughput and latency?

Buffers. For maximum performance nginx should not write anything to disk
(or read it back) while proxying. Check your average request and response
sizes and tune the buffer sizes accordingly (see the proxy_buffers,
proxy_buffer_size and proxy_max_temp_file_size documentation).
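For example, a sketch of what such buffer tuning might look like (the sizes here are illustrative placeholders, not values recommended in this thread; pick them from your own measured request/response sizes):

```nginx
location / {
    proxy_pass http://backend;

    # Buffer for the first part of the upstream response (the headers).
    proxy_buffer_size 8k;

    # Number and size of buffers holding one connection's response body.
    # Size these so a typical response fits entirely in memory.
    proxy_buffers 16 16k;

    # Disallow spilling oversized responses to temporary files on disk.
    proxy_max_temp_file_size 0;
}
```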
Niall Gallagher - Yieldbroker (Guest)
on 2012-10-28 23:26
(Received via mailing list)
According to the documentation for proxy_buffering:

"For Comet applications based on long-polling it is important to set
proxy_buffering to off, otherwise the asynchronous response is buffered
and the Comet does not work."
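For our long-polling endpoint that would mean something like the following (the /comet path and backend upstream name are hypothetical examples, not from this thread):

```nginx
# Hypothetical long-polling endpoint: with buffering off, each chunk
# received from the upstream is passed to the client immediately
# instead of being accumulated in proxy buffers.
location /comet {
    proxy_pass http://backend;
    proxy_buffering off;
}
```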

I have tried the following and am now getting better results; however, Nginx
is still being outperformed by the Java HTTP proxy by about 10%-15%.

worker_processes  2;
worker_cpu_affinity 01 10;
worker_rlimit_nofile 20000;

However spikes of up to 20 seconds are still frequent under high load.
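For reference, directives like these sit in the main (top-level) context of nginx.conf, alongside an events block carrying the worker_connections setting mentioned earlier; a sketch of how they fit together (the worker_connections value here is illustrative):

```nginx
# Main context: two workers, each pinned to one of the two CPUs,
# with a raised per-worker open-file-descriptor limit.
worker_processes     2;
worker_cpu_affinity  01 10;
worker_rlimit_nofile 20000;

events {
    # Per-worker connection limit; since a proxied request consumes a
    # client-side and an upstream-side descriptor, keep this comfortably
    # below worker_rlimit_nofile.
    worker_connections 10000;
}
```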
Sergey Budnevitch (Guest)
on 2012-10-29 10:12
(Received via mailing list)
On 29 Oct 2012, at 02:26, Niall Gallagher - Yieldbroker
<Niall.Gallagher@yieldbroker.com> wrote:

> According to the documentation for proxy_buffering:
>
> "For Comet applications based on long-polling it is important to set
> proxy_buffering to off, otherwise the asynchronous response is buffered and the
> Comet does not work."
>
> I have tried the following and am now getting better results; however, Nginx
> is still being outperformed by the Java HTTP proxy by about 10%-15%.

Try to set
postpone_output 0;
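If I understand correctly, that directive is valid in the http, server or location context; combined with the unbuffered long-polling setup it might look like this (a sketch, with a hypothetical backend upstream):

```nginx
location /comet {
    proxy_pass      http://backend;
    proxy_buffering off;

    # Send data to the client as soon as it is available instead of
    # postponing output until the threshold is reached (the default
    # threshold is 1460 bytes, roughly one TCP segment).
    postpone_output 0;
}
```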