I am pretty much a newbie at nginx and I really need some help.
I am using nginx as a reverse proxy, primarily as a load balancer. Almost all of my files are dynamic. The backend servers are Apache.
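For anyone following along, a minimal nginx load-balancing setup of this kind looks roughly like the sketch below; the backend addresses and the exact directives are illustrative placeholders, not the literal config from this thread:

    upstream apache_backend {
        server 10.0.0.11:80;    # first Apache backend (placeholder address)
        server 10.0.0.12:80;    # second Apache backend (placeholder address)
    }

    server {
        listen 80;

        location / {
            proxy_pass http://apache_backend;        # round-robin across the backends by default
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }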
Here are my httperf results:
single Apache server (1024 MB): 300 requests per second
2x 512 MB Apache servers, 1 nginx server (1024 MB): 300 requests per second
2x 1024 MB Apache servers, 1 nginx server (1024 MB): 300 requests per second
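A typical httperf invocation for runs like these looks roughly as follows; the host, URI, and rate are illustrative placeholders, not the literal commands used here:

    # offer a fixed request rate against the nginx front end
    httperf --server 192.0.2.10 --port 80 --uri /index.php \
            --rate 300 --num-conns 10000 --num-calls 1 --timeout 5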
It seems that my nginx server is the bottleneck, but I can't figure out how to optimize it.
CPU and RAM usage on both the Apache backend servers and the nginx server are minimal, less than 10%.
My goal is to scale up by putting a load balancer in front, but if nginx tops out at the same requests per second as a single Apache server, then there is no point…
Are the Apache and nginx servers separate physical servers, or are they VPSes?
Have you checked whether you're hitting a network bottleneck? If each response is 10 KB, then 300 rps translates to 3 MB/s, or 24 Mbps; add in all the networking overhead and we're talking about roughly 30 Mbps. Maybe you've hit the bandwidth limit.
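A quick way to check this (a generic sketch, not something from the original posts) is to watch NIC throughput on the nginx box while the benchmark runs:

    # 300 req/s * 10 KB * 8 bits = 24,000 kbit/s ≈ 24 Mbps before overhead
    # report per-interface rx/tx throughput every second
    sar -n DEV 1
    # or watch it interactively (eth0 is a placeholder interface name)
    ifstat -i eth0 1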
4x 512 MB Apache servers, 1 nginx server (2048 MB): 700 requests per second
4x 2048 MB Apache servers, 1 nginx server (2048 MB): 700 requests per second
In conjunction with the data given before, this shows that nginx is the bottleneck. But the CPU and memory usage are very, very low. I am using "top" and sar to check activity.
CPU usage and activity on the Apache servers seem to be very low as well.
I am using Autobench to test the nginx server's performance. The numbers I generated above are from when the response times were still very low. Beyond that point, higher request rates start to error out.
So, those numbers weren’t even real ones.
Try something different that has proper concurrency control, like ab.
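For example, an ab run of roughly this shape keeps a fixed number of connections in flight against the front end (host and URI are placeholders):

    # 10,000 requests total, 50 concurrent, against the nginx front end
    ab -n 10000 -c 50 http://192.0.2.10/index.php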
I am using Autobench to test the nginx server's performance. The numbers I generated above are from when the response times were still very low. Beyond that point, higher request rates start to error out.
I.e.:
4x 512 MB Apache servers, 1 nginx server (2048 MB): 700 requests per second
4x 2048 MB Apache servers, 1 nginx server (2048 MB): 700 requests per second
The range of testing was from 500 to 1000 requests per second. Response times and network I/O increased steadily until the rate hit 700; then response times increased dramatically and network I/O stayed flat from that point on.
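The sweep was roughly of this form; the host, URI, and exact parameter values here are illustrative, not the literal command:

    # step the offered rate from 500 to 1000 req/s in increments of 50
    autobench --single_host --host1 192.0.2.10 --uri1 /index.php --port1 80 \
              --low_rate 500 --high_rate 1000 --rate_step 50 \
              --num_call 1 --num_conn 5000 --timeout 5 --file results.tsv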
Thanks for the suggestion.
I ran some tests and used top to monitor my nginx server and Apache backend server. During the whole test I still had about 60 MB free on my nginx server and 80 MB free (85 MB before the test) on my Apache backend server. I could clearly see the free RAM slowly decreasing on the Apache server as the test ran, so I can see it working (aside from the newly spawned processes each taking about 2.7% memory).
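For the memory numbers, something as simple as the following is enough to watch free RAM during a run (a trivial sketch):

    # refresh overall memory usage every second while the benchmark runs
    watch -n 1 free -m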
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Concurrency Level: 50
Time taken for tests: 45.253 seconds
Complete requests: 30000
Failed requests: 15000
(Connect: 0, Receive: 0, Length: 15000, Exceptions: 0)
Write errors: 0
Total transferred: 20655000 bytes
HTML transferred: 12675000 bytes
Requests per second: 662.94 [#/sec] (mean)
Time per request: 75.422 [ms] (mean)
Time per request: 1.508 [ms] (mean, across all concurrent requests)
Transfer rate: 445.73 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 13 151.9 1 9610
Processing: 1 61 206.3 38 7767
Waiting: 1 58 203.6 37 7767
Total: 1 73 254.0 40 9613
Percentage of the requests served within a certain time (ms)
50% 40
66% 40
75% 40
80% 40
90% 76
95% 280
98% 759
99% 761
100% 9613 (longest request)
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Concurrency Level: 40
Time taken for tests: 8.322 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 8440000 bytes
HTML transferred: 4230000 bytes
Requests per second: 1201.57 [#/sec] (mean)
Time per request: 33.290 [ms] (mean)
Time per request: 0.832 [ms] (mean, across all concurrent requests)
Transfer rate: 990.36 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 4
Processing: 5 33 5.3 30 79
Waiting: 5 33 5.1 30 79
Total: 6 33 5.3 30 80
Percentage of the requests served within a certain time (ms)
50% 30
66% 36
75% 38
80% 38
90% 40
95% 41
98% 41
99% 42
100% 80 (longest request)
Hmm, that makes sense, but in my case I can't figure out why I am seeing this:
i.e., a 1 GB nginx server handles 300 requests per second, but a 2 GB nginx server handles 700 requests per second. Everything else in the setup is identical. The weird part is that the RAM usage during these tests is very low.
So my question is: why would increasing the total system RAM affect the server's performance if total RAM usage is very low in the first place?
None. I just wanted to point out that the results of performance tests depend heavily on your location and client machine. Without any special settings I could easily beat w3elf's results.
When doing performance tests, always check:
- Is the client powerful enough? (packet rate, TCP settings, etc.)
- Can your connection handle the intended traffic? (not only in capacity, but also in terms of latency and packet rate → problems with link saturation, etc.)
- Which route to the server is used?
- Are the results different when using another client/location?
Always double-check these if you are not testing over a direct connection.
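A rough sketch of the kind of client-side checks meant above (the host is a placeholder):

    # latency and route from the test client to the server
    ping -c 10 192.0.2.10
    traceroute 192.0.2.10
    # client-side limits that can cap the achievable request rate
    ulimit -n
    sysctl net.ipv4.ip_local_port_range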