Fwd: nginx performance on Amazon EC2

Hello,

I am running a Django app with nginx & uWSGI on an Amazon EC2 instance
and on a VMware machine of almost the same size as the EC2 one. Here's
how I run uwsgi:

sudo uwsgi -b 25000 --chdir=/www/python/apps/pyapp \
    --module=wsgi:application --env DJANGO_SETTINGS_MODULE=settings \
    --socket=/tmp/pyapp.socket --cheaper=8 --processes=16 --harakiri=10 \
    --max-requests=5000 --vacuum --master --pidfile=/tmp/pyapp-master.pid \
    --uid=220 --gid=499
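(For reference, the same invocation as an ini file, which is easier to keep readable; this is a sketch using the standard uWSGI option names, where -b corresponds to buffer-size:

[uwsgi]
buffer-size  = 25000
chdir        = /www/python/apps/pyapp
module       = wsgi:application
env          = DJANGO_SETTINGS_MODULE=settings
socket       = /tmp/pyapp.socket
cheaper      = 8
processes    = 16
harakiri     = 10
max-requests = 5000
vacuum       = true
master       = true
pidfile      = /tmp/pyapp-master.pid
uid          = 220
gid          = 499

started with "sudo uwsgi --ini pyapp.ini".)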

& nginx configuration:

server {
    listen 80;
    server_name test.com;

    root /www/python/apps/pyapp/;

    access_log /var/log/nginx/test.com.access.log;
    error_log /var/log/nginx/test.com.error.log;

    location /static/ {
        alias /www/python/apps/pyapp/static/;
        expires 30d;
    }

    location /media/ {
        alias /www/python/apps/pyapp/media/;
        expires 30d;
    }

    location / {
        uwsgi_pass unix:///tmp/pyapp.socket;
        include uwsgi_params;
        proxy_read_timeout 120;
    }

    # what to serve if upstream is not available or crashes
    #error_page 500 502 503 504 /media/50x.html;
}

Here comes the problem. When running ab (ApacheBench) on both machines,
I get the following results (the VMware machine being almost the same
size as the EC2 small instance):

Amazon EC2:

nginx version: nginx/1.2.6
uwsgi version: 1.4.5

Concurrency Level:      500
Time taken for tests:   21.954 seconds
Complete requests:      5000
Failed requests:        126
   (Connect: 0, Receive: 0, Length: 126, Exceptions: 0)
Write errors:           0
Non-2xx responses:      4874
Total transferred:      4142182 bytes
HTML transferred:       3384914 bytes
Requests per second:    227.75 [#/sec] (mean)
Time per request:       2195.384 [ms] (mean)
Time per request:       4.391 [ms] (mean, across all concurrent requests)
Transfer rate:          184.25 [Kbytes/sec] received

VMware machine (CentOS 6):

nginx version: nginx/1.0.15
uwsgi version: 1.4.5

Concurrency Level:      1000
Time taken for tests:   1.094 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      30190000 bytes
HTML transferred:       28930000 bytes
Requests per second:    4568.73 [#/sec] (mean)
Time per request:       218.879 [ms] (mean)
Time per request:       0.219 [ms] (mean, across all concurrent requests)
Transfer rate:          26939.42 [Kbytes/sec] received

As you can see, requests on the EC2 instance fail with either timeout
errors or "Client prematurely disconnected", whereas on my VMware
machine all requests go through with no problems. The other thing is
the difference in requests per second I am getting on the two machines.

What am I doing wrong on EC2?

On Thu, Feb 21, 2013 at 1:43 AM, Rakan Alhneiti wrote:

nginx mailing list
[email protected]
nginx Info Page

Is there any data in the nginx error logs?

Really you would have to do this test on two Amazon servers and then
see if one was more performant. Then you could assume something is
wrong.

Based on the configs, everything looks right.

The fact that your VMware server performs better really isn't saying
much; it's hard to compare the two directly, and I would presume many
other factors are at play.

Is the VMware box on a local network?

Do you run the benchmark program on the same virtual machine as the web
stack? For conclusive results, you certainly don't want ab, nginx, and
all the other entities involved competing for the same CPU.

If yes, try running ab from a different machine on the same network
(make sure your network is not the bottleneck here) and compare your
results again.

Cheers,

Jan-Philip

Hello,

Yes, my VMware machine is running on my local network. I am referring
to it only to show that it performs better and no issues appear there.
I tried both an Amazon EC2 small instance and a Linode 2048 instance,
and both give the exact same result.

When running ApacheBench, I can see the following in my nginx error
log:

[error] 4167#0: *27229 connect() to unix:///tmp/pyapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 127.0.0.1, server: mysite.com

and

upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: mysite.com

Other than that, there's nothing in the Django error log, but here's
what I can see in uWSGI's daemon log:

Wed Feb 20 21:59:51 2013 - writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json (127.0.0.1)
[pid: 4112|app: 0|req: 34/644] 127.0.0.1 () {30 vars in 415 bytes} [Wed Feb 20 21:59:42 2013] GET /api/nodes/mostviewed/9/?format=json => generated 0 bytes in 8904 msecs (HTTP/1.0 200) 3 headers in 0 bytes (0 switches on core 0)
Wed Feb 20 21:59:51 2013 - writev(): Broken pipe [proto/uwsgi.c line 124] during GET /api/nodes/mostviewed/9/?format=json (127.0.0.1)
[pid: 4117|app: 0|req: 1/645] 127.0.0.1 () {30 vars in 415 bytes} [Wed Feb 20 21:59:46 2013] GET /api/nodes/mostviewed/9/?format=json => generated 0 bytes in 5021 msecs (HTTP/1.0 200) 3 headers in 0 bytes (0 switches on core 0)

and stuff like:

Wed Feb 20 20:01:01 2013 - uWSGI worker 1 screams: UAAAAAAH my master disconnected: i will kill myself !!!

What do you guys think?

Thanks a lot.

Best Regards,

Rakan AlHneiti
Find me on the internet:
Facebook: http://www.facebook.com/rakan.alhneiti
Twitter: @rakanalh https://twitter.com/rakanalh
LinkedIn: http://www.linkedin.com/in/rakanalhneiti
----- GTalk: [email protected]
----- Mobile: +962-798-910 990

hello,

is the setup of your vmware machine similar to your ec2 instance? i'm
talking especially about RAM/CPU power here.

do you have monitoring on your instances, checking for load, RAM
usage, iowait etc.?

maybe you should start your ec2 test with fewer than 500 concurrent
connections and work up to the point where the instance starts to fail.

[error] 4167#0: *27229 connect() to unix:///tmp/pyapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 127.0.0.1, server: mysite.com

looks like your django app shuts down or isn't capable of handling
that amount of connections.
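fwiw, "Resource temporarily unavailable" on connect() to a unix socket
is also what you get when the uwsgi listen queue fills up - it defaults
to 100 entries, and your ab run opens 500 connections at once. a sketch
of what i would try (a guess, not a confirmed fix):

```ini
; in the [uwsgi] section: raise the socket listen backlog (default 100)
listen = 1024
```

the kernel has to allow a backlog that large too, i.e. raise
net.core.somaxconn (default 128 on most distros) to match.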

oh, and you shouldn't expect a distant instance to have the same
performance as a machine on your local network.

now you only need to find the bottleneck :-)

maybe it's better to run ab from a machine on the same network instead
of running it on the same machine, especially with only one core.


Hello,

Thank you all for your support.

Mex:
1) Is the setup of your VMware machine similar to your EC2 instance,
especially RAM/CPU power?

Yes, I've set up the VMware machine with 1.7 GB of RAM and 1 core,
just like the small EC2 instance.

2) Do you have monitoring on your instances, checking for load, RAM
usage, iowait etc.?

What goes on here is that my CPU usage per uWSGI process is around
13%, and the server load starts to get much higher. MySQL operations
usually take 6-8 ms; they are optimized and not slowing the app down.
Once the load starts rising, the app slows down and more connections
start to fail.
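In other words, a back-of-the-envelope check (using the 16 processes
from my uwsgi command, and assuming the ~13% per-process figure holds
under load):

```python
# 16 uWSGI workers each using ~13% CPU demand more than two full cores,
# while the small instance only has one.
workers = 16            # from --processes=16
cpu_per_worker = 0.13   # observed per-process CPU usage
demand = workers * cpu_per_worker
print(f"{demand:.0%} of one core")  # 208% of one core
```

So under load the box may simply be oversubscribed on CPU.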

Jan-Philip:
Do you run the benchmark program on the same virtual machine as the
web stack?

Yes, on both machines I ran ab on the machine itself, so that I was
profiling the app from within, taking away any network latency that
could affect the response rate. I also tried running the test from a
Linode machine against an EC2 instance... same results.

Again, thank you for your support; I am persevering on this until I
find out what the issue is.

Best Regards,

Rakan AlHneiti

On Thu, Feb 21, 2013 at 11:00 AM, Jan-Philip Gehrcke wrote:

According to my profiling, my requests take around 0.065 seconds to
execute, so I am really unsure what happens after the response leaves
the Django side and uWSGI starts handling it with nginx. Is there a way
to do some profiling at the uWSGI or nginx level?
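As a sanity check on the numbers (a rough model, assuming all 16
workers stay busy and each request really takes ~0.065 s end to end):

```python
# Theoretical throughput ceiling: each busy worker completes one
# request every `seconds_per_request` seconds.
workers = 16                  # from --processes=16
seconds_per_request = 0.065   # measured per-request time in Django
ceiling = workers / seconds_per_request
print(f"~{ceiling:.0f} requests/second")  # ~246 requests/second
```

That ceiling is suspiciously close to the ~227 req/s ab reported on
EC2, which would point at the workers themselves being saturated rather
than at nginx.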

Best Regards,

Rakan AlHneiti