Nginx as a Linux Service

Greetings,

Can nginx – running on one server – deliver 1000 requests
per second without “bogging down” and pushing more and more
requests into a queue?

Here’s my reason for asking:

I’m designing a live auction website that needs to respond
to 500-1000 requests per second for about an hour. Each
request will post only 20 bytes of data, so the volume being
posted is low. Nevertheless, the HTTP headers still need to
be parsed, and they will have far more volume than the
actual POST data, so it seems I should do everything I
can to reduce the HTTP header overhead. This would
substantially reduce the load and speed up nginx’s response
times, correct?

I’m wondering if nginx can use “Web Sockets” technology to
eliminate the HTTP headers on every request after the first,
and maintain a persistent connection with the browser so data
can be passed back and forth faster?

http://www.w3.org/html/wg/html5/#network
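
From what I’ve read, the browser side would look something
like this (just a sketch based on the draft API; the
ws://auction.example/bids endpoint and updateAuctionDisplay
function are made up):

    // Open one persistent connection; after the initial HTTP
    // handshake, each bid goes out as a small frame with no
    // per-request headers.
    var socket = new WebSocket("ws://auction.example/bids");

    socket.onopen = function () {
        socket.send("item=42;bid=105.00");   // ~20 bytes, no headers
    };

    socket.onmessage = function (event) {
        // the server pushes auction updates down the same connection
        updateAuctionDisplay(event.data);
    };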

If this is not possible, can you tell me the best way to
reduce the HTTP header overhead so I can make sure that
each of those 1000 requests per second is responded to as
fast as it comes in? Or am I concerned about something
that’s a non-issue, perhaps because nginx is so blazing
fast that it can handle this kind of load without breaking
a sweat?

The worst problem I can imagine is that during one of these
live auctions the server will begin to respond slowly and
push requests into a queue. If this happens, bidders will
not receive timely updates from the server and then the
whole service loses credibility.

If Web Sockets is not an option, perhaps using JavaScript in
the visitors’ browsers to send requests via XMLHttpRequest
is the next-best option for reducing overhead?
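
Something along these lines is what I have in mind (a rough
sketch; the /bid URL, the payload format, and the
updateAuctionDisplay function are just placeholders):

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/bid", true);           // asynchronous POST
    xhr.setRequestHeader("Content-Type", "text/plain");
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // the server replies with the current high bid
            updateAuctionDisplay(xhr.responseText);
        }
    };
    xhr.send("item=42;bid=105.00");           // ~20 bytes of POST data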

Thanks for any insights you can provide to help me decide
whether or not nginx might be appropriate for my needs.

Best,
Owkaye

How did you install it? Using rpm or from source?

Regards,

Glen L.


Many thanks to you two. I ended up installing the start-stop-daemon,
since I already had a functional nginx.

Now I’m a little worried about having to convince my company’s customers
to let me install all these third-party applications and scripts on
their production servers. Indeed, my company sells Oracle Forms 6i and
Oracle database-based applications to large corporations and my job was
to identify the next technology we can use to step up to Web 2.0-ish
applications. Having chosen a Flex front end and a Ruby on
Rails back end against an existing Oracle database, I realize
lots of IT people will be wary of having Ruby, numerous gems
including Rails, the Oracle adapter, Mongrels, nginx,
start-stop-daemon, and Monit installed on their servers…

Well, I’m still experimenting, but I’ll have to come up with a
technically and politically correct setup sometime soon!

Cheers,

Chris.

luben karavelov wrote:

Each request will post only 20 bytes of data so the volume
being posted is low. [...] The worst problem I can imagine is
that during one of these live auctions the server will begin
to respond slowly and push requests into a queue. If this
happens, bidders will not receive timely updates from the
server and then the whole service loses credibility.

If Web Sockets is not an option, perhaps using Javascript in the
visitor’s browsers to send requests via XMLHttpRequest is the
next-best option for reducing overhead?

I suspect that nginx would be able to handle this without a problem,
although I would suggest that the best way to find out is to do some
load testing. How are the responses built? If that involves database
lookups, for example, it’s more likely that your bottleneck will be
there. Incidentally, I don’t think XMLHttpRequest will help you - from
the server’s point of view, it’s just another request, albeit one
serving a smaller amount of data.
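
If you want to script the load test yourself, something like
this Node.js sketch is one way to generate sustained POST
traffic (the host, path, and payload are placeholders; tools
like ApacheBench do the same job):

    // Fire TOTAL requests, keeping CONCURRENCY in flight, and
    // report the completed requests per second at the end.
    var http = require("http");

    var TOTAL = 10000, CONCURRENCY = 100, done = 0;
    var started = Date.now();

    function fire() {
        var req = http.request({
            host: "localhost", port: 80, path: "/bid", method: "POST",
            headers: { "Content-Type": "text/plain", "Content-Length": 18 }
        }, function (res) {
            res.resume();                       // drain the response
            if (++done === TOTAL) {
                var secs = (Date.now() - started) / 1000;
                console.log((done / secs).toFixed(0) + " req/s");
            } else if (done + CONCURRENCY <= TOTAL) {
                fire();                         // keep the pipeline full
            }
        });
        req.end("item=42;bid=105.00");          // 18-byte payload
    }

    for (var i = 0; i < CONCURRENCY; i++) fire();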

owkaye wrote:

Each request will post only 20 bytes of data so the volume
being posted is low. [...] Can nginx maintain a connection
with the browser so data can be passed back and forth faster?

http://axod.blogspot.com/

Thanks for any insights you can provide to help me decide
whether or not nginx might be appropriate for my needs.

Best,
Owkaye

My experience is that nginx will not be the limiting factor in
this case. On my desktop (a Pentium 4), nginx serves 3000-5000
req/s of static content (10K files). What might impose limits
is your application code and database utilization pattern.

luben

Can nginx – running on one server – deliver 1000
requests per second without “bogging down” and
pushing more and more requests into a queue?

I suspect that nginx would be able to handle this without
a problem, although I would suggest that the best way to
find out is to do some load testing.

I will definitely do this after I buy a server. I’m still
in the planning stage right now.

How are the responses built? If that involves database
lookups, for example, it’s more likely that your
bottleneck will be there.

The responses will require a comparison of a posted value to
another value stored in a specific location in memory.
It’s not a lookup or search in a database, but it requires a
separate app to do the comparison, so it’s not as simple as
immediately serving a cached static web page.
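
To make that concrete, the comparison app I have in mind
behind nginx would be roughly like this Node.js sketch (all
names and the payload format are made up; the real app isn’t
written yet):

    // Tiny upstream app: keeps the current high bid in memory
    // and compares each posted bid against it.
    var http = require("http");

    var highBid = 0;                        // the in-memory value

    http.createServer(function (req, res) {
        var body = "";
        req.on("data", function (chunk) { body += chunk; });
        req.on("end", function () {
            // expects a payload like "item=42;bid=105.00"
            var bid = parseFloat(body.split("bid=")[1]);
            var accepted = bid > highBid;   // the comparison, not a DB lookup
            if (accepted) highBid = bid;
            res.writeHead(200, { "Content-Type": "text/plain" });
            res.end((accepted ? "accepted" : "rejected") + ";high=" + highBid);
        });
    }).listen(8000);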

Incidentally, I don’t think XMLHttpRequest will
help you - from the server’s point of view, it’s just
another request, albeit one serving a smaller amount of
data.

Thanks for this info. I know it will result in less traffic,
but if you’re saying the server still does the same
per-request work either way, then XMLHttpRequest alone won’t
reduce the load I’m worried about.

Best,
Owkaye

owkaye wrote:

The responses will require a comparison of a posted value to
another value stored in a specific location in memory.
It’s not a lookup or search in a database, but it requires a
separate app to do the comparison, so it’s not as simple as
immediately serving a cached static web page.

How large are the responses? That’s something else to consider, if
you’re returning large amounts of data, as the bottleneck might then
move to the network.

Even though you haven’t got your server yet, I recommend installing
nginx on your development machine and doing a bit of load
testing. I think you’ll be pleasantly surprised. What amazes me is how
tiny its demands on the CPU and memory are, even under heavy load.

Hi,

Here’s a benchmark I did some time ago on my PIII 500:
http://www.ruby-forum.com/topic/145783