Nginx feature request

An Nginx server in front could take care of that. I’d rather see something like
Libevent (i.e. an asynchronous HTTP server engine) with embedded Python.
The core of Nginx is probably similar to Libevent with its HTTP layer;
it would be interesting to find out how the Nginx core compares to
Libevent’s HTTP layer in terms of performance.

Being somewhat of a newbie, just throwing this out.
It would indeed be interesting to see if nginx with a libev core would
be faster than the current one (ex:
http://www.zenebo.com/word/asynchronous-programming/lighttz-a-simple-and-fast-web-server/)

Though probably wouldn’t be tons faster.
Thoughts?
-=r

On Tue, Apr 21, 2009 at 04:43:05PM +0200, Roger P. wrote:

Though probably wouldn’t be tons faster.
Thoughts?

I do not think that using libev in nginx will change anything.

The following settings may slightly improve (or worsen) performance:

  1. turning sendfile off: it may not be effective on small files.
    Also note that lighttz does not read the file at all on each request.

  2. using open file cache, it saves 3 syscalls per request (open, fstat,
    close):

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

  3. using a buffered log:

access_log /path/to/access.log buffer=32k;

or turning it off altogether:

access_log off;

  4. using 1 worker.
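Taken together, the suggestions above amount to something like the
following sketch (the log path and cache sizes are illustrative, taken
from the snippets in this thread, not tuned recommendations):

```nginx
# 1 worker, per suggestion 4: enough for a single-listener microbenchmark
worker_processes  1;

http {
    # suggestion 1: for very small files, sendfile's setup cost
    # may outweigh its benefit
    sendfile  off;

    # suggestion 2: cache open file descriptors; saves the
    # open/fstat/close syscalls on each request
    open_file_cache           max=1000 inactive=20s;
    open_file_cache_valid     30s;
    open_file_cache_min_uses  2;
    open_file_cache_errors    on;

    server {
        listen  80;

        # suggestion 3: buffer the access log...
        access_log  /path/to/access.log  buffer=32k;
        # ...or turn it off entirely:
        # access_log  off;
    }
}
```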

Igor,

I do not see the open_file_cache directive documented on the wiki. Is
it different from “open_log_file_cache”, which is documented? If so,
what is the difference…

–J


On Wed, Apr 22, 2009 at 09:56:37AM +0800, Delta Y. wrote:


  1. using 1 worker.

1 worker for each core?

No, just 1 worker. In the test there was already 1 worker per core (dual
core).
In a microbenchmark (not real life) this may improve or worsen results.
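For reference, the two configurations being contrasted (the core count
is from the dual-core test machine mentioned above):

```nginx
# typical production setting: one worker per CPU core
worker_processes  2;

# the suggestion for this microbenchmark: a single worker, which may
# behave differently when all load hits one listener
# worker_processes  1;
```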


On Wed, Apr 22, 2009 at 12:58:30AM +0200, Joe Bofh wrote:

Igor,

I do not see the open_file_cache directive documented on the wiki. Is
this different from “open_log_file_cache” which is documented? If not,
what is the difference…

It’s similar to open_log_file_cache, but has one additional directive:

open_file_cache_errors [on|off]; # default is off

which allows caching of open-file errors: not found, etc.

On Wed, Apr 22, 2009 at 09:33:32AM +0200, Joe Bofh wrote:

So, can I use both or does one override the other and I should just use
“open_file_cache” instead.

You can use both.
open_log_file_cache applies only to access log files whose paths are set
using variables; open_file_cache is for all other file operations.
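A sketch of the distinction (the log path and cache parameters are
illustrative): open_log_file_cache only matters when the access_log
path contains variables, which otherwise forces nginx to reopen a log
file for each request:

```nginx
http {
    # served files: descriptors cached by open_file_cache
    open_file_cache  max=1000 inactive=20s;

    server {
        listen  80;

        # variable in the log path: a log file must be opened per
        # request unless these descriptors are cached as well
        access_log  /var/log/nginx/$host.access.log;
        open_log_file_cache  max=100 inactive=20s valid=1m min_uses=2;
    }
}
```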
