"Hacking" the event model of Nginx

Hi All!

I’m new on this nice discussion list but I’m a long time lurker.

I’m working on a very specific module for Nginx: a complete CMS. It may
sounds strange since upstream servers and scripting languages are the
norm for the CMS, but I want (and need) speed.

So far I’ve done many things without too much trouble, but I’m a little
bit stuck with the processing of events in Nginx. I would like to
process a specific event which is not connection related but created by
one of my worker threads.

I hope this short example will be clear:

// My specific Nginx http handler
int my_http_handler (ngx_request_t * r)
{
if (r->my_state == 0) // first step: initiate the work to do
{
r->my_state ++ ;
my_sendmsg (myqueue,r) ; // send a message to worker

return NGX_AGAIN ; // please call me back when done

}
else // second step: results are ready
{
// produce xhtml output from results
return NGX_OK ; // finished
}
}

// The worker running on a specific thread
void my_http_worker (void * arg)
{
ngx_http_request_t * r ;
ngx_event_t * ev ;

while (1)
{
my_recvmsg (myqueue,r) ;
// processing the request
// …
// wake up my_http_handler
ngx_post_event (ev, (ngx_event_t * *) & ngx_posted_events) ;
}
}

But I don’t know how to fill the ngx_event_t (in particular the
handlers) in order to call again my_http_handler on Nginx’s context.
I believe it’s possible to do so from what I’ve seen, but Nginx’s code
is not so easy to enter on (it’s not a critic).

Sorry for this (first) long message but I think it would be nice to be
able to develop clean and non blocking modules for Nginx.

BTW forgive my English, I’m French :wink:

Manlio P. <manlio_perillo@…> writes:

If you need “speed”, try my WSGI module.
http://hg.mperillo.ath.cx/nginx/mod_wsgi/

First, thanks a lot for your reply. Well, I believe Python is a great
language
but I will keep C (I was considering using assembly language!) because I
will be
able to have zero copy between my database buffers and my xml to xhtml
translation, I already know where the bottleneck is and that’s why I
made this
choice.

Give at look at:
http://hg.mperillo.ath.cx/nginx/mod_wsgi/file/tip/src/ngx_wsgi.c
row 850.

Well, I’m not too sure it’s relevant to my case but I will investigate
in depth;
thanks to share your knowledge.

Is my_http_worker in a separate thread?
Then this will not work, Nginx is not thread safe.

Yes there are separate threads launched for each Nginx worker process,
it’s not
an issue as a request is linked to a worker process, just a matter of
mutex;
nothing to scare me! The mix using non preemptive model and worker
thread is
very interesting but not so easy to do.

Thanks for your kind reply,

Best regards

François Battail ha scritto:

Hi All!

I’m new on this nice discussion list but I’m a long time lurker.

I’m working on a very specific module for Nginx: a complete CMS. It may
sounds strange since upstream servers and scripting languages are the
norm for the CMS, but I want (and need) speed.

If you need “speed”, try my WSGI module.
http://hg.mperillo.ath.cx/nginx/mod_wsgi/

It enables you to write Python applications embedded in Nginx.

Of course there are some overheads, but usually the problems are
elsewhere.

http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all

Python is not really that bad, unless, of course, you start to use a lot
of for cycles or recursion (but in this case you can always write the
function in C).

So far I’ve done many things without too much trouble, but I’m a little
bit stuck with the processing of events in Nginx. I would like to
process a specific event which is not connection related but created by
one of my worker threads.

Give at look at:
http://hg.mperillo.ath.cx/nginx/mod_wsgi/file/tip/src/ngx_wsgi.c
row 850.

You need to obtain a valid file descriptor s, then call:
c = ngx_get_connection(s, log);

The event for read and write notifications are:
c->read and c->write

Note however that in my code, the file descriptor is assumed as already
“connected” to the peer.

return NGX_AGAIN ; // please call me back when done

{
}
}

But I don’t know how to fill the ngx_event_t (in particular the
handlers) in order to call again my_http_handler on Nginx’s context.
I believe it’s possible to do so from what I’ve seen, but Nginx’s code
is not so easy to enter on (it’s not a critic).

Is my_http_worker in a separate thread?
Then this will not work, Nginx is not thread safe.

Sorry for this (first) long message but I think it would be nice to be
able to develop clean and non blocking modules for Nginx.

BTW forgive my English, I’m French :wink:

Regards Manlio P.

François Battail ha scritto:

Ok.

Give at look at:
http://hg.mperillo.ath.cx/nginx/mod_wsgi/file/tip/src/ngx_wsgi.c
row 850.

Well, I’m not too sure it’s relevant to my case but I will investigate in depth;
thanks to share your knowledge.

What type of connection do you want to create?

Is my_http_worker in a separate thread?
Then this will not work, Nginx is not thread safe.

Yes there are separate threads launched for each Nginx worker process, it’s not
an issue as a request is linked to a worker process, just a matter of mutex;
nothing to scare me!

The problem is with:
ngx_post_event

as far as I know it is not thread safe, unless you enable threads
support in Nginx (but in current version it is broken).

The mix using non preemptive model and worker thread is
very interesting but not so easy to do.

What are you trying to do?

Thanks for your kind reply,

Best regards

Regards Manlio P.

Manlio P. <manlio_perillo@…> writes:

What type of connection do you want to create?

I don’t want to create a connection, just to split the processing of a
HTTP
request in two without blocking Nginx.

HTTP request
-> handler
send a message to one of my workers
returns NGX_AGAIN

-> worker (running on an independant thread)
wait for a message
do the job (using blocking calls)
“wake up” the HTTP handler with a Nginx event

-> handler
process the worker’s results
finalize the HTTP reply

Hope it’s more clear written that way.

as far as I know it is not thread safe, unless you enable threads
support in Nginx (but in current version it is broken).

Yes, you’re correct. But I don’t want to activate thread support in
Nginx,
I would like to create my specific threads inside a Nginx worker. I
just
need to protect the event queue of Nginx from my threads with a mutex as
far I’ve seen.

The mix using non preemptive model and worker thread is
very interesting but not so easy to do.

What are you trying to do?

Simply to call blocking functions during a HTTP request without blocking
Nginx and without using an upstream server. In my project the database
(Sqlite3)
is embedded into the web server and share commons memory pools,
that way I can process XML documents stored in the database without
allocating or copying data but only by using a list of fragment buffers
for the reply which fit nicely with ngx_chain_t. I hope it will be very
fast to process a dynamic request.

Your code is a great help for me, thank you.

Best regards

François Battail ha scritto:

send a message to one of my workers
returns NGX_AGAIN

-> worker (running on an independant thread)
wait for a message
do the job (using blocking calls)
“wake up” the HTTP handler with a Nginx event

Two questions:

  • how many threads do you have for each worker process?
  • how do you plan to implement the “wake up”?

matter of mutex; nothing to scare me!

No.
You need to protect the event queue not only from your threads, but also
from the main thread of the worker process, and here you don’t have any
control.

for the reply which fit nicely with ngx_chain_t. I hope it will be very
fast to process a dynamic request.

Your code is a great help for me, thank you.

You’re welcome.

Best regards

Regards Manlio P.

Manlio P. <manlio_perillo@…> writes:

Two questions:

  • how many threads do you have for each worker process?
  • how do you plan to implement the “wake up”?

I believe that 4 threads by worker should be sufficient but I need to do
some
benchmarks. For the second question, the wake up will be done by
injecting a
“fake” Nginx event into the queue, that’s my main problem and that’s why
I’ve
used the word “hacking” in the title of this subject.

No.
You need to protect the event queue not only from your threads, but also
from the main thread of the worker process, and here you don’t have any
control.

I will modify Nginx to ensure it will use the same mutex as my threads
when
accessing the event queue. In fact it’s already in Nginx’s code
(ngx_posted_event_mutex) but only active when compiling with the
NGX_THREADS
directive.

My project is a very specific module, so I will do any modification
required in
Nginx’s code.

Best regards

Le lundi 14 avril 2008 à 16:27 +0200, Manlio P. a écrit :

Note that this it not sufficient to “wake up” the main thread.
In fact, the main event loop can be waiting on the select/poll/epoll/kqueue.

Yes, you’re right. I’ve rewritten the epoll module to have the
epoll_wait calls on a separate thread and to have the main event loop
waiting on a queue which receive the epoll events and the events from my
threads.

But as the initial request can be cancelled while still being processed
by my threads or still in the queue, some care is needed before
destroying the request structure.

Best regards

François Battail ha scritto:

Manlio P. <manlio_perillo@…> writes:

Two questions:

  • how many threads do you have for each worker process?
  • how do you plan to implement the “wake up”?

I believe that 4 threads by worker should be sufficient but I need to do some
benchmarks. For the second question, the wake up will be done by injecting a
“fake” Nginx event into the queue,

Note that this it not sufficient to “wake up” the main thread.
In fact, the main event loop can be waiting on the
select/poll/epoll/kqueue.

A solution is to use a pipe: the read end is registered in the event
loop, and the write end is used by threads to wake the main event loop.

Manlio P.

On Die 15.04.2008 07:16, François Battail wrote:

Le lundi 14 avril 2008 à 16:27 +0200, Manlio P. a écrit :

Note that this it not sufficient to “wake up” the main thread.
In fact, the main event loop can be waiting on the select/poll/epoll/kqueue.

Yes, you’re right. I’ve rewritten the epoll module to have the
epoll_wait calls on a separate thread and to have the main event loop
waiting on a queue which receive the epoll events and the events from
my threads.

Have you taken a look into the libev
http://software.schmorp.de/pkg/libev.html or libevent
libevent before you have implement your own
epoll handlers :wink:

cheers

Aleks

François Battail wrote in post #660685:

Aleksandar L. <al-nginx@…> writes:

Have you taken a look into the libev
http://software.schmorp.de/pkg/libev.html or libevent
libevent before you have implement your own
epoll handlers

Thank you for these hints, but I’ve already done this layer! I’ve
bookmarked
libev, maybe for an another project.

Best regards

Is the implement of yours open?
I want to see how do you rewrite the epoll module,thanks

Aleksandar L. <al-nginx@…> writes:

Have you taken a look into the libev
libev or libevent
libevent before you have implement your own
epoll handlers

Thank you for these hints, but I’ve already done this layer! I’ve
bookmarked
libev, maybe for an another project.

Best regards

// My specific Nginx http handler
int my_http_handler (ngx_request_t * r)
{
if (r->my_state == 0) // first step: initiate the work to do

I’m trying to do something similar, however r->my_state is unknown here.
Did you create your own struct?