Kiril A. wrote:
Manlio,
I would be all over a module for nginx with the same functionality as
mod_magnet on Lighttpd, and I can already see you are very much into
Nginx, so I am sure it will be a no-brainer for you
Yes, but the problem is time 
I’m already working (among other projects and work) on mod_wsgi and
mod_scgi.
In the future I plan to “revive” the mod_pg project, and to write a
digest_auth module (if it is not already on Igor’s schedule).
[…]
Manlio P.
Igor S. wrote:
It only calls exit if no exception handler is installed.
Well, but what can I do in an exception handler?
Destroy a whole interpreter, leaving various leaks?
With Lua you can supply your own allocator function.
It does not resolve the problem. The interpreter internally
The Lua interpreter?
I’m not sure I understand, but since memory allocation failure is signaled
via an exception, the code flow is quite different.
From the Lua 5.1.5 source code (ldo.c):

    struct lua_longjmp lj;
    lj.status = 0;
    lj.previous = L->errorJmp;  /* chain new error handler */
    L->errorJmp = &lj;
    LUAI_TRY(L, &lj,
      (*f)(L, ud);
    );
    L->errorJmp = lj.previous;  /* restore old error handler */
    if (lj.status != 0) {
      /* ... if memory error, destroy the interpreter */
    }

and:

    #define LUAI_TRY(L,c,a)  if (setjmp((c)->b) == 0) { a }
Manlio P.
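For reference, here is a minimal sketch (not taken from the thread) of what
"an exception handler is installed" looks like from the embedding side: a
chunk compiled with luaL_loadstring and run through lua_pcall executes under
the errorJmp shown above, so an out-of-memory error comes back as the
LUA_ERRMEM status instead of reaching the exit() branch.

    #include <lua.h>
    #include <lauxlib.h>
    #include <stdio.h>

    /* Sketch: run a chunk in protected mode so that a memory allocation
     * failure is reported as LUA_ERRMEM instead of terminating the process. */
    static int run_protected(lua_State *L, const char *code)
    {
        int rc;

        rc = luaL_loadstring(L, code);       /* compile; may itself fail */
        if (rc == 0) {
            rc = lua_pcall(L, 0, 0, 0);      /* execute under an error handler */
        }

        if (rc == LUA_ERRMEM) {
            /* the host (e.g. an nginx module) decides what to do here:
             * log, finalize the request, or destroy the whole state */
            fprintf(stderr, "lua: out of memory\n");
        } else if (rc != 0) {
            fprintf(stderr, "lua: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);
        }

        return rc;
    }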
On Tue, Apr 22, 2008 at 03:18:48PM +0200, Manlio P. wrote:
Lua uses _longjmp/_setjmp, with integers representing error codes.
It only calls exit if no exception handler is installed.
Well, but what can I do in an exception handler?
Destroy a whole interpreter, leaving various leaks?
With Lua you can supply your own allocator function.
It does not resolve the problem. The interpreter internally
The Lua interpreter?
Yes.
lj.previous = L->errorJmp; /* chain new error handler */
and:
#define LUAI_TRY(L,c,a)  if (setjmp((c)->b) == 0) { a }
I understand the exception mechanics; this way it is easy to program (you do
not need to worry about all these tests), but I’m not sure that it is a safe
way.
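For reference, the allocator hook mentioned above is the lua_Alloc function
passed to lua_newstate. A minimal sketch follows (plain malloc/realloc here;
an nginx module could route these calls into a pool instead, and returning
NULL is what triggers the memory-error path discussed in this thread):

    #include <lua.h>
    #include <stdlib.h>

    /* Sketch of a custom allocator (Lua 5.1 lua_Alloc signature). */
    static void *l_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        (void) ud;      /* could be an ngx_pool_t * or similar */
        (void) osize;

        if (nsize == 0) {               /* Lua asks for the block to be freed */
            free(ptr);
            return NULL;
        }

        /* returning NULL on failure makes Lua raise a memory error */
        return realloc(ptr, nsize);
    }

    /* usage: lua_State *L = lua_newstate(l_alloc, NULL); */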
Marcin K. wrote:
Regarding nginx support: currently I tend to believe that using
nginx as reverse proxy (and static file server) may be the best
runtime configuration…
The problem with reverse proxy is that you need another server,
FastCGI/SCGI/etc. are also another server, just less explicit.
Of course 
and this server is usually written in Python, and most of the time
it uses threads for concurrency,
If you are hardcore, you can use Twisted. But IMO it is overkill for the
web.
By the way, I’m a Twisted programmer :).
This is how I came to know and love asynchronous programming!
And I have also used Twisted Web (and Nevow) for a rather big web
application.
However, I think that Nginx + mod_wsgi + a templating system like Mako
produces applications that are more maintainable and modular.
Also, threads are not that bad in such a context. Note that here you have
non-trivial code, which performs some database operations, dynamically
formats pages using templates, etc. The cost of some context switches
is far less noticeable than in the case of static file serving.
The problem is not with context switches, but with the Python Global
Interpreter Lock.
- Use Nginx as main server + reverse proxy
- Run your application embedded in Apache or
Nginx (but the application should be written with care)
I do not see any advantages of Apache here (unless you want to
use the existing permissions, authorization, etc. infrastructure already
defined for Apache).
No, Apache has advantages.
You have a very robust server for your application, and its WSGI module
allows much more control than any of the “custom” servers written in
Python.
Manlio P.
Of course!
I will be happy if you put it on your todo list and get to it whenever you
can. I will help with testing when the time comes.
Regards,
Kiril
On Tue, Apr 22, 2008 at 9:50 AM, Manlio P.
On Tue, 2008-04-22 at 13:07 +0200, Manlio P. wrote:
The problem with a reverse proxy is that you need another server, and this
server is usually written in Python, and most of the time it uses
threads for concurrency, so it’s not the best solution (unless, of
course, you run a whole “cluster” of servers, as Ruby programmers like to
do)
FAPWS is a non-threaded, libevent-based WSGI server:
http://william-os4y.livejournal.com/
I’ve tested it and it is quite fast (~5000 req/s is achievable on decent
hardware).
Regards,
Cliff
Cliff W. wrote:
http://william-os4y.livejournal.com/
I’ve tested it and it is quite fast (~5000 req/s is achievable on decent
hardware).
What type of application have you tested?
One limitation of FAPWS (like Twisted) is that it is single process.
Regards Manlio P.
Hi Folks
Currently I don’t see a problem with using one programming language or
another; nginx can work very well with any PL.
I’m using both Ruby and Perl with optimal results in both Apache and
nginx; I prefer nginx for hosting RoR apps (like Mephisto, Typo, Radiant, or
my own). Perl also works with FastCGI and works fine; maybe it requires a
bit of work, but it’s functional.
I think that you can use any language, as long as you are satisfied with
it.
Best Regards.
On Tue, 2008-04-22 at 22:26 +0200, Manlio P. wrote:
http://william-os4y.livejournal.com/
I’ve tested it and it is quite fast (~5000 req/s is achievable on decent
hardware).
What type of application have you tested?
I’ve only done testing with minimal “hello, world” type apps. The
author claims to have run a simple Django app (wiki) under it with a
significant performance increase over other methods.
One limitation of FAPWS (like Twisted) is that it is single process.
The author is working on this aspect. Personally I’d simply
load-balance several instances behind Nginx, but he wants to do it
within FAPWS.
Regards,
Cliff
Igor S. wrote:
I understand the exception mechanics; this way it is easy to program (you do
not need to worry about all these tests), but I’m not sure that it is a safe
way.
Ah, ok.
But do you have doubts only about handling memory allocation failures
via exceptions, or about exception handling in general, or about exception
handling as implemented in Lua?
Manlio P.
IMHO, a better solution is:
- Use Nginx as main server + reverse proxy
- Run your application embedded in Apache or
Nginx (but the application should be written with care)
This methodology works really well for us, especially when you throw in
memcache for data that is only semi-volatile.
On Mon 21.04.2008 22:57, Kiril A. wrote:
Ah, http://www.keplerproject.org/
Thank you, I will take a look.
On Tue 22.04.2008 16:31, Manlio P. wrote:
Marcin K. wrote:
Regarding nginx support: currently I tend to believe that using
nginx as reverse proxy (and static file server) may be the best
runtime configuration…
The problem with reverse proxy is that you need another server,
FastCGI/SCGI/etc. are also another server, just less explicit.
Of course 
Full ack 
and this server is usually written in Python, and most of the time
it uses threads for concurrency,
If you are hardcore, you can use Twisted. But IMO it is overkill for the
web.
By the way I’m a Twisted programmer :).
Cool 
cheers
Aleks
Hi all,
On Sun 20.04.2008 23:35, Aleksandar L. wrote:
Hi,
Since there are a lot of people here who care about fast and
light environments, I will just ask:
What do YOU think is the ‘best (smallest/fastest/easiest)’ language to
develop a dynamic website?
Thank you for all your input ;-).
As soon as I have made a decision I will communicate it.
Cheers
Aleks
I too would be really into an embedded Lua module. Lua and nginx seem like
they’re a very good match; we just need to make the introduction.
I’ve only written a couple of nginx modules (both pretty simple and specific
to my weird needs), so I don’t think I’m quite up to this task yet. But I’d
love to help out if someone with more expertise would be willing to point me
in the right direction.
j.
On Tue, Apr 22, 2008 at 6:50 AM, Manlio P.
[email protected]
Hi Igor,
On Tue 22.04.2008 10:14, Igor S. wrote:
It seems that Neko, as well as Lua, perl, etc., do the same in the memory
allocation failure case: exit() or nothing, i.e., segfault.
The developer of Neko (Nicolas Cannasse) has agreed that he can add a
memory management hook at init time, so that it is possible to use the
memory manager of nginx.
Request:
http://lists.motion-twin.com/pipermail/neko/2008-April/002194.htm
Response:
http://lists.motion-twin.com/pipermail/neko/2008-April/002194.html
What do you think: is this enough for the integration, or do you need
something more from Neko?
Cheers
Aleks 
Igor S. wrote:
I understand the problem; however, I think that Lua is still usable.
I’m reading the source code of the Lua io library, and any opened file is
closed when it is reached by the gc.
This means that when Nginx detects an error, it can just force a full gc
cycle.
If this still does not sound safe, Nginx can just create a Lua state
(interpreter) for each request, finalizing it when the request is
finalized.
This is both feasible and efficient (but it would be better if one of the
Lua language developers can confirm it).
Regards Manlio P.
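A minimal sketch (not from the thread) of the two options just described,
using the Lua 5.1 C API; error handling on the nginx side is omitted:

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>

    /* Option 1: on error, force a full collection cycle so that io handles
     * reached by the gc are closed. */
    static void force_full_gc(lua_State *L)
    {
        lua_gc(L, LUA_GCCOLLECT, 0);
    }

    /* Option 2: one short-lived state per request; lua_close() releases
     * everything the script allocated, including open files. */
    static int handle_request(const char *script)
    {
        lua_State *L;
        int        rc;

        L = luaL_newstate();
        if (L == NULL) {
            return -1;
        }

        luaL_openlibs(L);
        rc = luaL_dofile(L, script);    /* load + run in protected mode */
        lua_close(L);                   /* finalize the state with the request */

        return rc;
    }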
On Sat, Apr 26, 2008 at 07:56:45AM +0200, Aleksandar L. wrote:
http://lists.motion-twin.com/pipermail/neko/2008-April/002194.htm
Response:
http://lists.motion-twin.com/pipermail/neko/2008-April/002194.html
What do you think: is this enough for the integration, or do you need
something more from Neko?
The problem is not in hooks, etc.
The problem is that the interpreter MUST TEST EVERY operation result that
may fail on memory allocation. And it MUST return an error to all higher
levels, closing and freeing all allocated resources on the way back.
The existing interpreters either do not test results in most cases (perl),
or simply exit(), or in the best case they throw an exception. Exceptions
are an easy way to program (you do not need to test most operations) and a
cheap way to test results (for the same reason), but they may lead to
socket/file descriptor/etc. leaks.
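To illustrate the distinction being described, here is a small sketch (not
from the thread) of the "test every result" style: each failure is reported
to the caller and the resources acquired so far are released on the way
back. An exception raised from deep inside would longjmp past the cleanup,
which is how a file descriptor can leak.

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of manual error propagation with cleanup on the way back. */
    static int load_file(const char *path, char **buf_out)
    {
        FILE *fp;
        char *buf;

        fp = fopen(path, "r");
        if (fp == NULL) {
            return -1;                  /* report the failure to the caller */
        }

        buf = malloc(4096);
        if (buf == NULL) {
            fclose(fp);                 /* close the descriptor before returning */
            return -1;
        }

        /* ... read into buf, testing every result ... */

        fclose(fp);
        *buf_out = buf;
        return 0;
    }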
On Sat, Apr 26, 2008 at 11:59:38AM +0200, Manlio P. wrote:
If this still does not sound safe, Nginx can just create a Lua state
(interpreter) for each request, finalizing it when the request is finalized.
This is both feasible and efficient (but it would be better if one of the
Lua language developers can confirm it).
Can this per-request interpreter run precompiled code, or will it compile
it on its creation?
Igor S. wrote:
If this still does not sound safe, Nginx can just create a Lua state
(interpreter) for each request, finalizing it when the request is finalized.
This is both feasible and efficient (but it would be better if one of the
Lua language developers can confirm it).
Can this per-request interpreter run precompiled code, or will it compile
it on its creation?
You can precompile all the Lua code at Nginx configuration time:
http://www.lua.org/manual/5.1/manual.html#lua_load
The lua_load function reads and parses a Lua code chunk and returns (on the
Lua stack) the compiled chunk, or an error code.
So, if I’m not wrong, you need to:
- Create a Lua state at the beginning of the configuration phase
- Use this Lua state for precompiling all the Lua code in Nginx
- Finalize this Lua state at the end of the configuration phase
- For each request, create a new Lua state
- Use this Lua state for executing the precompiled script code
- Finalize the Lua state at the end of the request
But, again, I have never used Lua; I’m just reading the reference manual
and the source code, so it would be better if a Lua developer can confirm this.
Regards Manlio P.
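A rough sketch (not from the thread) of the scheme described above. One
detail the message leaves open: a chunk compiled in one lua_State cannot be
used directly from another state, so this sketch dumps the compiled chunk to
a byte buffer with lua_dump at configuration time and reloads it in each
per-request state with luaL_loadbuffer.

    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char   *data;
        size_t  len;
    } chunk_buf;

    /* lua_Writer callback: append each piece of the dumped bytecode. */
    static int writer(lua_State *L, const void *p, size_t sz, void *ud)
    {
        chunk_buf *buf = ud;
        char      *np;

        (void) L;

        np = realloc(buf->data, buf->len + sz);
        if (np == NULL) {
            return 1;                   /* abort the dump */
        }

        memcpy(np + buf->len, p, sz);
        buf->data = np;
        buf->len += sz;
        return 0;
    }

    /* configuration phase: compile once, keep only the bytecode */
    static int compile_script(const char *path, chunk_buf *buf)
    {
        lua_State *L;
        int        rc;

        L = luaL_newstate();
        rc = luaL_loadfile(L, path);    /* compile, do not run */
        if (rc == 0) {
            rc = lua_dump(L, writer, buf);
        }
        lua_close(L);
        return rc;
    }

    /* request phase: fresh state, load the precompiled chunk, run it */
    static int run_request(const chunk_buf *buf)
    {
        lua_State *L;
        int        rc;

        L = luaL_newstate();
        luaL_openlibs(L);
        rc = luaL_loadbuffer(L, buf->data, buf->len, "precompiled");
        if (rc == 0) {
            rc = lua_pcall(L, 0, 0, 0);
        }
        lua_close(L);
        return rc;
    }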