Accesslog into SQL

Hello,

I would like to know if a module is available with which I can write my
access logs directly into MySQL, PostgreSQL or any other RDBMS. I’m using
0.6.34 currently, but would move to the 0.7.x branch if such a module is
available only there.

Thanks in advance,
-w-

Why would you slow down nginx 1000 times?

:wink:

Because some PHP guy thinks

a.) it’s a good idea to rip data off the access log and show it on some
site in real time,

b.) it’s really a better idea than letting him read through the access
log file itself (2-3 GB), though I’m just pushing the dead horse to the
other street (to the RDBMS),

c.) handling the output of some script from a cron job won’t be flexible
enough for him,

d.) it’s fun to see me go postal on such requests.

:slight_smile:

Regards,
-w-

I agree. If you really need to store the data in MySQL or another
database, I would think you are better off writing a PHP script that
inserts it from the text file. You can run it as a cron job every so
often.
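
A minimal sketch of that cron-job approach (in Python rather than PHP, purely for illustration; the MySQLdb library, the file paths, the table name and the “combined” log format are all assumptions on my part, not anything stated in the thread):

```python
#!/usr/bin/env python
"""Cron-driven loader sketch: parse new access-log lines, batch-insert into MySQL.

Assumes nginx's default "combined" log format and a table such as:
  CREATE TABLE access_log (remote_addr VARCHAR(45), time_local VARCHAR(32),
                           request TEXT, status INT, bytes_sent INT,
                           referer TEXT, user_agent TEXT);
"""
import re
import MySQLdb

LOG_FILE = "/var/log/nginx/access.log"      # hypothetical path
OFFSET_FILE = "/var/tmp/access_log.offset"  # remembers how far the last run got

# "combined" format: ip - user [time] "request" status bytes "referer" "agent"
LINE_RE = re.compile(
    r'(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+|-) "([^"]*)" "([^"]*)"'
)

def pending_lines():
    """Yield lines added since the previous run, tracking a byte offset."""
    try:
        offset = int(open(OFFSET_FILE).read())
    except (IOError, ValueError):
        offset = 0
    with open(LOG_FILE) as fh:
        fh.seek(offset)
        for line in fh:
            yield line
        open(OFFSET_FILE, "w").write(str(fh.tell()))

def main():
    rows = []
    for line in pending_lines():
        m = LINE_RE.match(line)
        if m:
            ip, ts, req, status, size, ref, ua = m.groups()
            rows.append((ip, ts, req, int(status),
                         0 if size == "-" else int(size), ref, ua))
    if not rows:
        return
    db = MySQLdb.connect(host="localhost", user="loguser", passwd="secret", db="logs")
    cur = db.cursor()
    cur.executemany(
        "INSERT INTO access_log (remote_addr, time_local, request, status,"
        " bytes_sent, referer, user_agent) VALUES (%s, %s, %s, %s, %s, %s, %s)",
        rows,
    )
    db.commit()
    db.close()

if __name__ == "__main__":
    main()
```

Log rotation would still need handling (for example, resetting the stored offset when the file shrinks), and a crontab entry running the script every few minutes completes the picture.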

Jim

From: [email protected] [mailto:[email protected]] On Behalf Of
István Szukács
Sent: Wednesday, February 04, 2009 11:50 AM
To: [email protected]
Subject: Re: accesslog into sql

why would you slow down the nginx 1000 times?

:wink:

On Wed, Feb 4, 2009 at 3:41 PM, Zoltan HERPAI [email protected] wrote:

Hello,

I would like to know if a module is available with which I can write my
accesslogs directly into mysql, postgresql or any other rdbms. I’m using
0.6.34 currently, but would go to 0.7.x branch if such module is
available
only there.

Thanks in advance,
-w-

On Feb 4, 2009, at 8:49, István Szukács wrote:

Why would you slow down nginx 1000 times?

Woah - nginx can do 20-30 million requests a second?!

  • ask

You’re doing 20-30 thousand requests per second with a MySQL-backed
access
log? :wink:

While I think István exaggerated a bit, I can’t imagine it would be a
highly performant solution. IIS does support logging to a SQL Server
(it’s the go-to web server if you’re looking to compare server X against
something that has every “useless” feature… :wink: ), but even Microsoft
says that the sites that would probably find the capability most useful
shouldn’t use it.

<From Microsoft KB article 245243>
Microsoft does not recommend IIS logging to a SQL Server table if the
IIS
computer is a busy server. Sending logging data to a SQL Server database
for
a busy Web site consumes system resources.

–Matt



I take that back: depending on the configuration and the SQL server type
you might be able to do 1,000 - 50,000 inserts/sec.

If there is no huge overhead in the module you write, you might get the
same performance…

Anyhow, the best thing would be for Zoltan to get the flat files into the
SQL database without modifying nginx, I believe :slight_smile:
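
For that flat-file route, here is a small sketch of one low-overhead option: bulk-load the file with LOAD DATA instead of issuing one INSERT per line. The tab-separated log format, the access_log table, the file path and the MySQLdb library are all assumptions for illustration, not something from the thread.

```python
# Sketch: bulk-load a tab-separated access log into MySQL with LOAD DATA.
# Assumes an nginx log_format whose fields are tab-separated, a matching
# access_log table, and the MySQLdb library built with LOCAL INFILE support.
import MySQLdb

db = MySQLdb.connect(host="localhost", user="loguser", passwd="secret",
                     db="logs", local_infile=1)
cur = db.cursor()
cur.execute(
    "LOAD DATA LOCAL INFILE '/var/log/nginx/access.tsv' "
    "INTO TABLE access_log "
    "FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n' "
    "(remote_addr, time_local, request, status, bytes_sent, referer, user_agent)"
)
db.commit()
db.close()
```

Bulk loading keeps the database work entirely out of nginx’s request path and is generally much faster than row-by-row INSERTs.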

regards,
Istvan

?

I have seen nginx serving files at 50K req/s on a single node, while an
SQL server might be able to do 200-300 queries/sec or so…

A connect to MySQL alone delays one request by approximately 0.069s.

On Wed, 04 Feb 2009 22:11:20 +0200, István Szukács [email protected] wrote:

You can log through syslog:
http://snippets.aktagon.com/snippets/247-Logging-nginx-to-remote-loghost-with-syslog-ng-
And since rsyslog (or syslog-ng) can store logs in a DB, you will have
your logs where you want them.

I suppose nginx and/or rsyslog will do buffering, so there will not be
much overhead.
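
A rough sketch of that pipeline using rsyslog (everything here is an assumption for illustration: the imfile and ommysql modules must be installed, and ommysql’s default template expects the SystemEvents table from the schema shipped with rsyslog-mysql; paths and credentials are made up):

```
# /etc/rsyslog.d/nginx-to-mysql.conf (sketch only, legacy rsyslog syntax)

# Follow the nginx access log and turn each new line into a syslog message.
$ModLoad imfile
$InputFileName /var/log/nginx/access.log
$InputFileTag nginx-access:
$InputFileStateFile stat-nginx-access
$InputFileFacility local7
$InputRunFileMonitor

# Ship everything tagged local7 into MySQL (default rsyslog-mysql schema).
$ModLoad ommysql
local7.* :ommysql:127.0.0.1,Syslog,rsyslog,password
```

With this setup nginx only ever writes to its normal log file; rsyslog does the tailing, queuing and database work.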

Note about MySQL: on an “old” P4 Xeon in production, connection time
from a PHP mysql_connect() is about 2 ms through a 10 Mbit/s network.
But the logger will stay connected to the DB…

Andrius Semionovas wrote:

I think one thing to worry about would be latency in the insert
statement execution. While it is true that average performance can be
pretty good, one might occasionally see an insert take a long time for
various reasons… so it would be important either to separate the
insertions from the main event loop (so they don’t block the worker
process), or use a database API that can be worked into an
event-driven program somehow (I am not aware of such an API for any
RDBMS). It seems like it would be simpler to have, for example, an
ngx_http_accesslog_mysql module that generates the insert statements
(perhaps batching them up using MySQL’s extended insert syntax) and
writes them to a file, where they can be loaded into the DB without
the potential to disrupt the performance of nginx.
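
As far as I know no such ngx_http_accesslog_mysql module exists, but the batching half of the idea can be sketched in ordinary userspace code: group parsed rows into MySQL extended (multi-row) INSERT statements and append them to a .sql file that gets replayed into the database later. All names, sizes and paths below are made up for illustration.

```python
# Sketch of the batching idea: turn parsed access-log rows into MySQL
# extended (multi-row) INSERT statements and append them to a .sql file
# for later, out-of-band loading (e.g. piping the file into the mysql CLI).
# Table name, batch size, paths and escaping are all illustrative.

def sql_quote(value):
    """Minimal escaper for the sketch; real code should rely on the client
    library's own escaping instead."""
    return "'" + str(value).replace("\\", "\\\\").replace("'", "\\'") + "'"

def extended_inserts(rows, table="access_log", batch_size=500):
    """Yield one multi-row INSERT statement per batch of rows."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        values = ",\n".join(
            "(" + ", ".join(sql_quote(col) for col in row) + ")" for row in batch
        )
        yield "INSERT INTO %s VALUES\n%s;\n" % (table, values)

def append_batches(rows, path="/var/spool/nginx-sql/access.sql"):
    """Append the generated statements to a spool file the DB loader picks up."""
    with open(path, "a") as out:
        for statement in extended_inserts(rows):
            out.write(statement)

if __name__ == "__main__":
    sample = [("127.0.0.1", "04/Feb/2009:12:00:00 +0100", "GET / HTTP/1.1", 200, 612)]
    append_batches(sample)
```

A multi-row INSERT amortizes parsing and round-trip overhead, which is what makes the extended syntax so much cheaper than one INSERT per request.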

-dave

SQLite might be a better solution. Also think about optimizing your log
format.
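
A tiny sketch of the SQLite variant, using Python’s bundled sqlite3 module (the schema, path and sample row are illustrative); the consumer could then query the .db file directly, with no database server in the picture:

```python
# Sketch: load parsed access-log rows into a local SQLite file instead of a
# server-based RDBMS.  sqlite3 ships with Python; the schema is illustrative.
import sqlite3

def store(rows, path="/var/lib/nginx/access_log.db"):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS access_log ("
        " remote_addr TEXT, time_local TEXT, request TEXT,"
        " status INTEGER, bytes_sent INTEGER)"
    )
    db.executemany("INSERT INTO access_log VALUES (?, ?, ?, ?, ?)", rows)
    db.commit()
    db.close()

store([("127.0.0.1", "04/Feb/2009:12:00:00 +0100", "GET / HTTP/1.1", 200, 612)])
```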

On Feb 4, 2009, at 12:23, István Szukács wrote:

I take that back: depending on the configuration and the SQL server type
you might be able to do 1,000 - 50,000 inserts/sec.

Right – or higher. 50k/second should be reasonably easy with ARCHIVE
tables. In other words: plenty plenty fast for the vast majority of
installations.
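
For reference, an ARCHIVE table is just a normal table created with a different storage engine; a hedged sketch (column names invented to match the log fields discussed above, connection details illustrative):

```python
# Sketch: create an insert-optimized ARCHIVE table for access-log rows.
# Schema and connection details are illustrative only.
import MySQLdb

db = MySQLdb.connect(host="localhost", user="loguser", passwd="secret", db="logs")
db.cursor().execute(
    "CREATE TABLE IF NOT EXISTS access_log_archive ("
    " remote_addr VARCHAR(45), time_local VARCHAR(32), request VARCHAR(2048),"
    " status SMALLINT, bytes_sent INT"
    ") ENGINE=ARCHIVE"
)
db.close()
```

The ARCHIVE engine compresses rows and supports only INSERT and SELECT, which is why it copes well with append-heavy workloads like access logs.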

  • ask