Forum: NGINX accesslog into sql

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Zoltan HERPAI (Guest)
on 2009-02-04 16:52
(Received via mailing list)
Hello,

I would like to know if a module is available with which I can write my
access logs directly into MySQL, PostgreSQL or any other RDBMS. I'm
currently using 0.6.34, but would move to the 0.7.x branch if such a
module is only available there.

Thanks in advance,
-w-
István Szukács (Guest)
on 2009-02-04 18:02
(Received via mailing list)
why would you slow down the nginx 1000 times?

;)
Zoltan HERPAI (Guest)
on 2009-02-04 18:23
(Received via mailing list)
Because some PHP guy thinks:

a.) it's a good idea to rip data out of the access log to show up on
some site in real-time

b.) it's really a better idea than letting him read through the access
log file itself (2-3 GB), though I'm just pushing the dead horse to the
other street (to the RDBMS)

c.) handling the output of some script from a cron job won't be flexible
enough for him

d.) it's fun to see me go postal on such requests.

:)

Regards,
-w-
Jim Ohlstein (Guest)
on 2009-02-04 18:32
(Received via mailing list)
I agree. If you really need to store the data in MySQL or another
database, I would think that you are better off writing a PHP script to
insert it from the text file. You can run it as a cron job every so
often.



Jim



Ask Bjørn Hansen (Guest)
on 2009-02-04 19:47
(Received via mailing list)
On Feb 4, 2009, at 8:49, István Szukács wrote:

> why would you slow down the nginx 1000 times?

Whoa - nginx can do 20-30 million requests a second?!


   - ask
Matt Lewandowsky (Guest)
on 2009-02-04 20:19
(Received via mailing list)
You're doing 20-30 thousand requests per second with a MySQL-backed
access log? ;)

While I think István exaggerated a bit, I can't imagine that it would be
a highly performant solution. IIS does support logging to a SQL server
(it's the go-to web server if you're looking to compare server X against
something that has every "useless" feature... ;) ), but even Microsoft
says that the sites that would probably find the capability most useful
shouldn't use it.

<From Microsoft KB article 245243>
Microsoft does not recommend IIS logging to a SQL Server table if the
IIS computer is a busy server. Sending logging data to a SQL Server
database for a busy Web site consumes system resources.

--Matt

István Szukács (Guest)
on 2009-02-04 21:22
(Received via mailing list)
?

I have seen nginx serving files at 50K req/s on a single node, while an
SQL server might only be able to do 200-300 queries/sec or so...
István Szukács (Guest)
on 2009-02-04 21:39
(Received via mailing list)
I take that back - depending on the configuration and the SQL server
type, you might be able to do 1,000-50,000 inserts/sec.

If there is no huge overhead in the module you write, you might get the
same performance...

Anyhow, I believe the best thing would be for Zoltan to get the flat
files into the SQL database without modifying nginx :)


regards,
Istvan
Andrius Semionovas (Guest)
on 2009-02-04 22:35
(Received via mailing list)
Connecting to MySQL makes one request sleep approximately 0.069s.

Olivier Bonvalet (Guest)
on 2009-02-04 23:10
(Received via mailing list)
You can log through syslog:
http://snippets.aktagon.com/snippets/247-Logging-n...
And since rsyslog (or syslog-ng) can store logs in a DB, you will have
your logs where you want them.

I suppose nginx and/or rsyslog will do buffering, so there will not be
much overhead.

A note about MySQL: on an "old" P4 Xeon in production, connection time
from a PHP mysql_connect() is about 2ms, through a 10Mbit/s network. But
the logger will stay connected to the DB anyway...
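A minimal sketch of that syslog route. Note the access_log syslog: syntax shown here comes from later nginx releases that ship built-in syslog support (in the 0.6/0.7 era a patch such as the one linked above was needed), and the rsyslog database names are placeholders:

```nginx
# nginx side: send the access log to the local syslog socket.
access_log syslog:server=unix:/dev/log,tag=nginx combined;
```

```text
# rsyslog side: route messages tagged "nginx" into MySQL via ommysql.
# By default ommysql inserts into the schema created by rsyslog's
# createDB.sql; server, db, uid and pwd below are placeholders.
module(load="ommysql")
if $programname == 'nginx' then {
    action(type="ommysql" server="localhost"
           db="Syslog" uid="rsyslog" pwd="secret")
}
```

This keeps the database out of nginx's event loop entirely: rsyslog buffers and retries, so a slow or unavailable DB never stalls a worker process.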

Dave Bailey (Guest)
on 2009-02-04 23:15
(Received via mailing list)
I think one thing to worry about is latency in the insert statement
execution. While it is true that average performance can be pretty good,
one might occasionally see an insert take a long time for various
reasons... so it would be important either to separate the insertions
from the main event loop (so they don't block the worker process), or to
use a database API that can be worked into an event-driven program
somehow (I am not aware of such an API for any RDBMS).

It seems like it would be simpler to have, for example, an
ngx_http_accesslog_mysql module that generates the insert statements
(perhaps batching them up using MySQL's extended insert syntax) and
writes them to a file, from which they can be loaded into the DB without
the potential to disrupt the performance of nginx.

-dave
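The batching half of this idea can be sketched as follows (table and column names are illustrative, and the quoting is deliberately minimal; a real importer should rely on the client library's escaping):

```python
# Sketch: collect parsed rows and emit one MySQL extended INSERT per
# batch to a file, which a cron job can later feed to the server, e.g.
#   mysql weblogs < batch.sql
def batch_inserts(rows, batch_size=1000):
    """Yield extended INSERT statements, batch_size rows per statement."""
    def quote(v):
        # Minimal escaping for the sketch only.
        return "'" + str(v).replace("\\", "\\\\").replace("'", "\\'") + "'"
    for i in range(0, len(rows), batch_size):
        values = ",".join(
            "(" + ",".join(quote(v) for v in row) + ")"
            for row in rows[i:i + batch_size]
        )
        yield "INSERT INTO access_log (addr, request, status) VALUES %s;" % values
```

One multi-row statement amortizes parsing and index maintenance over the whole batch, which is what makes the extended syntax so much faster than row-at-a-time inserts.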
Nginx Lova (ilovenginx)
on 2009-02-04 23:20
SQLite might be a better solution. Also think about optimizing your log
format.
Ask Bjørn Hansen (Guest)
on 2009-02-04 23:46
(Received via mailing list)
On Feb 4, 2009, at 12:23, István Szukács wrote:

> I take that back - depending on the configuration and the SQL server
> type, you might be able to do 1,000-50,000 inserts/sec.

Right -- or higher.  50k/second should be reasonably easy with ARCHIVE
tables.   In other words: plenty plenty fast for the vast majority of
installations.


  - ask
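A sketch of such an ARCHIVE table (the column set is illustrative; ARCHIVE compresses rows and is built for high-volume inserts and full scans, but supports no UPDATE/DELETE and only a single index, on an AUTO_INCREMENT column):

```sql
CREATE TABLE access_log (
    id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    addr    VARCHAR(45)  NOT NULL,  -- wide enough for IPv6
    time    DATETIME     NOT NULL,
    request VARCHAR(255) NOT NULL,
    status  SMALLINT     NOT NULL,
    bytes   INT UNSIGNED NOT NULL,
    PRIMARY KEY (id)
) ENGINE=ARCHIVE;
```

The append-only restrictions are exactly the access pattern of a web log, which is why the insert rate stays high.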