Hey There!
Is there any way to log access through syslog? It would be
useful in an environment with a dozen nginx proxies logging to a
central syslog server for statistics and analysis.
Regards,
Gabri Mate
On Wed, Dec 16, 2009 at 1:52 PM, Gabri Mate
[email protected] wrote:
Hello,
I know there is an older syslog patch floating around somewhere - no
idea whether it works with the current version or not.
Many log analyzers work fine with multiple files from multiple
sources, at least I know analog does. Failing that, you could write a
script to aggregate the logs into a single file.
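A rough sketch of such a script (hostnames and paths invented for
illustration):

for host in proxy1 proxy2 proxy3; do
    # pull each proxy's access log to a temporary copy
    scp "$host:/var/log/nginx/access.log" "/tmp/access.$host.log"
done
# concatenate everything into one file for the analyzer
cat /tmp/access.*.log > /var/log/nginx/combined.log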
Thanks,
Merlin
On Thursday, December 17, 2009, merlin corey [email protected]
wrote:
Many log analyzers work fine with multiple files from multiple
sources, at least I know analog does. Failing that, you could write a
script to aggregate the logs…
I think a more important use case for syslog is enabling
tamper-resistant logs on another system. Syslog over IPsec to an
unrelated system inspires a lot more confidence in security folks
than a local text file that can be modified after a breach.
–
RPM
On Thu, Dec 17, 2009 at 4:41 PM, Ryan M. [email protected]
wrote:
If you want to wear that security blanket, go ahead.
If you are worried about the integrity of your logfiles, you should
implement some kind of integrity checking on every important point.
This means that even if you do push things over your favorite secure
protocol to another system you’ll want to do some kind of integrity
checking there because someone could break in and tamper with the data
on the “secure” system.
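A minimal form of such a check, sketched with standard tools (paths
made up for illustration), is to record digests of rotated logs and
verify them later, ideally from a different machine:

# record a digest as each rotated log appears
sha256sum /var/log/nginx/access.log.1.gz >> /var/log/digests.sha256
# verify later that nothing was silently altered
sha256sum -c /var/log/digests.sha256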
Security folks know that everything breaks, so they plan for and
monitor breakages.
What’s the plan for when the syslog server goes down? No logs at all
then?
– Merlin
On Thu, Dec 17, 2009 at 9:33 PM, Ryan M. [email protected]
wrote:
Exploiting nginx or a web app gives you access to the system where the
logs are if they are on disk. It is not easy to get from there to a
completely separate syslog server that is hardened. Yes, you can send
fake data to the syslog server, but you cannot erase evidence of your
attack without breaking into it as well. WORM media can be used on the
log server. Defense in depth.
Nice sideways response. The main statement (for me) was that if you
care about integrity you should check it in multiple places. This was
followed by an intimation that no system is secure, even if you
hardened it, as long as it is plugged into a network which has any
chance of being accessible via the internet (and even often still when
not, as long as it is powered on). Just because you think it would be
hard for most people to hop from the exposed front-end webserver to
the hardened syslog server certainly doesn’t mean it is hard for
everyone. We both know it only takes that one person, that one time,
with that one attack that you/the world aren’t aware of, and it’s
owned. This holds true for nginx, ssh, syslog/rsyslog and any other
software that listens.
You have an nginx exploit? I don’t need to explain to you that I am
not asking about the vulnerabilities.
At any rate, if integrity of data is your concern, then implementing
integrity checking on multiple fronts - including within your hardened
server(s) - is certainly a good idea, and I stand by it. Do you care
to respond directly to this statement?
Security folks know that everything breaks, so they plan for and
monitor breakages.
Yes, and one of those checks is “how can I trust my log files to
provide evidence of attack so I can fix things, comply with
regulations, and help law enforcement catch the bastards”. Having your
only logs on the system with the largest attack surface, the web
server, is not a good idea.
No, it certainly is not a good idea to have your only logs on the web
server, but I never suggested any such silly thing anyway (nice one).
At least we clearly agree here ;).
What’s the plan for when the syslog server goes down? No logs at all then?
Standard practice is to send logs to multiple log servers, via unicast
or multicast. Or at least send them to local disk and syslog so you
can compare. PCI, HIPAA, SOX, and many other regulations have
requirements for log retention and authentication.
All the more reason for integrity checking then ;).
Are you being serious here, or just contrarian?
I’m extremely serious.
This conversation started because someone else wanted to use syslog
for log analysis, which I explained is unnecessary.
You are concerned about conforming to PCI, HIPAA, and SOX - that’s
great, your reasons for wanting to use syslog are based in industry
standard practices for meeting these needs.
That’s not what the other guy needed, and it’s apparently not what
most people need, because we don’t have a large group of users with
money clamoring to have Igor add in syslog officially.
As a final point, I don’t mean to put it as if you were selling the
security blanket, because I would like to point out to you (and
everyone else) that I did note and appreciate your use of the term
“tamper resistant” logs, rather than “tamper proof” ;)… I just made
an offhand comment and look at us now XD
– Merlin
On Friday, December 18, 2009, merlin corey [email protected]
wrote:
At any rate, if integrity of data is your concern, then implementing
integrity checking on multiple fronts - including within your hardened
server(s) - is certainly a good idea, and I stand by it. Do you care
to respond directly to this statement?
I believe I did, and you’re being pedantic. But I will answer again
in another way. I agree that integrity should be verified and inputs
validated at every possible layer, but that doesn’t help on a rooted
box. It is impossible to trust log files (or anything else) on a
system that is compromised. Logs can be overwritten even if you use
HMAC or signatures or whatever; the keys for such schemes are by
necessity on the same server. Maybe if WORM hardware were present…
This is why I and most other security folks prefer off-system logs.
And simply copying them elsewhere from the nginx box isn’t enough; you
need a write-once protocol like syslog. If the web server box has
write access to the off-server logs, you can’t trust them either.
No, it certainly is not a good idea to have your only logs on the web
server, but I never suggested any such silly thing anyway (nice one).
At least we clearly agree here ;).
But you can’t just copy them off-server with a script. The first thing
a successful attacker does is attempt to cover his tracks. So copying
logs even every 60 seconds leaves a big window. Secondly, almost all
protocols besides syslog would enable an attacker with root to
overwrite or at least truncate the logs on the destination via the
same mechanism. rsync, FTP, NFS, take your pick.
A “pull” script from another system might provide improved confidence
in the log integrity, but still suffers from the timing issue. Syslog
gets log data off-server in less than a millisecond, usually.
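For example, something like this run from the log host by cron (host
and paths invented for illustration) at least means the web server
never has write access to the collected copies:

# cron job on the log host; the web server cannot write here
rsync -a --ignore-existing webserver:/var/log/nginx/ /srv/logs/webserver/

The --ignore-existing flag keeps already-pulled files from being
rewritten from the source side, though the window between runs remains.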
Compromising a separate server that exposes only syslog as an
interface is a difficult hurdle to overcome, precisely because the
system is simple, has little functionality, and uses established
protocols and code.
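That interface can be narrowed even further. A sketch with iptables
on the log host (addresses are placeholders):

# accept syslog datagrams only from the proxies' subnet
iptables -A INPUT -p udp --dport 514 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 514 -j DROP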
You are concerned about conforming to PCI, HIPAA, and SOX - that’s
great, your reasons for wanting to use syslog are based in industry
standard practices for meeting these needs.
That’s not what the other guy needed, and it’s apparently not what
most people need, because we don’t have a large group of users with
money clamoring to have Igor add in syslog officially.
Frankly, my employer might be willing to sponsor such work. But the
mechanism for such sponsored development is unclear from my reading of
the nginx site (perhaps because I can’t read Russian). Is there a
bounty program in place? Can maintenance be purchased?
As a final point, I don’t mean to put it as if you were selling the
security blanket, because I would like to point out to you (and
everyone else) that I did note and appreciate your use of the term
“tamper resistant” logs, rather than “tamper proof” ;)… I just made
an offhand comment and look at us now XD
I guess I reacted to the term “security blanket”, which implies
ineffective security theater. Syslog is very effective at improving
the security of log files when implemented properly, which is why it
is a critical part of almost all high-security architectures.
–
RPM
On Thursday, December 17, 2009, merlin corey [email protected]
wrote:
If you want to wear that security blanket, go ahead.
If you are worried about the integrity of your logfiles, you should
implement some kind of integrity checking on every important point.
This means that even if you do push things over your favorite secure
protocol to another system you’ll want to do some kind of integrity
checking there because someone could break in and tamper with the data
on the “secure” system.
Exploiting nginx or a web app gives you access to the system where the
logs are if they are on disk. It is not easy to get from there to a
completely separate syslog server that is hardened. Yes, you can send
fake data to the syslog server, but you cannot erase evidence of your
attack without breaking into it as well. WORM media can be used on the
log server. Defense in depth.
Security folks know that everything breaks, so they plan for and
monitor breakages.
Yes, and one of those checks is “how can I trust my log files to
provide evidence of attack so I can fix things, comply with
regulations, and help law enforcement catch the bastards”. Having your
only logs on the system with the largest attack surface, the web
server, is not a good idea.
What’s the plan for when the syslog server goes down? No logs at all then?
Standard practice is to send logs to multiple log servers, via unicast
or multicast. Or at least send them to local disk and syslog so you
can compare. PCI, HIPAA, SOX, and many other regulations have
requirements for log retention and authentication.
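For the unicast case, a traditional syslog.conf on the web server
might look roughly like this sketch (hostnames invented; traditional
syslogd wants tabs between selector and action):

# keep a local copy and forward to two collectors over UDP 514
local5.*	/var/log/nginx-access.log
local5.*	@loghost1.example.com
local5.*	@loghost2.example.com

The local file gives you something to compare against the collectors.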
Are you being serious here, or just contrarian?
–
RPM
Gabri Mate wrote:
Hey There!
Is there any way to log access through syslog? It would be useful in
an environment with a dozen nginx proxies logging to a central syslog
server for statistics and analysis.
I doubt there is built-in support for logging to syslog directly. But
you can achieve something almost as good by simply tailing the logs
into the syslog logger command.
try this:
tail -f access.log | logger -p local5.err -t nginx
you can ensure this is always running by using something like
daemontools.
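If you go the daemontools route, the run script could be as simple as
this sketch (log path invented for illustration):

#!/bin/sh
# ./run for a daemontools service directory; supervise restarts the
# pipeline if it ever dies
tail -f /var/log/nginx/access.log | logger -p local5.err -t nginx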
Surely this will have worse performance than if it were implemented
natively within nginx.
On 20.12.2009, at 19:50, Vinay Y s [email protected] wrote:
try this:
tail -f access.log | logger -p local5.err -t nginx
you can ensure this is always running by using something like
daemontools.
Surely this will have worse performance than if it were implemented
natively within nginx.
The recent discussion in the Russian list suggests the opposite:
performance may even improve due to the packet nature of the network,
with fewer packets on a heavily loaded system.
Do not be afraid to use such a configuration.
Best regards,
Peter.
I would like an official syslog patch too, or at least an option, say
error_log syslog:info crit;
or something like that.
Kingsley
From: “Michael S.” [email protected]
Sent: Monday, December 21, 2009 9:26 AM
To: [email protected]
Subject: Re: logging through syslog
I could see the draw for people to aggregate all their access logs
through syslog remoting to a central logging server; that’d be kind of
a neat way of handling it. But I’d be worried that the syslog
infrastructure isn’t designed for hundreds of messages per second or
more (maybe it is, but that would worry me :))
On Sun, Dec 20, 2009 at 3:04 PM, Kingsley F. wrote:
I’d still be +1 for a syslog module/patch to be official
If there’s a way to fire off the message and forget about it so it
doesn’t block, then I don’t see a problem (I wouldn’t recommend it for
access logging necessarily, but for error logging and such it would be
nice?)
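At the wire level it already is fire-and-forget: a classic syslog
message is a single UDP datagram, so the sender never blocks waiting
for the collector. A hand-rolled illustration with a BSD-style nc
(hostname invented; <174> encodes facility local5, severity info,
since priority = facility * 8 + severity = 21 * 8 + 6):

# send one raw syslog datagram and move on
echo "<174>nginx: test message" | nc -u -w1 loghost.example.com 514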
Towards Ryan,
I think we’re in agreement and more or less understanding each other,
we’re just being pedantic about different ends of things. Perhaps it
will be helpful if I explicitly add that I was extrapolating from the
axiom “assume it’s compromised” - this is especially true of access
logs which receive most data from the user, and if you aren’t using
nginx you might even have a hard time determining if the user actually
loaded the content and not just requested it :-).
To answer your perhaps unasked question: the priority for getting
things into the mainline is really up to Igor himself, and not
necessarily based on anything but his feelings/wishes/time/etc. Still,
I am fairly certain that the mail modules, for example, were paid for
by a company that realized nginx’s asynchronous proxy engine would
work just as well for their mail architecture as it does for the HTTP
architecture. So it would at least seem there is a precedent for
donations leading to direct feature changes.
That said, even if Igor is not interested in the issue at the moment,
you can still try to find someone on the mailing list (or the new
devel list) who would be interested in creating the patch/module and
maintaining/improving it until it is ready for inclusion in the main
branch.
– Merlin
On Sunday, December 20, 2009, Vinay Y s [email protected] wrote:
try this:
tail -f access.log | logger -p local5.err -t nginx
you can ensure this is always running by using something like
daemontools.
Surely this will have worse performance than if it were implemented
natively within nginx.
Interesting… Wouldn’t that construct result in one massive syslog
entry containing many lines of the nginx log (possibly overflowing
syslog’s maximum message size and truncating)? The default interval
for tail -f seems to be 1s. That could be solved by an intermediate
process, I suppose.
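For what it’s worth, logger appears to emit one syslog message per
line it reads from stdin, so the pipeline may be fine as-is. If an
intermediate process were wanted anyway, a sketch (note it forks one
logger per line, which is costly at high rates):

tail -f access.log | while IFS= read -r line; do
    # one logger invocation, and thus one syslog message, per log line
    logger -p local5.err -t nginx "$line"
done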
Unfortunately, I see no mention of TLS support in the man pages for
logger, but I think you are on to something… good suggestion. The
efficiency of tail -f is the deciding factor, I think.
Coordinating with log rotations will also be somewhat tricky, but
probably do-able by watching the descriptor instead of the name and
re-starting the process at log rotation.
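On GNU systems, tail’s -F option (shorthand for --follow=name
--retry) may remove the need for a restart at all: it watches the
path rather than the descriptor and reopens the file after rotation,
so the pipeline survives rotation on its own:

# -F reopens access.log when logrotate swaps the file out
tail -F access.log | logger -p local5.err -t nginx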
I still think native syslog in nginx is more desirable, of course.
Complexity is the enemy of both stability and security.
–
RPM