Forum: Ruby on Rails: Performance slowdown with increasing log file size?

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Shark Fin S. (Guest)
on 2006-03-24 08:15
(Received via mailing list)
Dear all,

I have been noticing the following.

Whenever I delete my log files and restart lighttpd, my Rails site
runs very fast.

However, when my log files become large, my Rails site becomes slower.

Then I repeat the process of deleting the log files, and my site is
fast again.

Is this normal behavior? Is there something I can do to make
performance scale better?

Thank you,

Sharkie
Ezra Z. (Guest)
on 2006-03-24 08:33
(Received via mailing list)
On Mar 23, 2006, at 10:15 PM, Shark Fin S. wrote:

> is fast again.


You should look into a log rotator. That way your logs get swapped
out for you at timed intervals or by file size, and your site can stay
fast because it doesn't have to append text to a huge log file on
each request to your site.
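For the in-Ruby route, the standard library's Logger supports rotation out of the box. A minimal sketch (the file name and the size/count limits here are illustrative, not anything Rails sets up for you):

```ruby
require 'logger'

# Size-based rotation: keep up to 5 old files, each capped at ~10 MB.
logger = Logger.new('production.log', 5, 10 * 1024 * 1024)

# Or rotate on a schedule instead: 'daily', 'weekly', or 'monthly'.
# logger = Logger.new('production.log', 'daily')

logger.info('request served')
```

In a Rails app you would point the framework's logger at an instance like this rather than creating your own, but the rotation parameters work the same way.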


-Ezra
Shark Fin S. (Guest)
on 2006-03-24 11:45
(Received via mailing list)
So is it true that appending text to a larger file takes longer than
appending to a smaller one? I am asking because I am wondering whether
the slowness really comes from my large log files.

Thank you,

Sharkie
Jay L. (Guest)
on 2006-03-24 15:28
(Received via mailing list)
On Fri, 24 Mar 2006 16:43:23 +0700, Shark Fin S. wrote:

> So is it true that appending text to a larger file takes longer than
> to a smaller file? The reason I am asking is, I am wondering whether
> the slowness really comes from my large log files.

That depends on what file system you're using.  In general, any
improvement in a [sequential, random, keyed] [read, write, update,
create, delete] operation will result in a slowdown in one of the
other operation types.  Different file systems optimize for different
use cases.

And it's not always linear.  I remember a file system that would work
fine up to a certain file length and then hit a dramatic slowdown as
it reorganized its two-level tree into a three-level tree.  HPFS used
to have great difficulty with directories that contained more than a
thousand or so files, to the point where sendmail became a write-only
application, because it could never work through the queue.

I'm no expert on current file systems like ext3, XFS, or Reiser, but
if you would prefer not to rotate logs, you should look into these
and see which is optimal for your use.
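One quick way to check whether appends actually slow down with file size on your own system is a rough benchmark like the one below (the file sizes and iteration counts are arbitrary, and the timing is unscientific, but it answers the immediate question):

```ruby
require 'benchmark'
require 'tempfile'

# Create one empty file and one pre-filled ~5 MB file.
small = Tempfile.new('small')
large = Tempfile.new('large')
large.write('x' * 5_000_000)
large.flush

line = "Processing FooController#index ...\n"

# Time 1000 open-append-close cycles against each file.
t_small = Benchmark.realtime do
  1000.times { File.open(small.path, 'a') { |f| f.write(line) } }
end
t_large = Benchmark.realtime do
  1000.times { File.open(large.path, 'a') { |f| f.write(line) } }
end

puts format('small: %.4fs  large: %.4fs', t_small, t_large)
```

On most common file systems the two numbers come out close, since an append writes at the end of the file without rereading it; if they diverge badly on yours, the log file size really is the culprit.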

Jay L.
This topic is locked and cannot be replied to.