Forum: Ruby simple garbage problem

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
unknown (Guest)
on 2009-01-02 10:17
(Received via mailing list)
When I run the program:

STDIN.each_line { |line| line.split("\t") }

with a big input file, the process size grows rapidly, reaching 1G in
less than 1 minute.
If I let it go, it continues to grow at this rate until the whole
system fails.
This is ruby 1.8.6 on fedora 9.

Is this a known problem?
Is there some way to work around it?
Andrew T. (Guest)
on 2009-01-02 10:31
(Received via mailing list)
On Fri, Jan 2, 2009 at 10:10 AM, <removed_email_address@domain.invalid> wrote:

> Is this a known problem?
> Is there some way to work around it?
>
>
I haven't done much input processing in ruby but I've used this with no
problems on large files:

while line = STDIN.gets
  ...
end

Just check that your file's line breaks match the line breaks
gets/each_line expect.
Also make sure you actually have line breaks (sometimes you just gotta
check)

Andrew
http://ramblingsonrails.com
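Andrew's tip about checking line breaks could be sketched like this (a hypothetical helper, not part of the original reply): peek at the first few kilobytes of the stream and report which terminator, if any, actually appears.

```ruby
# Hypothetical helper (an illustration, not from the thread): peek at the
# first few KB of an input stream and report which line terminator it
# seems to use, so you can tell whether gets/each_line will ever find one.
def detect_terminator(io, peek_bytes = 4096)
  chunk = io.read(peek_bytes) || ""
  if chunk.include?("\r\n")
    "\\r\\n (Windows)"
  elsif chunk.include?("\n")
    "\\n (Unix)"
  elsif chunk.include?("\r")
    "\\r (old Mac)"
  else
    "no line terminator in the first #{peek_bytes} bytes"
  end
end
```

You could run it as, say, `puts detect_terminator(File.open("input.txt", "rb"))` (the filename here is just a placeholder).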
Roger P. (Guest)
on 2009-01-03 00:27
> Is this a known problem?
> Is there some way to work around it?

Which version of ruby are you using?  Very old ones have some leaks.
Besides that you may be able to apply some recent patches to GC [1] and
see if they help.
-=r
[1] http://www.ruby-forum.com/topic/170608
Robert K. (Guest)
on 2009-01-04 21:30
(Received via mailing list)
On 31.12.2008 17:28, removed_email_address@domain.invalid wrote:
> When I run the program:
>
> STDIN.each_line { |line| line.split("\t") }

Is it really only this line?  How do you feed the big file to stdin?
Does the big file actually _have_ lines?  If not, you would likely see
exactly that behavior, because each_line has to buffer input until it
finds a line terminator.
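Concretely (a sketch, not from the original exchange): each_line accepts an explicit separator, so tab-delimited data with no newlines can still be consumed in small pieces by splitting on "\t" instead.

```ruby
require 'stringio'

# Sketch (an assumption, not the poster's code): pass "\t" as the
# separator so each_line yields one field at a time instead of buffering
# the whole stream while hunting for a newline that never comes.
io = StringIO.new("a\tb\tc\t")
fields = []
io.each_line("\t") { |field| fields << field.chomp("\t") }
# fields == ["a", "b", "c"]
```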

> with a big input file, the process size grows rapidly, reaching 1G in
> less than 1 minute.
> If I let it go, it continues to grow at this rate until the whole
> system fails.

What does this mean?  Does the kernel panic?  Or does the Ruby process
terminate with an error?

> This is ruby 1.8.6 on fedora 9.

You haven't accidentally switched off GC completely, have you?

> Is this a known problem?
> Is there some way to work around it?

1.8.6 is not really current any more.  I would upgrade if possible - at
least get the latest version of the package.  For more advice we have a
bit too little information, I am afraid.

When I try this with my cygwin version memory consumption stays at
roughly 3MB:

robert@fussel ~
$ perl -e 'foreach $i (1..10000000) {print $i, "--\t--\t--\t--\n";}' \
 > | ruby -e 'STDIN.each_line { |line| line.split("\t") }'

robert@fussel ~
$ ruby -v
ruby 1.8.7 (2008-08-11 patchlevel 72) [i386-cygwin]

robert@fussel ~
$

But if I read input that does not have lines, the behavior is as you
describe: memory goes up and up:

ruby -e 'STDIN.each_line { |line| line.split("\t") }' </dev/zero

Kind regards

  robert
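One way around the unbounded buffering Robert demonstrates (a sketch under the assumption that the input may lack line breaks entirely, not something proposed in the thread) is to read fixed-size chunks rather than lines, so memory use stays capped no matter what the stream contains.

```ruby
require 'stringio'

# Sketch (an assumption, not from the thread): process input in
# fixed-size chunks so memory stays bounded even when the stream has no
# newlines.  Counting tabs stands in for whatever per-field work you need.
def count_tabs(io, chunk_size = 64 * 1024)
  count = 0
  # io.read(n) returns at most n bytes and nil at EOF, so we never hold
  # more than chunk_size bytes at once, regardless of line structure
  while chunk = io.read(chunk_size)
    count += chunk.count("\t")
  end
  count
end
```

Called as `count_tabs(STDIN)`, this would stay at a few tens of KB of buffer even against /dev/zero, though it does shift the burden of reassembling fields that straddle chunk boundaries onto the caller.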
Charles Oliver N. (Guest)
on 2009-01-06 12:13
(Received via mailing list)
removed_email_address@domain.invalid wrote:
> Is this a known problem?
> Is there some way to work around it?

Might be worth trying on JRuby. We've had no such reports, and the
garbage collector is rock-solid.

- Charlie