Forum: Ruby Tokenizing a large file

Don Wood (tinnidril)
on 2009-04-15 18:48
I have a large file that I need to tokenize.  The method I am using now
is fast, but eats up a ton of memory by reading in the entire file first
as a String.  I would also like to reuse existing tokens for duplicates.
(I have no control over the file format, but this Regex works well for
what I need.)

Here is what I am doing today.

tokens= File.read(filename).scan(/'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/)

And here is what I would like to do.

tokens= []
File.open(filename) do |fh|
  fh.scan(/'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/) do |token|
    tokens << ((i = tokens.index(token)) ? tokens[i] : token)
  end
end

So what I would like to have is a scan method for File objects that
yields the tokens when called with a block, instead of returning an
array.  (It would be nice if String#scan could do this as well.)  This
isn’t a big issue, it just causes my machine to overflow to the swap
file periodically.  I could easily fix that with a couple DIMMs, but I
can’t help thinking that there should be a better way.
Eric Hodel (Guest)
on 2009-04-15 23:18
(Received via mailing list)
On Apr 15, 2009, at 09:48, Don Wood wrote:
> So what I would like to have is a scan method for File objects that
> yields the tokens when called with a block, instead of returning an
> array.  (It would be nice if String#scan could do this as well.)  This
> isn’t a big issue, it just causes my machine to overflow to the swap
> file periodically.  I could easily fix that with a couple DIMMs, but I
> can’t help thinking that there should be a better way.

You should look at StringScanner in strscan.rb; it'll allow you to
intern your tokens like you want.
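
For instance, you could feed the scanner the file a chunk at a time,
only trusting a match once there's enough lookahead left in the buffer.
A rough, untested sketch (the names and the CHUNK/MAX_TOKEN bounds are
made up; it assumes no single token is longer than MAX_TOKEN bytes):

require 'strscan'

TOKEN     = /'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/
CHUNK     = 64 * 1024  # read size; arbitrary
MAX_TOKEN = 4 * 1024   # assumed upper bound on token length

# Yields tokens one at a time without slurping the whole file.
# A match is held back while it sits within MAX_TOKEN bytes of the
# end of the buffer, since the next chunk might extend it.
def each_token(io)
  s = StringScanner.new("")
  loop do
    s << (io.read(CHUNK) || "")
    s.skip(/\s+/)
    while token = s.scan(TOKEN)
      if !io.eof? && s.rest_size < MAX_TOKEN
        s.unscan  # possibly incomplete; wait for more data
        break
      end
      yield token
      s.skip(/\s+/)
    end
    s.string = s.rest  # drop the consumed input, reset position
    break if io.eof? && s.eos?
  end
end

Called as File.open(filename) {|fh| each_token(fh) {|t| ... }}, it
never holds more than one buffer's worth of the file in memory.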
Joel VanderWerf (Guest)
on 2009-04-15 23:46
(Received via mailing list)
Eric Hodel wrote:
>> So what I would like to have is a scan method for File objects that
>> yields the tokens when called with a block, instead of returning an
>> array.  (It would be nice if String#scan could do this as well.)  This
>> isn’t a big issue, it just causes my machine to overflow to the swap
>> file periodically.  I could easily fix that with a couple DIMMs, but I
>> can’t help thinking that there should be a better way.
>
> You should look at StringScanner in strscan.rb, it'll allow you to
> intern your tokens like you want.

I was going to suggest that, but:

$ irb -r strscan
irb(main):001:0> StringScanner.new(File.open('tmp/t'))
TypeError: can't convert File into String
         from (irb):1:in `initialize'
         from (irb):1:in `new'
         from (irb):1

Is there some way to use StringScanner with an open file?

(also, my ruby 1.8.6 only comes with ext/strscan, not lib/strscan.rb...
maybe we're talking about different things)
Caleb Clausen (Guest)
on 2009-04-16 06:04
(Received via mailing list)
On 4/15/09, Don Wood <dwood@biped.us> wrote:
> And here is what I would like to do.
> So what I would like to have is a scan method for File objects that
> yields the tokens when called with a block, instead of returning an
> array.  (It would be nice if String#scan could do this as well.)  This
> isn’t a big issue, it just causes my machine to overflow to the swap
> file periodically.  I could easily fix that with a couple DIMMs, but I
> can’t help thinking that there should be a better way.

The sequence gem permits scanning a file directly with a regexp.
Something like this should work:

require 'rubygems'
require 'sequence'
require 'sequence/file'
tokens= []
fh=Sequence::File.new(open(filename))
until fh.eof?
  tokens << fh.scan(/'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/) # or yield the token up to the caller...
  fh.scan "\n"
end
fh.close

As I don't know your data format, I'm not sure if this is right. I'm
assuming that your tokens are separated by newlines, but if it's more
complicated than that, you will have to fiddle with the argument to
the 2nd scan. (As Sequence doesn't have String#scan's bump-a-long
behavior, you have to explicitly match the things between scanned
patterns yourself.)

Note that Sequence::File#scan will match patterns only up to a certain
size (4k bytes, I think). This is an inevitable consequence of using a
Regexp against a file; you wouldn't want arbitrary amounts of
backtracking in a 1GB+ file. Java had this restriction as well, last
time I knew (several years ago).

On the other hand, if you really do have one token per line, it will
be simpler and probably faster to use #readline to get tokens one by
one, and no special library is needed.
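
In that case, something as short as this would do (a sketch assuming
exactly one token per line):

tokens = []
File.open(filename) do |fh|
  fh.each_line do |line|
    tokens << line.strip  # one token per line; shed newline and padding
  end
end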

Joel: I think the original ruby implementation of strscan was replaced
by a c extension long ago.
Robert Klemme (Guest)
on 2009-04-16 11:28
(Received via mailing list)
2009/4/15 Don Wood <dwood@biped.us>:
> And here is what I would like to do.
>
> tokens= []
> File.open(filename) do |fh|
>  fh.scan(/'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/) do |token|
>    tokens << ((i = tokens.index(token)) ? tokens[i] : token)
>  end
> end

If tokens cannot span multiple lines:

tokens = Hash.new {|h,k| h[k.freeze] = k}
token_sequence = []

File.foreach filename do |line|
  line.scan(/'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/) do |token|
    token_sequence << tokens[token]
  end
end
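
The default block does the interning: the first lookup freezes the
token and stores it as both key and value, and every later lookup of
an equal string returns that same stored object. For instance:

pool = Hash.new {|h,k| h[k.freeze] = k}
a = pool["foo"]  # first "foo": frozen, stored as key and value
b = pool["foo"]  # a new but equal String; returns the stored object
a.equal?(b)      # => true -- all duplicates share one String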

> So what I would like to have is a scan method for File objects that
> yields the tokens when called with a block, instead of returning an
> array.  (It would be nice if String#scan could do this as well.)  This
> isn’t a big issue, it just causes my machine to overflow to the swap
> file periodically.  I could easily fix that with a couple DIMMs, but I
> can’t help thinking that there should be a better way.

Converted to the block form:

def my_tokenize file
  tokens = Hash.new {|h,k| h[k.freeze] = k}

  File.foreach file do |line|
    line.scan(/'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/) do |token|
      yield tokens[token]
    end
  end
end

my_tokenize "foo" do |token|
  puts token
end

Cheers

robert
Ryan Davis (Guest)
on 2009-04-16 11:35
(Received via mailing list)
On Apr 16, 2009, at 02:27, Robert Klemme wrote:

> Converted to the block form:
>
> def my_tokenize file
>  tokens = Hash.new {|h,k| h[k.freeze] = k}

FYI:

% irb
 >> h = {}
=> {}
 >> h["key"] = 42
=> 42
 >> h.keys.map { |k| k.frozen? }
=> [true]

hashes dup and freeze string keys to prevent them from being mutated
while they are in use as hash keys.
Robert Klemme (Guest)
on 2009-04-16 11:47
(Received via mailing list)
2009/4/16 Ryan Davis <ryand-ruby@zenspider.com>:
> % irb
>>> h = {}
> => {}
>>> h["key"] = 42
> => 42
>>> h.keys.map { |k| k.frozen? }
> => [true]
>
> hashes dup and freeze string keys to prevent them from being mutated while
> they are in use as hash keys.

Only if they are not frozen yet.

irb(main):001:0> h = {}
=> {}
irb(main):002:0> s = "abc"
=> "abc"
irb(main):003:0> h[s] = s
=> "abc"
irb(main):004:0> s = "bar".freeze
=> "bar"
irb(main):005:0> h[s] = s
=> "bar"
irb(main):006:0> h
=> {"abc"=>"abc", "bar"=>"bar"}
irb(main):007:0> h.each {|kv| p kv.map {|x| x.object_id}}
[134954550, 134972840]
[134951170, 134951170]
=> {"abc"=>"abc", "bar"=>"bar"}

Do you now know why I did it the way I did?

Cheers

robert
Don Wood (tinnidril)
on 2009-04-16 19:31
Caleb Clausen wrote:
> On 4/15/09, Don Wood <dwood@biped.us> wrote:
>> And here is what I would like to do.
>> So what I would like to have is a scan method for File objects that
>> yields the tokens when called with a block, instead of returning an
>> array.  (It would be nice if String#scan could do this as well.)  This
>> isn't a big issue, it just causes my machine to overflow to the swap
>> file periodically.  I could easily fix that with a couple DIMMs, but I
>> can't help thinking that there should be a better way.
>
> The sequence gem permits scanning a file directly with a regexp.
> Something like this should work:
>
> require 'rubygems'
> require 'sequence'
> require 'sequence/file'
> tokens= []
> fh=Sequence::File.new(open(filename))
> until fh.eof?
>   tokens << fh.scan(/'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/) # or yield the token up to the caller...
>   fh.scan "\n"
> end
> fh.close
>
> As I don't know your data format, I'm not sure if this is right. I'm
> assuming that your tokens are separated by newlines, but if it's more
> complicated than that, you will have to fiddle with the argument to
> the 2nd scan. (As Sequence doesn't have String#scan's bump-a-long
> behavior, you have to explicitly match the things between scanned
> patterns yourself.)
>
> Note that Sequence::File#scan will match patterns only up to a certain
> size (4k bytes, I think). This is an inevitable consequence of using a
> Regexp against a file; you wouldn't want arbitrary amounts of
> backtracking in a 1GB+ file. Java had this restriction as well, last
> time I knew (several years ago).

Thanks Caleb,

This looks like exactly what I needed.  I'm not sure I understand the
point of the second scan though.  The first scan should already ignore
unquoted whitespace, including "\n".  (At least that is how it currently
works when I scan a string.)  I don't think that I will get anywhere
near the per-token 4k limit.
Don Wood (tinnidril)
on 2009-04-16 19:40
Robert Klemme wrote:
> 2009/4/16 Ryan Davis <ryand-ruby@zenspider.com>:
>> % irb
>>>> h = {}
>> => {}
>>>> h["key"] = 42
>> => 42
>>>> h.keys.map { |k| k.frozen? }
>> => [true]
>>
>> hashes dupe and freeze string keys to prevent them from being mutated while
>> hash keys.
>
> Only if they are not frozen yet.
>
> irb(main):001:0> h = {}
> => {}
> irb(main):002:0> s = "abc"
> => "abc"
> irb(main):003:0> h[s] = s
> => "abc"
> irb(main):004:0> s = "bar".freeze
> => "bar"
> irb(main):005:0> h[s] = s
> => "bar"
> irb(main):006:0> h
> => {"abc"=>"abc", "bar"=>"bar"}
> irb(main):007:0> h.each {|kv| p kv.map {|x| x.object_id}}
> [134954550, 134972840]
> [134951170, 134951170]
> => {"abc"=>"abc", "bar"=>"bar"}
>
> Do you now know why I did it the way I did?
>
> Cheers
>
> robert

Thanks Robert,

I see what you did there.  This looks like the perfect solution for
finding duplicate strings quickly.  I don't want to assume that tokens
don't span lines, but combining this with Caleb's suggestion of using
the sequence gem, I should have all I need to drastically cut my memory
footprint.
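
Putting the pieces together, I expect to end up with something along
these lines (untested, and I'm guessing that Sequence#scan will take a
whitespace regexp for the separators):

require 'rubygems'
require 'sequence'
require 'sequence/file'

TOKEN = /'[^']*'|"[^"]*"|[(:)]|[^(:)\s]+/

def each_interned_token(filename)
  pool = Hash.new {|h,k| h[k.freeze] = k}  # Robert's interning hash
  fh = Sequence::File.new(open(filename))
  until fh.eof?
    token = fh.scan(TOKEN)
    yield pool[token] if token
    fh.scan(/\s+/)  # step over whatever separates the tokens
  end
ensure
  fh.close if fh
end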