Forum: Ruby Text parser (text into sentences) that works with UTF-8

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
mike b. (Guest)
on 2007-07-30 10:52
(Received via mailing list)
Hi all,

I have to parse about 2000 files that are written in multiple
languages (some English, some Korean, some Arabic and some Japanese).
I have to split these UTF-8 encoded files into individual sentences. Has
anyone written a good parser that can parse all these non-Latin
character languages or can someone give me some advice on how to go
about writing a parser that can handle all these fairly different
languages?

Thank you,

Mike
Robert Klemme (Guest)
on 2007-07-30 11:27
(Received via mailing list)
2007/7/30, mike b. <michael.w.bell@gmail.com>:
> I have to parse about 2000 files that are written in multiple
> languages (some English, some Korean, some Arabic and some Japanese).
> I have to split these UTF-8 encoded files into individual sentences. Has
> anyone written a good parser that can parse all these non-Latin
> character languages or can someone give me some advice on how to go
> about writing a parser that can handle all these fairly different
> languages?

I would consider doing this in Java, as Java's regular expressions
support Unicode.  That might make the job much easier.  OTOH, if all
files use only dot, question mark etc. (i.e. ASCII chars) as sentence
delimiters, then Ruby's regular expressions may well do the job.
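
If the files really do use only ASCII terminators, a split along those lines could look like the following sketch (the sample text and the terminator set `. ! ?` are assumptions for illustration, not from the thread):

```ruby
# Minimal sketch: split text into sentences on the ASCII terminators . ! ?
# A "sentence" here is a run of non-terminator characters followed by
# one or more terminators; surrounding whitespace is trimmed afterwards.
text = "First sentence. Second one! Is this the third? Yes."
sentences = text.scan(/[^.!?]+[.!?]+/).map { |s| s.strip }
# => ["First sentence.", "Second one!", "Is this the third?", "Yes."]
```

This deliberately ignores abbreviations like "e.g." and decimal points, which any real sentence splitter would have to handle separately.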

Kind regards

robert
Oblomov (Guest)
on 2007-07-30 12:02
(Received via mailing list)
On Jul 30, 11:26 am, "Robert Klemme" <shortcut...@googlemail.com>
wrote:
> I would consider doing this in Java, as Java's regular expressions
> support Unicode.  That might make the job much easier.  OTOH, if all
> files use only dot, question mark etc. (i.e. ASCII chars) as sentence
> delimiters, then Ruby's regular expressions may well do the job.

Ruby supports UTF-8 regular expressions: for example, /\w+|\W/u can be
used to scan a string, splitting it into words and non-words. There
were some bugs with Unicode character classification in older versions
of Ruby, but I'm not aware of any in 1.8.6; OTOH, I've never tried it
with non-Latin text, so I don't know whether it works correctly in
those cases too.
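
On modern Rubies the 1.8-era /u flag is largely superseded by Unicode property classes, and plain \w stays ASCII-only, so the equivalent word/non-word scan would use \p{Word} instead (a sketch; the sample string is invented):

```ruby
# Sketch of a word/non-word scan using Unicode property classes.
# \p{Word} matches letters (including CJK), digits, combining marks,
# and underscore; \p{^Word} is its negation.
tokens = "naïve 単語 test".scan(/\p{Word}+|\p{^Word}+/)
# => ["naïve", " ", "単語", " ", "test"]
```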
James G. (bbazzarrakk)
on 2007-07-30 14:54
(Received via mailing list)
On Jul 30, 2007, at 3:50 AM, mike b. wrote:

> I have to parse about 2000 files that are written in multiple
> languages (some English, some Korean, some Arabic and some Japanese).
> I have to split these UTF-8 encoded files into individual sentences.

As has been stated, Ruby's regular expression engine has a Unicode
mode and that may be all you need here, depending on how you
recognize sentence boundaries.
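
For instance, if sentence boundaries are recognized purely by terminator characters, the ASCII set can simply be widened with the full-width CJK equivalents (a sketch under that assumption; the sample text is invented):

```ruby
# Hedged sketch: sentence split that also accepts the full-width
# CJK terminators 。 ！ ？ alongside the ASCII ones.
text = "It works. 動きます。Does it? はい！"
sentences = text.scan(/[^.!?。！？]+[.!?。！？]+/).map { |s| s.strip }
# => ["It works.", "動きます。", "Does it?", "はい！"]
```

Other scripts would need the same treatment; Arabic, for example, uses its own question mark (؟, U+061F).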

> Has anyone written a good parser that can parse all these non-Latin
> character languages or can someone give me some advice on how to go
> about writing a parser that can handle all these fairly different
> languages?

I've released an initial version of my Ghost Wheel parser generator
library.  It doesn't have documentation yet, but it was built using
TDD and you should be able to look over the tests to see how it
works.  I'm also happy to answer questions.

My hope is that it works fine for non-Latin languages, but I'll
confess that I haven't tested it that way yet.  I would try to fix
any issues you uncover, though.

James Edward Gray II
This topic is locked and cannot be replied to.