Forum: Ferret - How to do case-sensitive searches

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
Carl Y. (Guest)
on 2006-04-19 09:35
(Received via mailing list)
Forgive me if this topic has already been discussed on the list.  I
googled but couldn't find much.  I'd like to search through text for
US state abbreviations that are written in capitals.  What is the best
way to do this?  I read somewhere that tokenized fields are stored in
the index in lowercase, so I am concerned that I will lose precision.
What is the best way to store a field so that normal searches are
case-insensitive but case-sensitive searches can still be made?

Thanks,
Carl
Jens K. (Guest)
on 2006-04-23 14:44
(Received via mailing list)
Hi Carl,

On Tue, Apr 18, 2006 at 11:32:36PM -0600, Carl Y. wrote:
> Forgive me if this topic has already been discussed on the list.  I
> googled but couldn't find much.  I'd like to search through text for
> US state abbreviations that are written in capitals.  What is the best
> way to do this?  I read somewhere that tokenized fields are stored in
> the index in lowercase, so I am concerned that I will lose precision.
> What is the best way to store a field so that normal searches are
> case-insensitive but case-sensitive searches can still be made?

Are you sure this is a problem, i.e. do you get wrong hits because
the lowercase variant of an abbreviation is also used in another
context? I don't know what those abbrevs look like...

To run case-sensitive and case-insensitive searches you'd need two
fields, a tokenized one for normal case-insensitive searches, and an
untokenized one for looking up the abbreviations.

To reduce overhead in the index, you could filter the text for the
known set of abbreviations at indexing time and only store those
values in the untokenized field. Possibly this could be done in a
custom analyzer.
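
The filter-at-indexing-time idea can be sketched in plain Ruby, independent of Ferret. Here US_STATES is a small illustrative subset and extract_state_abbrevs is a hypothetical helper name; the values it returns would go into the untokenized field:

```ruby
# Small illustrative subset of US state abbreviations, not the full list.
US_STATES = %w[AL AK AZ CA CO NY TX WA].freeze

# Pull out only capitalized two-letter words that match a known
# abbreviation, so they can be stored in a separate untokenized field
# alongside the normal tokenized one.
def extract_state_abbrevs(text)
  text.scan(/\b[A-Z]{2}\b/).select { |w| US_STATES.include?(w) }.uniq
end

doc_text = "I moved from NY to WA, but my email is in ALL CAPS."
extract_state_abbrevs(doc_text)
# => ["NY", "WA"]  -- "ALL" and "CAPS" fail the known-abbreviation check
```

Checking against the known set is what keeps random capitalized words out of the case-sensitive field.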

regards,
Jens


--
webit! Gesellschaft für neue Medien mbH          www.webit.de
Dipl.-Wirtschaftsingenieur Jens Krämer 
Schnorrstraße 76                         Tel +49 351 46766  0
D-01069 Dresden                          Fax +49 351 46766 66
David B. (Guest)
on 2006-04-25 18:07
(Received via mailing list)
Hey guys,

This might help. The following code takes the input:

    "I used to live in NSW but now I live in the A.C.T."

and produces:

    token["used":2:6:2]
    token["live":10:14:2]
    token["nsw":18:21:2]
    token["NSW":18:21:0]
    token["now":26:29:2]
    token["live":32:36:2]
    token["act":44:49:3]
    token["ACT":44:49:0]

As you can see, the Australian state abbreviations have been entered
twice, once in upper case. The first two numbers are the start and end
offsets, i.e. how many bytes from the start of the text. The third
number is the position increment. So "live" occurs two positions after
"used", but "nsw" is in the same position as "NSW". Now you just have
to make sure your query parser uses the correct Analyzer.

Hope this helps.

Dave

require 'ferret'

module Ferret::Analysis
  class TokenFilter < TokenStream
    protected
      # Construct a token stream filtering the given input.
      def initialize(input)
        @input = input
      end
  end

  class StateFilter < TokenFilter
    STATES = {
      "nsw" => "new south wales",
      "vic" => "victoria",
      "qld" => "queensland",
      "tas" => "tasmania",
      "sa" => "south australia",
      "wa" => "western australia",
      "nt" => "northern territory",
      "act" => "australian capital territory"
    }
    # Returns the next token from the stream. When the current token is a
    # known state abbreviation, an upper case copy is buffered and emitted
    # on the following call with a position increment of 0, so both
    # variants share the same position in the index.
    def next()
      if @state
        t = @state
        @state = nil
        return t
      end
      t = @input.next
      return nil if t.nil?
      if STATES[t.text]
        @state = Token.new(t.text.upcase, t.start_offset, t.end_offset, 0)
      end
      return t
    end
  end

  class StateAnalyzer < StandardAnalyzer
    def token_stream(field, text)
      StateFilter.new(super)
    end
  end

end

analyzer = Ferret::Analysis::StateAnalyzer.new

ts = analyzer.token_stream(nil,
  "I used to live in NSW but now I live in the A.C.T.")

while t = ts.next
  puts t
end
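
The duplicate-token behaviour above can also be seen without installing the Ferret gem. This is a Ferret-free sketch: Token here is a plain Struct standing in for Ferret::Analysis::Token, and with_state_variants is an illustrative name, not Ferret API:

```ruby
# Stand-in for Ferret::Analysis::Token: text, offsets, position increment.
Token = Struct.new(:text, :start_offset, :end_offset, :pos_inc)

STATES = %w[nsw vic qld tas sa wa nt act].freeze

# For each lowercase token that is a known state abbreviation, also emit
# an upper case copy with pos_inc == 0, i.e. at the same position.
def with_state_variants(tokens)
  tokens.flat_map do |t|
    if STATES.include?(t.text)
      [t, Token.new(t.text.upcase, t.start_offset, t.end_offset, 0)]
    else
      [t]
    end
  end
end

input = [Token.new("live", 10, 14, 1), Token.new("nsw", 18, 21, 2)]
with_state_variants(input).map { |t| [t.text, t.pos_inc] }
# => [["live", 1], ["nsw", 2], ["NSW", 0]]
```

Because "NSW" sits at the same position as "nsw", a case-insensitive query still matches the lowercase token, while a case-sensitive lookup can target the upper case variant. On the query side, Ferret's query parser would need to be handed the same analyzer so search-time tokenization matches index-time tokenization.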