I am using ferret 0.10.9. I have indexed a whole set of data using the
standard tokenizer and stem filter. It stems English text well, but
when I enter any non-English character in a query, the whole thing
crashes, although the index doesn't get corrupted. Instead of crashing,
it should at least return no results. Am I missing something?