Lock errors and segfaults

Greetings,

I’ve been using ferret with great results for a while now, but in the
last week I’ve been running into some issues.

I will occasionally see this message:

Exception Message: Lock Error occured at <except.c>:103 in xpop_context
Error occured in index.c:5368 - iw_open
Couldn’t obtain write lock when opening IndexWriter

Which is accompanied by mongrel segfaulting with this message in the
logs:

/usr/local/lib/ruby/gems/1.8/gems/ferret-0.10.13/lib/ferret/index.rb:284:
[BUG] Segmentation fault

This is only happening for one particular project within the app I
maintain; each project has its own indices.

I’ve tried deleting and rebuilding the index, but it didn’t seem to fix
the problem.

I read that newer versions of ferret have resolved locking problems for
some users. Would upgrading be a good strategy here as well?

Thanks for your help,

dan

Hi Dan,

I had these problems too with 0.10.* and they’re gone in 0.11.* (not
sure which exact version fixed them; the latest is 0.11.4 and I see no
segfaults there).

You still need to be careful when you have multiple processes writing to
the same Ferret index though (which I’m assuming you’re doing here to
get this error). It won’t segfault any more, but you can still get
write lock errors. The general advice seems to be to use DRb to provide
a single ferret object to all the processes (aaf has this in trunk now, I
believe).
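
Roughly, the idea is one long-lived process that owns the index and
exposes it over DRb. An untested sketch (the path and port here are
made up, adjust for your app):

  require 'rubygems'
  require 'ferret'
  require 'drb'

  # The one process that owns the index. Ferret::Index::Index wraps
  # reader, writer and searcher behind a single object, so only this
  # process ever takes the write lock.
  index = Ferret::Index::Index.new(:path => '/path/to/shared/index')

  DRb.start_service('druby://localhost:9010', index)
  DRb.thread.join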

John.


0.11 does change the on-disk format iirc, so you’ll need to rebuild.
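With acts_as_ferret that should just be a matter of calling
rebuild_index on each indexed model from script/console, roughly (the
model name below is just an example):

  YourModel.rebuild_index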

Let me know how you get on with the DRb process, as I’ve seen some
recycled object and invalid reference errors with my code under load.

John.

On Fri, 2007-04-20 at 07:53 -0700, dph wrote:

John,

Thanks very much for your help! I’m going to write a DRb process to
build the indexes for search today.

When you upgraded to 0.11, did you have to destroy and recreate all
of your indexes?

dan



Hi there,

I am getting this error very often on our website now that it is
experiencing heavy traffic (www.mintd.com). I’d like to address this
problem. Can you please explain what is meant by “DRb” to avoid these
write lock errors?

Many Thanks,
Lachlan

John L. wrote:

The general advice seems to be to use DRb to provide
a ferret object for multiple processes (aaf has this in trunk now I
believe).

I seem to have the same error.
Configuration:
Slackware 11 / Ruby 1.8.6 / Ferret 0.11.4 / acts_as_ferret 0.4 / Mongrel 1.0.1
I had this problem when Mongrel was started under a non-privileged user
(which, of course, had full access to the app’s directory), but no
problem when running under root…
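
A quick sanity check is to run something like this as the user Mongrel
runs under (the path is just an example; use wherever aaf keeps your
index):

  index_dir = 'index/production/project'
  puts File.writable?(index_dir)
  puts Dir.entries(index_dir).inspect   # look for leftover lock files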

On Tue, Apr 24, 2007 at 01:58:35AM +0200, Lachlan Laycock wrote:

DRb is a way for Ruby programs to communicate between processes. In
the context of Ferret it means setting up one server process that does
all the indexing (and searching). The ‘real’ application processes
(e.g. Mongrel instances) only access the index through this DRb server.

This usually eliminates the locking problems that might otherwise arise,
since only one process is ever accessing the index.

If you use acts_as_ferret, you can use the built-in DRb server:
http://projects.jkraemer.net/acts_as_ferret/wiki/DrbServer
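
If you roll your own DRb server instead (like the sketch earlier in
this thread), the client side is just a DRbObject pointing at it.
Again only a sketch, with a made-up URI and field names:

  require 'rubygems'
  require 'drb'

  DRb.start_service   # the usual client-side DRb boilerplate
  index = DRbObject.new_with_uri('druby://localhost:9010')

  # every write goes through the single server process
  index << { :title => 'hello', :content => 'world' }
  index.search_each('world') { |doc_id, score| puts "#{doc_id}: #{score}" }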

Jens



BTW, my server is for testing only, so it isn’t under any real load.