Pessimistic locking locks the entire table?

Hi guys,

I must be missing something obvious with pessimistic locking. Here is
what I do (Rails 2.3, MySQL):

In my Foo model:

def test
  Foo.transaction do
    # SELECT ... FOR UPDATE on the matching UpdateLockForFoo row
    lock = UpdateLockForFoo.find_by_foo_id(self.id, :lock => true)
    sleep 30
  end
end

For each Foo model, I have an UpdateLockForFoo. This “lock” needs to
be acquired before doing any changes to Foo.

I believe it works, except that my entire UpdateLockForFoo table is
blocked during the sleep. If I open two terminals and do:

1st terminal:
f = Foo.find(1)
f.test

2nd terminal:
f = Foo.find(2)
f.test

The 2nd terminal waits 30s before doing what it has to do. My
understanding was that :lock => true only locks a specific record, but
it seems that is not the case: it locks the entire table.

Do you know why?

Please note: it might not look like a good idea to have this separate
UpdateLockForFoo model but I am just simplifying the whole thing here
(I need this second model).

Thanks!

Hi Scott,

Thanks. In case it helps others, it is explained here:

mysql - InnoDB SELECT ... FOR UPDATE statement locking all rows in a table - Stack Overflow

Setting the transaction isolation level to Read Committed did not work
for me, but creating an index on UpdateLockForFoo did the trick.

Thanks!

On Apr 2, 2012, at 4:33 PM, PierreW wrote:

My
understanding was that :lock => true only locks a specific record, but
it seems that is not the case: it locks the entire table.

That depends on what kind of locking the underlying database offers,
which with MySQL depends on the storage engine you’re using.


Scott R.
[email protected]
http://www.elevated-dev.com/
(303) 722-0567 voice

On Apr 3, 7:52pm, PierreW [email protected] wrote:

Hi Scott,

Thanks. In case it helps others, it is explained here:

mysql - InnoDB SELECT ... FOR UPDATE statement locking all rows in a table - Stack Overflow

Setting the transaction isolation level to Read Committed did not work
for me, but creating an index on UpdateLockForFoo did the trick.

InnoDB’s row-level locks are actually index-range locks: if there is
no index on the column you are using to select the records to lock,
this will result in locking the whole table.
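
For the record, with :lock => true the finder issues a SELECT ... FOR
UPDATE, something like:

SELECT * FROM update_lock_for_foos WHERE (foo_id = 1) LIMIT 1 FOR UPDATE

and without an index on foo_id InnoDB has to scan (and therefore lock)
every row to evaluate that WHERE clause. A migration along these lines
should be all that is needed (the table name assumes the default
mapping from the model):

class AddFooIdIndexToUpdateLockForFoos < ActiveRecord::Migration
  def self.up
    add_index :update_lock_for_foos, :foo_id
  end

  def self.down
    remove_index :update_lock_for_foos, :foo_id
  end
end

With the index in place the FOR UPDATE only locks the matching index
record, so the two terminals in the original example no longer block
each other.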

Fred

wam r. wrote in post #1054714:

Hi guys,

I must be missing something obvious with pessimistic locking. Here is
what I do (Rails 2.3, MySQL):

So is there a reasonable use case for pessimistic locking on a web
application? That seems insane to me.

I don’t know if using pessimistic locking is the best way to do it,
but here is why I used this:

  • every X hours a daemon runs and updates records
  • the thing is, this daemon’s run can take Y hours, with Y > X

So there is a risk that two instances of the daemon try to update the
same record, so each daemon needs to acquire the lock before it can do
anything on a given record.

I guess an alternative could be to try to make sure that a daemon only
starts if the previous one has finished, but this was not an option in
my case.

Adding an index was actually fairly easy.
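
In outline, each daemon run does something like this (just a sketch of
the pattern above; perform_update stands in for the real long-running
work):

Foo.find(:all).each do |foo|
  Foo.transaction do
    # blocks here if another daemon instance holds the lock for this record
    UpdateLockForFoo.find_by_foo_id(foo.id, :lock => true)
    foo.perform_update   # hypothetical long-running update
  end
end

The row lock is released when the transaction commits, so a second
daemon instance simply waits its turn for that one record instead of
clobbering it.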

wam r. wrote in post #1054916:

I don’t know if using pessimistic locking is the best way to do it,
but here is why I used this:

  • every X hours a daemon runs and updates records
  • the thing is, this daemon’s run can take Y hours, with Y > X

So there is a risk that two instances of the daemon try to update the
same record, so each daemon needs to acquire the lock before it can do
anything on a given record.

I guess an alternative could be to try to make sure that a daemon only
starts if the previous one has finished, but this was not an option in
my case.

Adding an index was actually fairly easy.

Thanks for the clarification. I hope I didn’t sound condescending in my
previous reply. I was asking because I was interested to know of a case
where pessimistic locking might be useful in a web environment.

This does make some sense in that case. Have you attempted to calculate
the performance effect on user driven queries while the daemon is
performing this pessimistic locking batch update? It would be
interesting to know whether the overhead of acquiring the locks would be
significant compared to using optimistic locking and handling the likely
few unresolvable conflicts that might arise.

I’d imagine that most optimistic locking issues could be resolved by
catching the optimistic failure, merging the changes and rewriting. Of
course that leaves the possibility of a conflict on changes to the same
field, where another strategy might be needed, something like “last
write wins” or “user changes override daemon changes” (or vice versa).
In a worst case scenario unresolvable conflicts might just have to be
recorded and skipped over by the daemon process.
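
Roughly like this (assuming foos has a lock_version column so
ActiveRecord’s optimistic locking is in effect; daemon_changes is a
placeholder for whatever the daemon wants to write):

retries = 0
begin
  foo = Foo.find(foo_id)
  foo.attributes = daemon_changes   # re-applied on every retry
  foo.save!                         # raises if someone else saved first
rescue ActiveRecord::StaleObjectError
  retries += 1
  retry if retries < 3
  raise   # give up: record the conflict and let the daemon skip it
end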

This might sound “bad” on the surface, but when compared with
conflicting changes between two actual users it’s likely not as big a
deal as it seems. In the end it might actually turn out to be safer than
introducing the possibility of deadlocking the database due to a
pessimistic locking scheme.

Of course this all depends on the specific nature of the app (i.e.
whether user changes are fairly isolated, or whether multiple users
often manipulate the same data).

In the end though it all comes down to metrics. It’s far too easy to
spend too much time optimizing only to find out later that the time
spent gained you almost nothing.

On Apr 4, 12:47am, PierreW [email protected] wrote:

I guess an alternative could be to try to make sure that a daemon only
starts if the previous one has finished, but this was not an option in
my case.

You might want to look at separating the concept of the SQL lock
(which does have costs, can land you in deadlocks, etc.) from the
application-level lock.

You could have a locked column (better, a locked_since column) that
indicates that the row is locked. Use optimistic or pessimistic
locking to update the locked_since column, and make your update code
ignore records with a non-null locked_since.
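
A minimal sketch of that idea (assuming a locked_since datetime column
on foos; the one-hour staleness cutoff is arbitrary):

# Returns true only if we won the lock: the single conditional UPDATE
# is atomic, so two daemons cannot both succeed for the same row.
def acquire_app_lock(foo_id)
  Foo.update_all(
    ["locked_since = ?", Time.now.utc],
    ["id = ? AND (locked_since IS NULL OR locked_since < ?)",
     foo_id, 1.hour.ago.utc]
  ) == 1
end

def release_app_lock(foo_id)
  Foo.update_all("locked_since = NULL", ["id = ?", foo_id])
end

The daemon then skips any record for which acquire_app_lock returns
false, and no SQL lock is held for the duration of the long-running
work.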

Fred

What happens when a 2nd process tries to write to a record/table that is
locked? Does it stall until the lock is released or
does it throw an exception?

This is similar to an application I am working on. In my case, all
updates to the database, with one exception, can be done atomically.
For example, use SQL UPDATE to increment a field rather than read with
pure Rails/ActiveRecord, increment the model instance, and write. The
one exception is adding new instances of a model with several
associations.
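
For instance, the increment case can stay a single statement (hits and
foo_id are just placeholders):

# a single atomic UPDATE statement; no window between reading the old
# value and writing the new one
Foo.increment_counter(:hits, foo_id)

# versus the read-modify-write version, which can lose concurrent updates
foo = Foo.find(foo_id)
foo.hits += 1
foo.save!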

My solution is that all problematic actions are done with ONE Resque
worker that runs forever. The cron jobs are Ruby (not Rails) programs
that enqueue a task to the Resque worker. This has worked very well. I
was pleasantly surprised how easily it went together. The one con is
that Resque workers are not good about reporting exceptions and other
problems, and in the development environment they reload properly
after most, but not all, changes. So in the development environment,
if there are problems or I am making big and deep changes, I will stop
the Resque worker and run the problematic code with script/runner or
whatever the Rails 3 equivalent is.
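
For what it’s worth, the enqueueing side can be this small (job and
queue names are placeholders):

# cron-driven plain Ruby script: needs only the resque gem and Redis
require 'resque'

class BatchUpdateJob
  @queue = :batch_updates

  # run by the single long-running worker, so two jobs touching the
  # same record can never overlap
  def self.perform(foo_id)
    # the real update logic lives in the Rails app loaded by the worker
  end
end

Resque.enqueue(BatchUpdateJob, 42)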

HTH,
Jeffrey

On Wednesday, November 21, 2012 7:24:05 PM UTC, jmcguckin wrote:

What happens when a 2nd process tries to write to a record/table that is
locked? Does it stall until the lock is released or
does it throw an exception?

It will block until the lock is released. If the lock is never released,
depending on your database settings, it will give up at some point and
throw an exception.
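
For reference, with InnoDB the wait is bounded by
innodb_lock_wait_timeout (50 seconds by default); when it expires the
statement fails with "Lock wait timeout exceeded", which ActiveRecord
raises as ActiveRecord::StatementInvalid, so a caller can rescue it:

begin
  Foo.transaction do
    UpdateLockForFoo.find_by_foo_id(foo.id, :lock => true)
    # ... work done while the row lock is held ...
  end
rescue ActiveRecord::StatementInvalid => e
  raise unless e.message =~ /Lock wait timeout exceeded/
  # hypothetical handling: log the timeout and move on to the next record
end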

Fred