Global Mutex?

Is there anything in Ruby that acts as a global mutex?

For example, across multiple Mongrels each running the same application,
is there any way for them to communicate with each other and do something
like

global_semaphore.synchronize do
  # block
end

So far I’m thinking this would involve a PID file that stores which Ruby
process currently has focus, and then you would constantly poll this
file to see when it changes so that another Ruby process can take the focus.

Is there anything like this?

Aryk G.

Aryk G. wrote:

Is there anything in Ruby that acts as a global mutex?

Between multiple processes on the same machine: I’d suggest using
File#flock on a local file (this probably won’t work over NFS) with
File::LOCK_EX. You can include File::LOCK_NB if you want to test the
lock but not block if someone else has it.
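A minimal sketch of that flock approach; the lock-file path here is an arbitrary placeholder that all cooperating processes would have to agree on:

```ruby
# Sketch of a cross-process mutex via File#flock. LOCK_PATH is a
# placeholder; it should live on a local (non-NFS) filesystem.
LOCK_PATH = "/tmp/myapp.lock"

# Blocks until the exclusive lock is free, runs the block, and
# releases the lock when the file handle is closed.
def with_global_lock
  File.open(LOCK_PATH, File::RDWR | File::CREAT, 0644) do |f|
    f.flock(File::LOCK_EX)
    yield
  end
end

# Non-blocking variant: returns false immediately if another
# process already holds the lock.
def try_global_lock
  File.open(LOCK_PATH, File::RDWR | File::CREAT, 0644) do |f|
    if f.flock(File::LOCK_EX | File::LOCK_NB)
      yield
      true
    else
      false
    end
  end
end
```

Because the lock is tied to the open file descriptor, the OS drops it automatically when the process exits, cleanly or not.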

Between multiple processes on multiple machines: probably run a DRb
server on one machine which has a local mutex and yields back to the
caller.
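A rough sketch of that DRb idea; the URI, port, and class name are placeholders for illustration, not anything from a real library:

```ruby
# One process hosts a GlobalLock object; every other process calls
# synchronize on it remotely over DRb.
require 'drb/drb'

class GlobalLock
  def initialize
    @mutex = Mutex.new
  end

  # The client's block is invoked back over the DRb connection, so if
  # the client dies, the call raises and the mutex is released by
  # Mutex#synchronize's ensure.
  def synchronize(&block)
    @mutex.synchronize(&block)
  end
end

# Server process (placeholder URI):
#   DRb.start_service("druby://0.0.0.0:9999", GlobalLock.new)
#   DRb.thread.join
#
# Client processes (placeholder host):
#   DRb.start_service
#   lock = DRbObject.new_with_uri("druby://lockhost:9999")
#   lock.synchronize { ... critical section ... }
```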

2009/8/26 Brian C. [email protected]:

Aryk, just keep in mind that you are also potentially creating a
global bottleneck at the same time… :wink:

Kind regards

robert

I actually ran into that global bottleneck issue when Ruby processes
would get KILLED mid-process, leaving the lock file behind. I added a
“cleanup” action which makes sure the PID file is not being occupied by
a process that doesn’t exist.
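A sketch of such a cleanup check, assuming the PID file simply contains the owning process ID (the file name and format here are made up for illustration):

```ruby
# Stale-lock cleanup sketch. PIDFILE is a placeholder path; the file is
# assumed to hold one line with the owner's process ID.
PIDFILE = "/tmp/myapp.pid"

def stale_lock?(path)
  return false unless File.exist?(path)
  pid = File.read(path).to_i
  return true if pid <= 0          # garbage contents: treat as stale
  Process.kill(0, pid)             # signal 0 only checks existence
  false                            # owner is still alive
rescue Errno::ESRCH
  true                             # no such process: lock is stale
rescue Errno::EPERM
  false                            # process exists, owned by another user
end

File.delete(PIDFILE) if stale_lock?(PIDFILE)
```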

Thanks for the pointers, guys. I think my solution is 98% robust.

Robert K. wrote:

2009/8/26 Brian C. [email protected]:

Aryk, just keep in mind that you are also potentially creating a
global bottleneck at the same time… :wink:

Kind regards

robert

On 11.09.2009 04:30, Aryk G. wrote:

I actually ran into that global bottleneck issue when Ruby processes
would get KILLED mid-process, leaving the lock file behind. I added a
“cleanup” action which makes sure the PID file is not being occupied by
a process that doesn’t exist.

Actually that was not what I meant. I referred to the fact that if you
have a single global lock that all concurrent processes must acquire,
then your overall performance will suffer.

Thanks for the pointers, guys. I think my solution is 98% robust.

Let’s see… That makes 365 days * 2% = 7.3 days downtime per year. :wink:

Cheers

robert

Abhinav S. wrote:

I used something like this using the “ln -s” system command. It used to
work great, so if you want a simple approach and implementation you can
use it. One catch: if the process which has the lock dies in between,
you will have to ensure locks are properly released.

Both versions I proposed don’t suffer from this problem:

  • if you flock a file, the lock is dropped when the process owning it
    dies

  • if you synchronize using ‘yield’ within a DRb server, then if the
    client dies the TCP connection will be dropped and the block will
    terminate.

I used something like this using the “ln -s” system command. It used to
work great, so if you want a simple approach and implementation you can
use it. One catch: if the process which has the lock dies in between,
you will have to ensure locks are properly released.

BTW, the “ln -s” approach worked with processes running on multiple
machines as well. But yes, all the machines had access to a small
common working directory.
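For reference, the same atomic-symlink trick can be done from Ruby directly with File.symlink instead of shelling out to `ln -s`; this is only a sketch, and the link path is a placeholder:

```ruby
# Symlink creation is atomic on POSIX filesystems and fails with
# EEXIST if the link already exists, so it can double as a lock.
# LOCK_LINK is a placeholder; the link target records the owner.
LOCK_LINK = "/tmp/myapp.lock.link"

def with_symlink_lock
  begin
    File.symlink("pid:#{Process.pid}", LOCK_LINK)
  rescue Errno::EEXIST
    return false                   # someone else holds the lock
  end
  begin
    yield
    true
  ensure
    # As the catch above says: if this process is KILLed before
    # reaching here, the stale link must be cleaned up by hand.
    File.delete(LOCK_LINK)
  end
end
```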

Thanks,
Abhinav

अभिनव
http://twitter.com/abhinav

Thanks, Brian. I have never used flock and, in fact, didn’t know that
it could handle this problem.

Thanks,
Abhinav

अभिनव
http://twitter.com/abhinav