I have two programs accessing the same file. Both programs open and
close the file multiple times while running (for example reading it
first, and then writing it back using global functions).
In order to avoid race conditions, I was wondering if it’s possible to
lock the file at the start of the running process and then unlock it at
the end. During that time the running process should be able to open and
close the file without problems.
There’s File#flock, but keep in mind that flock is only an advisory lock
(i.e. it only constrains cooperating processes that try to acquire a lock
themselves) and is notoriously flaky on NFS mounts.
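For what it’s worth, here is a minimal sketch of the pattern you describe using File#flock. The file names ("data.txt.lock", "data.txt") are purely illustrative, and both programs would have to cooperate by locking the same lock file:

#!/usr/bin/ruby -w
# Sketch: hold one exclusive advisory lock for the whole run.
File.open("data.txt.lock", File::CREAT | File::WRONLY) do |lock|
  lock.flock(File::LOCK_EX)      # blocks until any other process unlocks
  begin
    # While the lock is held, the data file itself can be opened and
    # closed as many times as you like.
    data = File.exist?("data.txt") ? File.read("data.txt") : ""
    File.open("data.txt", "w") { |f| f.write(data.upcase) }
  ensure
    lock.flock(File::LOCK_UN)    # also released automatically at process exit
  end
end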
I don’t see a mention of your platform. However, you might consider Ara
Howard’s lockfile.rb. It is an NFS-safe file locking library that works
well. It just doesn’t work on Windows.
If you need something that can work on both Windows and non-Windows
platforms, email me. I have a hacked and somewhat simplified version that
autodetects whether the primary locking scheme will work and, if not, falls
back to flock-based locking. It remembers the result of the autodetection,
so subsequent locks opened later in the same process don’t incur the cost
of detection again.
Ara’s library works well, and I believe he uses it extensively.
My hacked version probably works less well because it is an undertested
ugly hack, but in my (less extensive) use of it, I haven’t encountered any
problems so far.
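For reference, a minimal sketch of how Ara’s lockfile gem is typically driven, assuming the block form shown in its README (the lock name is illustrative; check the gem’s documentation for the exact options):

#!/usr/bin/ruby -w
require 'lockfile'   # gem install lockfile

# The lock is acquired (NFS-safely) before the block runs and released afterwards.
Lockfile.new('data.txt.lock') do
  # Read, modify, and write the shared file here, opening and closing it as needed.
end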
If both programs are willing to cooperate with each other, why not use a
flag file, as is often done to lock and unlock various resources in
Linux?
This approach has the advantage of being platform-neutral and doesn’t
require any explicit file-locking facilities.
#!/usr/bin/ruby -w
flag_file = "FLAGFILE"

puts "Waiting for access ..."
while FileTest.exist? flag_file
  sleep 1
end

puts "Creating flag file ..."
File.open(flag_file, "w") {}

puts "Processing ..."
sleep 5

puts "Removing flag file ..."
File.delete flag_file
There is an obvious collision possibility in this arrangement, which may
or may not be an issue.
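That window can be narrowed by creating the flag file atomically. A sketch of one way to do it, using the same illustrative FLAGFILE name: File::CREAT | File::EXCL makes the existence check and the creation a single step, so only one process can win.

#!/usr/bin/ruby -w
flag_file = "FLAGFILE"

# Atomic variant: open with O_CREAT | O_EXCL, which fails with EEXIST
# if another process has already created the flag.
begin
  File.open(flag_file, File::WRONLY | File::CREAT | File::EXCL) {}
rescue Errno::EEXIST
  sleep 1
  retry
end

puts "Processing ..."
sleep 5

File.delete flag_file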