On Wed, 2007-07-18 at 10:32 +0900, John C. wrote:
> And yes… I did, as always, measure before I optimized, and my Queue
> of Array trick did speed things up measurably.
Hmm, did you just mean sending batches of objects as arrays, rather than
sending individual objects? That would make sense, and it's a reasonable
optimization, provided you're careful not to touch the array on the
sending side once it's been added to the queue. I'd just finished
reading the DDJ article and was imagining something more elaborate.
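If that's the trick, a minimal sketch of the idea might look like the
following (the class and method names are mine, not John's actual code):
push whole Arrays onto the Queue to amortize the per-item
synchronization cost, and drop the sender's reference to each batch once
it has been handed off.

```ruby
# Batch objects into arrays before enqueueing them. The sender must
# never touch a batch again after pushing it onto the queue.
class BatchingSender
  def initialize(queue, batch_size = 100)
    @queue = queue
    @batch_size = batch_size
    @batch = []
  end

  def send_obj(obj)
    @batch << obj
    flush if @batch.size >= @batch_size
  end

  def flush
    return if @batch.empty?
    @queue.push(@batch)  # hand off the whole array...
    @batch = []          # ...then drop our reference to it
  end
end

q = Queue.new
sender = BatchingSender.new(q, 3)
(1..4).each { |n| sender.send_obj(n) }
sender.flush

# The receiver pops arrays and iterates over them, rather than
# popping one object at a time.
received = []
received.concat(q.pop) until q.empty?
```

The one synchronized push per batch replaces one push per object, which
is where the speedup would come from.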
> > Depends on the version of Ruby. It certainly is if you're using
> > fastthread. If he's not using fastthread, I'd recommend giving it a
> > try.
> I'm not using fastthread since it caused code that had been working
> fine for several years to curl up and die. So I disabled it.
It might be worth looking into why – if it doesn’t work with
fastthread, it’s unlikely to work with future versions of Ruby
(including 1.8.6 or later) or alternate Ruby implementations like JRuby.
Absent fastthread bugs (which do crop up occasionally, though rarely at
this point), there are two main sources of problems:
1. Code that tries to manipulate the internal implementation of
thread.rb objects (Mutex, ConditionVariable, Queue, etc.); not always a
bug, but it isn't portable or future-proof.

2. Code which has existing concurrency bugs that show up more clearly
when the scheduling behavior changes.
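A contrived sketch of the first category (the `@que` instance variable
is how MRI 1.8's pure-Ruby thread.rb happened to store Queue's items;
that is exactly the sort of detail that differs under fastthread, later
MRI versions, and JRuby):

```ruby
q = Queue.new
q.push(:job)

# Portable: only the public API is used.
front = q.pop
q.push(front)

# Non-portable: reaching into what 1.8's thread.rb happened to call
# @que. Under fastthread (a C extension) or on JRuby there is no such
# instance variable, so this returns nil instead of the backing Array,
# and any code that depended on it breaks.
internal = q.instance_variable_get(:@que)
```

Code in that style works only by accident of one implementation, which
is why it tends to die the moment the implementation changes underneath
it.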
> > "Spin Buffers" are snake oil; they get their "speed" at the expense
> > of correctness – the implementation given in the DDJ article is not
> > correct.
> Well, that was a bit of a problem with the article… it didn't
> actually include the code, so I can't say one way or the other on
> that. Any pointers as to what the incorrect bit is?
The code is included in the source archive on the DDJ ftp site as
spin.txt (and spin.zip, which has his test harness):
It’s not just a one-line bug, but a systemic problem: the author assumes
that the only reason he needs synchronization is to prevent two threads
from modifying the same data at the same time (hence the elaborate dance
with the buffers); in reality, however, it’s also important to protect
the code from compiler (and CPU) optimizations which will alter its
behavior in undesirable ways if it is used in a multi-threaded context
(e.g. it becomes possible for the reader to see the writer’s addition of
the object but see the object in an uninitialized state!). He's
basically trying to get a performance "free lunch" by not using locks.
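The hazard can be sketched like so (in Ruby purely for illustration;
MRI's global lock happens to mask the problem, but a reordering
compiler or a weakly-ordered CPU on another runtime will not):

```ruby
Item = Struct.new(:payload)

slot = nil  # shared cell, "published" without any synchronization

writer = Thread.new do
  item = Item.new
  item.payload = 42  # (1) initialize the object
  slot = item        # (2) publish it -- nothing prevents (2) from
                     # becoming visible before (1) on a weak memory model
end

reader = Thread.new do
  Thread.pass until slot  # spin until something appears in the cell
  slot.payload            # may legally observe an uninitialized object:
                          # publication visible, initialization not yet
end

writer.join
value = reader.value
```

The cure is exactly what the article was trying to avoid: a Mutex or
Queue, whose lock acquire/release doubles as the memory barrier that
guarantees (1) is visible before (2).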
Concurrency bugs are notoriously hard to find through testing, and it
doesn’t help that the author only ever tested on hardware which is
extremely forgiving about concurrency (a single hyperthreaded x86 CPU).
It also didn’t help that all his test did was count how many objects
were read from the queue.
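A count-only harness would pass even if items arrived half-initialized;
a marginally stronger check (names mine) verifies each popped item's
contents as well as the total:

```ruby
q = Queue.new

producer = Thread.new do
  10.times { |i| q.push([i, i * i]) }  # payload whose integrity is checkable
  q.push(nil)                          # end-of-stream sentinel
end

count = 0
while (pair = q.pop)
  i, square = pair
  # Verify the item's contents, not merely that something arrived.
  raise "corrupt item #{i}" unless square == i * i
  count += 1
end
producer.join
```

Even this wouldn't reliably catch the reordering bug on forgiving
hardware, but at least it tests the property that actually matters.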