On Tue, Jun 9, 2009 at 2:31 PM, James G. wrote:
> I’m very much a Fiber newbie, so forgive my dumb questions, but…
> Fibers don’t really give true concurrency either, right?
Correct. Fibers are coroutines which can cooperatively switch between
each other. However, they can provide an excellent mechanism for modeling
concurrent I/O where your concurrency primitives spend most of their
time sleeping, waiting for I/O events to happen, which is what’s being done
here.
Revactor is a library which provides an Erlang-like actor model which uses
fibers as the underlying concurrency primitive (although in Erlang actors
are lightweight processes, and in MenTaLguY’s thread-based actor library
Omnibus actors are actually threads).
The real advantage of this approach is a synchronous facade on top of
what is underneath a fully asynchronous event system. Revactor is built on
top of Rev, which is an asynchronous event library that uses libev to do event
handling and I/O multiplexing.
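The cooperative-switching behavior itself needs nothing beyond Ruby 1.9’s built-in Fiber class; here’s a minimal stdlib-only sketch (no Revactor involved):

```ruby
# A fiber runs only when resumed, and hands control back with
# Fiber.yield -- nothing pre-empts it in between.
fiber = Fiber.new do
  Fiber.yield :waiting_on_io  # suspend, passing a value to the caller
  :io_complete                # returned by the final resume
end

first  = fiber.resume  # runs until the yield => :waiting_on_io
second = fiber.resume  # runs to the end     => :io_complete
```

Each resume call runs the fiber until it yields or finishes, which is exactly the hook an event loop needs to park a “sleeping” connection.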
If you look at the code for the concurrent HTTP fetcher in Revactor:
…it’s extremely clean compared to the twisted (excuse the pun) mess of
inverted control constructs you’d get in a framework like EventMachine or
Twisted. Making an HTTP request is as simple as a single
Actor::HttpClient.get call.
HTTP is a synchronous request/response protocol, so it makes much more sense
to model it as such.
When you call “get”, it sends the request to the server, then suspends the
current Fiber (which waits for the response data to get streamed back to its
inbox). This allows any other Fibers who have incoming data to process it.
The Actor::HttpClient.get method thus effectively “blocks” until the
response body has been consumed.
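That suspend/resume dance can be sketched with plain stdlib Fibers (the class and method names here are hypothetical, not the actual Revactor API):

```ruby
# Hypothetical stdlib-only sketch: receive suspends the fiber, and the
# event loop wakes it by resuming with the delivered message.
class ToyActor
  attr_reader :result

  def initialize(&body)
    @fiber = Fiber.new { @result = body.call(self) }
  end

  def start
    @fiber.resume            # run the body until it first suspends
  end

  def receive
    Fiber.yield              # "block" this fiber until a message arrives
  end

  def deliver(msg)
    @fiber.resume(msg)       # wake the fiber; receive returns msg
  end
end

actor = ToyActor.new do |a|
  response = a.receive       # reads like a blocking call
  "got #{response}"
end

actor.start                  # actor parks inside receive
actor.deliver("200 OK")      # "response" arrives; fiber runs to completion
```

From inside the block, receive looks synchronous, but while this actor is parked the event loop is free to resume any other actor whose data is ready.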
> I’m not understanding how creating 64 of them speeds things up.
You don’t need to use Fibers here. You could write everything fully
asynchronously and not need fibers at all.
Or you could use threads! On 1.8, threads are nice and lightweight, but the
I/O performance sucks and net/http and threads get kind of nasty together. On
1.9, threads are much slower, but the I/O performance is better because they
can actually make blocking system calls.
Revactor is using Fibers to give you the best of both worlds: you can do
concurrent I/O as if it were synchronous/threaded while leveraging the
performance benefits of Ruby 1.9 (and libev), and your underlying concurrency
primitive is nice and lightweight.
Unlike threads, messaging is baked in, and it’s fully asynchronous, unlike the
Queue class. This allows parts of your program to “fire and forget”
messages to other parts of the system, and those other parts can consume those
messages when they’re ready.
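A fire-and-forget mailbox can be sketched in a few lines (again stdlib-only and hypothetical, not Revactor’s actual implementation): sends just append and return immediately, and the consumer drains whatever has piled up whenever it next gets to run:

```ruby
# Sends never block: they append to the inbox and return. The consumer
# fiber processes the backlog only when it is resumed.
class Mailbox
  def initialize
    @messages = []
  end

  def send_message(msg)   # fire and forget
    @messages << msg
  end

  def drain               # take everything accumulated so far
    drained, @messages = @messages, []
    drained
  end
end

inbox = Mailbox.new
inbox.send_message(:fetch_a)  # senders move on immediately
inbox.send_message(:fetch_b)

consumer = Fiber.new { inbox.drain }
received = consumer.resume    # consumer handles both when ready
```

Contrast with Queue#pop, which would block a whole thread until something arrived; here the sender never waits, and the consumer only runs when the scheduler resumes it.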