Whoops. Reposting to all.
On Saturday 26 July 2008 20:43:37 Steven P. wrote:
From: David M. [mailto:[email protected]]
Keep in mind, I don’t care about implementation at this
point, but design.
Well, we’re together on that, though, of course, people’s opinions of design differ. That’s part of why I posted. (The other reason is to try to figure out …)
release(s).push(1, 2, 3, 4, 5) => nil
What does “release” do, in this context? And why not make it a method on the actor itself?
I didn’t make it a method because I didn’t want it to always have to be
there, i.e., I wanted to be able to use
s.push in the simple case (as opposed to s.sync.push and s.async.push).
I was thinking it would be easier to be able to do s.sync.push (or
s.async.push), and still have the semantics you want, as in:
released_s = s.release
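One way the two spellings could coexist – a sketch only, with every name here (Views, #async, #drain) my own assumption rather than either library’s API: the #sync view just runs the call in place, while the #async view queues it on a worker thread and returns immediately.

```ruby
# Sketch (assumed names): synchronous calls via #sync, queued calls via #async.
class Views
  def initialize(obj)
    @obj = obj
    @queue = Queue.new
    @worker = Thread.new do
      while (job = @queue.pop)   # a nil job shuts the worker down
        job.call
      end
    end
  end

  # Synchronous view: just the wrapped object itself.
  def sync
    @obj
  end

  # Asynchronous view: every call is queued and returns nil at once.
  def async
    obj, queue = @obj, @queue
    view = Object.new
    view.define_singleton_method(:method_missing) do |name, *args, &blk|
      queue << proc { obj.public_send(name, *args, &blk) }
      nil
    end
    view
  end

  def drain
    @queue << nil
    @worker.join
  end
end

s = Views.new([])
s.async.push(1)   # returns immediately
s.async.push(2)
s.drain
s.sync.length     # => 2
```

The plain `s.push` spelling stays free for whichever mode is the default; the other mode costs one extra word.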
In fact, that’s part of where my syntax came from – the object returned by every method call is a “ReturnPath” object, which can then be used to control what happens after the call. That’s why I have things like this:
What I’m thinking now is that I should be returning futures instead, so I can keep the asynchronous-by-default behavior, but no extra effort is needed to use things synchronously.
The one danger here is (again) exception handling. If something goes wrong, I don’t know when I actually make the method call; I know when I check the future – and one of the appeals of the design is that if I don’t check the future, it’s a blind call.
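That trade-off can be sketched with a toy future (this Future class is my assumption, not dramatis code): the call records either a value or an exception, and the exception only surfaces when – and if – the future is checked.

```ruby
# Sketch (assumed names): a future carrying either a result or an error.
# The error is raised only when #value is read; a never-read future is a
# blind call whose failure is silently dropped.
class Future
  def initialize
    @queue = Queue.new
  end

  def fulfill(value)
    @queue << [:ok, value]
  end

  def fail(error)
    @queue << [:error, error]
  end

  def value
    @result ||= @queue.pop           # block until fulfilled, then memoize
    kind, payload = @result
    raise payload if kind == :error  # failure surfaces here, not at the call
    payload
  end
end

f = Future.new
Thread.new { f.fail(RuntimeError.new("boom")) }
begin
  f.value                # the exception appears at the check...
rescue RuntimeError => e
  puts "surfaced late: #{e.message}"
end
# ...but if f.value were never read, the failure would vanish silently.
```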
With that constraint, I didn’t want to make it a method because it impinges
on the namespace of the serial behavior of the actor, i.e., if sync is the
default, and you have to say s.async to get async, you can’t (easily) use an
#async method on the actor itself.
Very early on, I realized I was going to end up doing this. I’d much rather pollute the actor’s namespace than the kernel namespace, and there are two assumptions being made here: first, that most actors will be written for that purpose, and second, that there would be some sort of standard override – some #actor_send method.
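The namespace point can be sketched like this – #actor_send is the name suggested above, but everything else here (the Proxy class, the :wrapped op) is a made-up illustration:

```ruby
# Sketch: one reserved name (#actor_send) for proxy-level operations, so
# nothing else collides with the wrapped object's own methods.
class Proxy
  def initialize(obj)
    @obj = obj
  end

  # The single reserved method; the :wrapped op is invented for this demo.
  def actor_send(op, *args)
    case op
    when :wrapped then @obj
    else raise ArgumentError, "unknown proxy op #{op.inspect}"
    end
  end

  def method_missing(name, *args, &block)
    @obj.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @obj.respond_to?(name, include_private) || super
  end
end

proxy = Proxy.new([1, 2, 3])
proxy.length                 # forwarded to the array => 3
proxy.actor_send(:wrapped)   # proxy-level operation => [1, 2, 3]
```

An actor that happens to define its own #actor_send loses only that one name, rather than #sync, #async, #release, and whatever else the runtime might want.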
So far, though, I haven’t actually modified the real objects, only the proxies.
I would much rather use GC, if it would work. I’m not sure
how to make GC work
here, though – and certainly not for one thread/actor.
Yeah: you could have a problem with the thread-per-actor because it might
not be clear when the actor is not actually doing anything (it can’t be GC’d
while it’s doing something)?
Well, I would love for Ruby to GC them on their own.
I do want to know how “async by default” was painful, though.
I really want code that looks serial to do the right serial thing, even if
the objects are actors. So far, this works in dramatis.
But I also want parallel code to not only be easy to write, I want it to be as natural as serial code.
Brings up selective receive again, though. Can the calling actor receive any
other messages while it’s waiting for #now?
In short, no. The implementation is absurdly simple – I believe it’s
something like 100 lines of code and 200 lines of specs.
So, that said, here’s some relevant code:
def initialize obj
  @queue = Queue.new
  @thread = Thread.new do
    while (message = @queue.pop)  # a nil message breaks the loop
      message.call obj            # each message is a block
    end
  end
end
The messages sent are actually blocks. Specifically:
def thread_eval &block
  @queue << block
end
And, predictably, the main usage is:
def method_missing *arguments, &block
  ReturnPath.new.tap do |path|
    thread_eval do |obj|
      path.value = obj.public_send(*arguments, &block)
    end
  end
end
So, in short, nothing can happen in that thread outside the loop. The thread will block calling that method on the object (indirectly). So if the object itself ever blocks, the entire thread is blocked.
Messages can be sent while this happens, but they will be queued.
This was, in fact, the whole point – from beginning to end of the
call, nothing else may interfere. Within the object itself, there is no
concurrency, and you don’t have to think about concurrency.
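Putting the fragments above together into one self-contained sketch (the Actor class name, #stop, and this ReturnPath implementation are my assumptions, not dramatis or the posted code):

```ruby
# Sketch only: the names Actor, #stop, and this ReturnPath are assumed.
class ReturnPath
  def initialize
    @queue = Queue.new
  end

  def value=(v)
    @queue << v
  end

  def value
    (@result ||= [@queue.pop])[0]  # block until set, then memoize
  end
end

class Actor
  def initialize(obj)
    @queue = Queue.new
    @thread = Thread.new do
      while (message = @queue.pop)  # nil stops the loop
        message.call(obj)           # each message is a block
      end
    end
  end

  def thread_eval(&block)
    @queue << block
  end

  def stop
    @queue << nil
    @thread.join
  end

  def method_missing(*arguments, &block)
    ReturnPath.new.tap do |path|
      thread_eval do |obj|
        path.value = obj.public_send(*arguments, &block)
      end
    end
  end
end

# Every call returns a ReturnPath immediately; reading #value blocks.
a = Actor.new([])
a.push(1)           # queued; return value ignored (a "blind call")
a.push(2)
p a.length.value    # => 2 (blocks until the queue drains this far)
a.stop
```

Messages sent while the object is busy simply queue up, which is exactly the serialization property described above.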
But this introduces a big difference between serial and actor code even in
the rpc case, which I don’t like.
In single-threaded code, it’s easy – it’s up to the caller.
Right. There’s no ambiguity. No choice. Here there’s a choice. As soon as
you have multiple actors, you have multiple stacks and in theory you can
send the exception up either stack. I have cases where both are useful, but I don’t have any way of making the runtime figure out the right way to handle things except making it explicit.
I see it as more a semantic problem – I started this because I like working with sequential Ruby, and I want to keep most of the semantics of that. But concurrency does require at least thinking in a different way…
And how are we catching this, then? A method, maybe – something like …?
Something similar to that. More likely I’ll provide a method that takes a
block: if you want to catch an exception signal (using Erlang terminology),
the actor calls this method with the block that it wants to get the signal.
That block will be called when the signal is received, in which case the
recipient won’t be killed. This is more or less what Erlang does (I forget
the BIF you call to do this.)
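In Ruby, that shape might look like the following sketch – #trap_exit is borrowed from Erlang’s terminology, and everything here is my own illustration, not either library’s API:

```ruby
# Sketch: an actor that either traps exception signals with a registered
# block or dies (re-raises) when one arrives. All names are invented.
class SignalingActor
  def initialize
    @handler = nil
  end

  # Register the block that should receive exception signals.
  def trap_exit(&block)
    @handler = block
  end

  # Deliver a signal: handled if a block was registered, fatal otherwise.
  def deliver(error)
    if @handler
      @handler.call(error)  # runs in the deliverer's context in this sketch
      :handled
    else
      raise error           # no handler: the recipient "dies"
    end
  end
end

a = SignalingActor.new
a.trap_exit { |e| puts "trapped: #{e.message}" }
a.deliver(RuntimeError.new("boom"))   # actor survives
```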
I can see that – one advantage is, no pollution of the actor’s own namespace.
In what context would it run?
This is getting pretty deep into the guts. I started a list a few weeks ago
for people discussing actor issues across languages/implementations:
http://groups.google.com/group/actor-talk. Would it make more sense to do
this there? There’s also a list for dramatis
(http://groups.google.com/group/dramatis) but if you just want to compare,
actor-talk is probably better.
I want to compare, at first, and learn. Dramatis looks more complete, but I like my syntax better (hey, I’m biased) – ultimately, I’d rather not duplicate code. (Unless I dig deeper and find myself hating yours, in which case, it’s on! :P)
I am specifically interested in doing this in Ruby, so I don’t think it’s entirely off-topic for ruby-talk, either.