Mblock update

I’ve merged the work-in-progress on mblocks + the usrp inband
signaling stuff into the trunk as of r5221.

The “standalone mblock” stuff is effectively done, and is usable in
its current state. By “standalone” I mean all mblocks, no
combinations of flow graphs and mblocks.

Still remaining are:

  • finishing the host and FPGA work for inband signaling
  • gluing mblocks and flow graphs together in the same system

If you’re interested in seeing what mblocks look like, there are some
examples in the QA code that show fairly typical usage (but are of
course contrived tests).

I’d suggest looking at:

mblock/src/lib/qa_bitset.cc:
examples of mblock composition and INTERNAL, EXTERNAL and RELAY ports.

mblock/src/lib/qa_disconnect.cc:
examples of reconfiguration “on the fly”

mblock/src/lib/qa_timeouts.cc:
examples of one-shot and periodic timeouts

The primary include files are:

mblock/src/lib:

mb_runtime.h
mb_mblock.h
mb_port.h
mb_message.h
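
If you want a feel for the shape of the API before opening the QA
code, here is a rough from-memory sketch of a trivial mblock subclass.
It is not lifted from the tree; details such as the constructor
arguments and the pmt_* calls may not match your checkout exactly, so
treat mb_mblock.h and mb_port.h as authoritative.

    #include <mb_runtime.h>
    #include <mb_mblock.h>
    #include <mb_port.h>
    #include <mb_message.h>
    #include <string>

    class my_src : public mb_mblock
    {
      mb_port_sptr  d_out;   // port we talk to the outside world on

    public:
      // NOTE: constructor signature is an assumption; check mb_mblock.h
      my_src(mb_runtime *runtime, const std::string &instance_name,
             pmt_t user_arg)
        : mb_mblock(runtime, instance_name, user_arg)
      {
        // port name, protocol class, conjugated?, port type
        d_out = define_port("out", "my-protocol", false, mb_port::EXTERNAL);
      }

      // called once when the mblock starts running
      void initial_transition()
      {
        d_out->send(pmt_intern("start"), PMT_NIL);
      }

      // called for each message that arrives on one of our ports
      void handle_message(mb_message_sptr msg)
      {
        if (pmt_eq(msg->signal(), pmt_intern("ack"))) {
          // react to the acknowledgement carried in msg->data()
        }
      }
    };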

Eric

Eric B. wrote:

Still remaining are:

  • gluing mblocks and flow graphs together in the same system

Part of this includes the scheduler, right?

When we get to the point of working on the scheduler I want to toss it
up for discussion. Or we can just toss it up for discussion now :)
The BBN doc hasn’t fully convinced me that it’s the kind of scheduler
we want, or that we can’t build a scheduler that inter-operates with
both m-blocks and traditional blocks.

When we first started working on the in-band project and were looking
into the BBN doc, something about the scheduler struck us as wrong
(“us” being me and Thibaud). We think it caters too much to the
m-block, when you could create a scheduler that also works with other
blocks users might create that want priority queues. Basically, we see
increased complexity in the system from running two schedulers, one of
which caters to a specific block type. On top of that, two schedulers
will add extra scheduling overhead. We want to either mash them
together and try to build a scheduler that works with both types of
blocks, or at least not cater the new scheduler so heavily to
m-blocks, and instead make m-blocks work with it.

We’re not sure what is and isn’t 100% feasible, which is where you
come in :) But we think it’s at least worth some more discussion.

- George

‘Guile’ is now required for mblock compilation, so I’m updating my
OSX install guide/script with this info. Guile seems to have stable
versions 1.8.1 or 1.6.8 … which version is recommended or required
for mblocks? - MLD

On Wed, May 02, 2007 at 09:49:28AM -0400, George N. wrote:

it’s the kind of scheduler we want and that we can’t get a scheduler that inter-operates
but to make m-blocks work with it.

We’re not sure what is and isn’t 100% feasible, which is where you
come in :) But we think it’s at least worth some more discussion.

- George

I think there might be a bit of misunderstanding here.

The biggest piece of the problem is interfacing the i/o between the
two abstractions. This isn’t really an OS “scheduler” problem.

FYI, the mblock runtime currently puts every mblock instance in its
own thread. We’ll be trying a similar experiment with the flow graph
stuff relatively soon (every gr_block in its own thread). In both
cases we’ll depend on the underlying OS to schedule the blocks
whenever more of them are ready to run than you have
processors/cores. We will provide hooks that let the app developer
specify desired priority, processor affinity and NUMA bindings, but I
suspect these will mostly be used for tuning.
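
To make the “hooks” part concrete, here is a purely hypothetical
sketch (not code from the tree) of what per-block threads with
optional priority and processor-affinity hints might look like on
Linux/pthreads; NUMA binding would be handled similarly via libnuma.

    #include <pthread.h>
    #include <sched.h>

    // Hints the application developer could supply per block; the OS
    // still makes the actual scheduling decisions when more blocks are
    // runnable than there are cores.
    struct block_thread_hints {
      int priority;   // e.g. SCHED_FIFO priority, 0 = default policy
      int cpu;        // processor to bind to, -1 = no affinity
    };

    static void *block_worker(void *arg)
    {
      // ... the block's run loop: wait until runnable, do work ...
      return 0;
    }

    static int spawn_block_thread(pthread_t *tid,
                                  const block_thread_hints &h)
    {
      int err = pthread_create(tid, 0, block_worker, 0);
      if (err != 0)
        return err;

      if (h.cpu >= 0) {               // optional processor affinity
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(h.cpu, &set);
        pthread_setaffinity_np(*tid, sizeof(set), &set);
      }
      if (h.priority > 0) {           // optional real-time priority
        sched_param sp;
        sp.sched_priority = h.priority;
        pthread_setschedparam(*tid, SCHED_FIFO, &sp);
      }
      return 0;
    }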

Independent of the underlying OS, gr_blocks and mblocks have
different constraints that must be satisfied for them to be
considered runnable. E.g., an mblock is runnable if there are
messages in its message queue. A gr_block is runnable if there is
sufficient input and sufficient downstream buffer space to write the
output.
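
Stated as code (hypothetical helper functions, nothing like this
exists in the tree), the two runnability tests are roughly:

    #include <cstddef>

    // An mblock is runnable as soon as its message queue is non-empty.
    bool mblock_is_runnable(size_t messages_queued)
    {
      return messages_queued > 0;
    }

    // A gr_block is runnable when enough input items are available and
    // there is enough free downstream buffer space for its output.
    bool gr_block_is_runnable(size_t input_items_available,
                              size_t input_items_required,
                              size_t output_space_available,
                              size_t output_space_required)
    {
      return input_items_available  >= input_items_required
          && output_space_available >= output_space_required;
    }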

Now, as part of the desire to combine the data flow abstraction and
the message passing abstraction, there are use cases where the data
flow seems like it should be subordinate to the message passing
abstraction (i.e., it feels like a procedure call). This is
particularly true when the high level message passing code knows
about, for example, packet boundaries, but the data flow code
doesn’t. In these cases one could imagine the packet based code
feeding bytes to the data flow code, and receiving samples back. When
all the samples that correspond to a given packet have been
generated, the packet based code may want to take “packet based”
action, e.g., send this frame of samples to the i/o device (e.g.,
USRP) as a single logical entity, to be transmitted on a particular
frequency, at a given time, with a specific power level.
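
As a purely hypothetical illustration of that use case (none of these
types or functions exist; how mblocks and flow graphs actually talk
to each other is exactly the open question), the packet-level code
might look something like this:

    #include <complex>
    #include <cstddef>
    #include <vector>

    // Metadata that rides along with one packet's worth of samples.
    struct tx_frame_meta {
      double        freq_hz;     // carrier frequency to transmit on
      unsigned long timestamp;   // when to transmit (device clock ticks)
      double        power_dbm;   // requested output power
    };

    // Imaginary handle onto a small flow graph acting as a modulator.
    // The packet-level code pushes bytes in and gets samples back, so
    // the data flow is subordinate to the message-passing code.
    class modulator_proxy {
    public:
      // Trivial stand-in: map each bit to a BPSK symbol (+1 / -1).
      std::vector<std::complex<float> >
      modulate(const std::vector<unsigned char> &packet_bytes)
      {
        std::vector<std::complex<float> > out;
        for (size_t i = 0; i < packet_bytes.size(); i++)
          for (int b = 7; b >= 0; b--)
            out.push_back(std::complex<float>(
                ((packet_bytes[i] >> b) & 1) ? 1.0f : -1.0f, 0.0f));
        return out;
      }
    };

    // MAC-like code: it knows where the packet boundary is, the flow
    // graph code does not.  Once all the samples for the packet exist,
    // it hands them to the i/o device as one logical frame.
    void send_packet(modulator_proxy &mod,
                     const std::vector<unsigned char> &packet,
                     const tx_frame_meta &meta)
    {
      std::vector<std::complex<float> > samples = mod.modulate(packet);
      // e.g. wrap 'samples' and 'meta' in a message and send it to the
      // USRP-facing mblock as a single logical entity.
    }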

I believe that we’re going to find that there is a natural
decomposition of problems across the two domains. E.g., pretty much
anything that looks MAC-like is going to want to run as an mblock.
The data is inherently packet based, and the logic is based on events
such as packets received and timeouts. I suspect that much of channel
coding will fall in this category too. On the other hand, lots of PHY
layer kinds of things (low level mods and demods) fit quite nicely in
the data flow abstraction.

I’m not sure if I’ve addressed your concerns.

I believe the question that remains is how would you want to
interface mblocks and gr_blocks/flow_graphs? I suspect that the right
answer is use case dependent.

Eric