Comments on "BBN's Proposed extensions for data networking"

Here’s the first round of comments from me. I’ve read through the
document a few times, and along with the other discussion am slowly
putting pieces together. These comments reflect my current beliefs /
understandings, and of course are subject to change as I get
corrected / educated. - MLD

ps> I’m not correcting typos or bad grammar, but I am offering
suggestions for questionable word usage.

  • “enhance” doesn’t sound right to me; it implies that the -current
    code base- (gnuradio-core) will be changed. I like “extend” or
    “augment” instead, which are used up-front in 4.1 onward. There are
    6 instances of “enhance” in some form, and 41 uses of “extension” or
    the like. I’d change the former to the latter.
  1. p52, table 4.1: “an enhanced version of the GNU Radio block” …

–> The m-block will not inherit from a gr-block. Hence they are
separate concepts and implementations, and one has no relationship to
the other, except that they both belong to the overall GR project. I
would say “a new version of the GNU Radio block”

  2. p55, 4.5.1, top bullet: “Dynamic reconfiguration is intertwined
    with scheduling and requires support from both the scheduler and
    enhanced blocks.”

–> Ditto from above: “both the m-block scheduler and related blocks.”

  3. p55, 4.6: “plans for enhancing the GNU Radio architecture”

–> “plans for extending the”

  4. p60, 4.8.2.2: “Figure 4.4 shows how the enhanced scheduling scheme
    interworks with GNU Radio scheduling.”

–> “how the new m-block scheduling scheme interworks with current
gr-block scheduling”

  5. p61, figure 4.4: “Enhanced Flow Graph”

–> “Extended Multi-Level Flow Graph”

  6. p61, 4.8.2: “The proposed m-block executes under the control of
    the new enhanced scheduler”

–> “the new scheduler”

  • p49, 4.1; p69, 4.11: “can be implemented in an incremental fashion
    on the existing GNU Radio framework and will have no impact on
    existing GNU software or functionality.”
  1. I would remove the last “GNU”, it’s redundant and maybe even
    deceptive

  2. I would change/add: “… implemented in an incremental fashion as
    separate modules on …”

  • p50, 4.2: “The Media Access Control (MAC) layer needs low-latency
    transmission control - faster than ``just in time’’ processing
    through a flow-graph.”

–> From a certain perspective, all data processing is JIT: it either
happens in time, or it doesn’t. If a very complex set of
computations are specified for m-blocks in the meta-data, then the
processing might not be in time; there is no guarantee and nothing
that the m-scheduler could do about it. If you believe what I just
wrote (as I do), then this sentence is misleading. m-blocks will not
be any faster than gr-blocks, but they will “know” about latency and
thus can determine if they’re “on time” or not. Or am I mixing
issues - control signals versus data processing?

  • p53, 4.5.1: ``systolic’’ … I don’t think what’s there makes
    sense, unless there is a new definition online dict’s are not aware
    of. Maybe -> “systematic”?

  • p54, Algorithm 1 GNU Radio Scheduling loop. “while True do”

–> This is no longer the case, should read instead something
equivalent to “d_enabled && nalive > 0”
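To make the suggested loop condition concrete, here is a minimal, purely illustrative C++ sketch of a run-until-done loop guarded by `d_enabled && nalive > 0`; the `Block` struct and `run_loop` function are invented for this example and are not the actual gr_single_threaded_scheduler code:

```cpp
#include <cassert>
#include <vector>

struct Block { bool alive; };   // stand-in for a gr-block in the graph

// Run until the scheduler is disabled or no blocks remain alive,
// rather than spinning in "while True".
int run_loop(bool d_enabled, std::vector<Block> blocks) {
    int iterations = 0;
    int nalive = static_cast<int>(blocks.size());
    while (d_enabled && nalive > 0) {
        // ... each block's work() would be called here ...
        blocks.back().alive = false;   // a block signals it is done
        blocks.pop_back();
        --nalive;
        ++iterations;
    }
    return iterations;
}
```

The point is only that the loop has a real termination condition; the body here is a placeholder.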

  • p56, 4.6.1: Needs to be filled in, and I would add something
    describing that the proposed extensions are to be implemented as modules
    which are independent of the current ones, giving the user the choice
    between the new m-blocks and the current gr-blocks, or combining them
    with the requirement that the m-blocks are primary.

  • p56, 4.6.4: “These items are typically floats, doubles or complex
    values.”

–> I would rewrite this to state that “These items can be any
standard C/C++ element, including ints, floats, doubles, complex
values, and even structs.” Yes, I’ve done testing on structs and
it’s possible to send those around. It’s easiest when their size is
“small”, but possible no matter their size.

–> I would move this whole paragraph up to 4.5.1 since there are no
other comparisons between the gr-stuff and m-stuff in 4.6.

  • p57, 4.6.4: “A block can have zero or more input streams and zero
    or more output streams.” … but not both zero input and output
    streams.

  • p57, 4.6.5: “The proposed extensions support interoperation between
    GNU Radio blocks and m-block s, reconcile the scheduling of GNU Radio
    flow graphs, m-block s and real-time scheduling.”

–> This doesn’t make sense yet. I think it’s trying to say
something like: “The proposed extensions support the needs of time-
knowledgeable, priority-based scheduling required for processing m-
blocks, as well as reconcile the interoperation between current GNU
Radio flow graphs and the new m-blocks.”

  • p58, 4.8.1: At the end of the first paragraph, I would move “An m-block
    may have zero or more bi-directional message ports.” to the end, and
    would add something to the effect of “There may be multiple ports of
    the same protocol and ports can be uni-directional if needed.” I
    think it needs to be made clearer up front that there can be lots of
    ports into/from/inside an m-block, and these can be of the same
    protocol as well as bi- or uni-directional … that ports are not -
    required- to be bi-directional, but it is an option.

  • p59, Figure 4.2: The drawing makes it look like there is an input
    port and an output port for each port type, yet it says “bi-
    directional” which would imply to me that there is a single port
    handling both input and output. After looking at this a few times, I
    now see what it’s meant to describe. I think it would make sense to
    separate the port name (“Control Messages”) from the “bi-directional”
    text. Put the “bi-directional” part between the “input” and “output”
    blocks, somehow connecting them in such a way that the “bi” part is
    more clear.

  • p60, 4.8.1: “In order to support real-time scheduling, a mechanism
    is required that will allow m-block s to relinquish control of a
    processor after a certain number of processor cycles have been used
    (we will refer to this number of processor cycles as the scheduling
    quantum ).” and “The gr single threaded scheduler will run until it
    walks all the incoming data (one scheduling quantum worth) through
    the graph to the final sink, at which point it returns.”

–> These seem in conflict with each other. Further, the only
obvious reference in 4.8.2 to the relinquishing of control is the
“yield()” function. Maybe: The “gr single threaded scheduler” is run
in a separate thread from the executing m-block, and the m-block
sleeps for a set time (the “scheduling quantum”) … it will wake up
if the thread finishes or if the sleep returns, at which point it
checks the state of the thread, and either yields to the m-scheduler,
or returns the gr-scheduler’s data.

  • p61, 4.8.3: I would move the first part (ending in “signal
    processing blocks”) up to 4.5 somewhere. It really doesn’t belong
    here, but it is useful as part of the current GR baseline.

  • p65, Figure 4.5 & related reference on p64: It would be -really-
    useful to describe the elements in the Figure … what the box is
    (the m-block), and the “ports” both conjugated and not (or
    different?). It would be useful to have 3 different output ports,
    with a brief description of each.

  • p65, 4.9.5: “The conjugation operator swaps the incoming and
    outgoing message sets associated with a protocol class.”

–> This is redundant with the previous paragraph; I’d remove it.

  • P66, 4.9.7: “decomposition” -> “composition” makes more sense to me.

  • p67: Could there be some text about what “C1” and “C2” are, and how
    they are connected to the new m-scheduler? Are they invoked by the
encompassing m-block, as part of its data processing?

On Wed, Jun 07, 2006 at 02:09:06PM -0400, Michael D. wrote:

Here’s the first round of comments from me. I’ve read through the
document a few times, and along with the other discussion am slowly
putting pieces together. These comments reflect my current beliefs /
understandings, and of course are subject to change as I get
corrected / educated. - MLD

Hi Michael,

Thanks for the comments. I’m only responding to a few of them.
I’ll let the BBN’ers deal with the rest :wink:

  • p56, 4.6.4: “These items are typically floats, doubles or complex
    values.”

–> I would rewrite this to state that “These items can be any
standard C/C++ element, including ints, floats, doubles, complex
values, and even structs.” Yes, I’ve done testing on structs and
it’s possible to send those around. It’s easiest when their size is
“small”, but possible no matter their size.

In reality, it will work with any C++ element for which memcpy is a
valid copy constructor. This includes many structures and classes,
but not all.
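Eric's criterion ("memcpy is a valid copy constructor") can be illustrated with C++ type traits, which postdate this discussion; the struct names below are invented for the example:

```cpp
#include <cstring>
#include <type_traits>
#include <string>

struct Sample { float i; float q; };   // plain C-style struct: safe to memcpy
struct Tagged { std::string label; };  // owns heap memory: NOT safe to memcpy

static_assert(std::is_trivially_copyable<Sample>::value,
              "memcpy is a valid copy constructor for Sample");
static_assert(!std::is_trivially_copyable<Tagged>::value,
              "memcpy would not be a valid copy constructor for Tagged");

// Bitwise copy is well-defined only for trivially copyable types.
Sample copy_item(const Sample& src) {
    Sample dst;
    std::memcpy(&dst, &src, sizeof(Sample));
    return dst;
}
```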

  • p67: Could there be some text about what “C1” and “C2” are, and how
    they are connected to the new m-scheduler? Are they invoked by the
    encompassing m-block, as part of its data processing?

They are mblocks “contained” in the one illustrated.

Containment has zero to do with scheduling; it’s for managing
complexity / reuse. There is only a single mblock scheduler, and it
schedules all messages across all mblocks, nested or not.

Eric

Michael D. wrote:

Here’s the first round of comments from me. I’ve read through the
document a few times, and along with the other discussion am slowly
putting pieces together. These comments reflect my current beliefs /
understandings, and of course are subject to change as I get corrected /
educated. - MLD

Hi Michael,

Thank you for the comments and discussion. I’ve cut out all of the
comments that I have already incorporated into the document. See my
other comments below.

sentence is misleading. m-blocks will not be any faster than gr-blocks,
but they will “know” about latency and thus can determine if they’re “on
time” or not. Or am I mixing issues - control signals versus data
processing?

This sentence is confusing. How about the following:

“The Media Access Control (MAC) layer needs low-latency transmission
control – faster than the FIFO processing currently implemented in GNU
Radio flow graphs.”?

The main point here is that with m-blocks, a high priority message can
propagate through an entire set of m-blocks in a very short time by
“overtaking” lower priority messages that are already queued in the
m-block buffers along the way.
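The "overtaking" behavior can be sketched with a standard priority queue at an m-block port; all of the names below are illustrative, not the proposed m-block API:

```cpp
#include <queue>
#include <string>
#include <vector>

struct Message {
    int priority;        // higher value = more urgent
    std::string payload;
};

struct ByPriority {
    bool operator()(const Message& a, const Message& b) const {
        return a.priority < b.priority;   // max-heap on priority
    }
};

using PortQueue =
    std::priority_queue<Message, std::vector<Message>, ByPriority>;

// A high-priority control message bypasses queued lower-priority data.
std::string next_payload(PortQueue& q) {
    Message m = q.top();
    q.pop();
    return m.payload;
}
```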

There are quite a few concepts in this paragraph. One is that of
pre-computed data sample buffers. These are created and then await a
signal before being transmitted over the air. The second concept is
that of low latency propagation of messages through a set of m-blocks
(e.g. a control signal to transmit a buffer). The third is the idea of
using processing latency information to try and manage the scheduling of
m-blocks in a way that maximizes the chances of meeting timing
requirements (hopefully, under some circumstances the chances of meeting
these requirements will be quite high). Somehow, it didn’t come out
quite as clearly as we had hoped.

  • p53, 4.5.1: ``systolic’’ … I don’t think what’s there makes sense,
    unless there is a new definition online dict’s are not aware of. Maybe
    -> “systematic”?

Here “systolic” is used to describe the “pulsing” movement of the data
through a flow graph (as in the “systolic” phase of a heart beat when
blood is pumped from the heart into the arteries). Is that ok?

  • p60, 4.8.1: “In order to support real-time scheduling, a mechanism is
    required that will allow m-block s to relinquish control of a processor
    after a certain number of processor cycles have been used (we will refer
    to this number of processor cycles as the scheduling quantum ).” and
    “The gr single threaded scheduler will run until it walks all the
    incoming data (one scheduling quantum worth) through the graph to the
    final sink, at which point it returns.”

I believe these two statements are consistent. The two cases are:

  • A generic m-block (no GNU Radio blocks inside it): in this case,
    the processing inside the m-block must relinquish control of the
    processor after “quantum” processor cycles.

  • An m-block enclosing a GNU Radio flow graph: in this case, the
    enclosing m-block will pass a block of data that should take one
    quantum’s worth of processor cycles to pass through the GNU Radio
    flow graph.

In either case, the m-block is relinquishing processor control after one
quantum’s worth of cycles.

–> These seem in conflict with each other. Further, the only obvious
reference in 4.8.2 to the relinquishing of control is the “yield()”
function. Maybe: The “gr single threaded scheduler” is run in a
separate thread from the executing m-block, and the m-block sleeps for a
set time (the “scheduling quantum”) … it will wake up if the thread
finishes or if the sleep returns, at which point it checks the state of
the thread, and either yields to the m-scheduler, or returns the
gr-scheduler’s data.

Yes. The “gr single threaded scheduler” runs in its own thread. The
sequence you suggest should work just fine.

Thanks again for the comments! I’ll send an updated version of the
document to the list at the end of the week.

Dave.

Eric B. wrote:

sequence you suggest should work just fine.

Nope. gr_single_threaded_scheduler (or something above it) will be
called from the handle_message callback of the mblock, and thus will
execute on the mblock’s thread.

OK.

FWIW, I think all this stuff about scheduler quanta is confusing, and
should be deleted. I’m assuming that everything is “run to completion”.
If you want a shorter interval until completion, bite off smaller
chunks. I don’t think any of this really needs to be in the doc, but
will fall out as an implementation detail.

Agreed. I’ll make the necessary changes.

Thanks!

Dave.

On Thu, Jun 08, 2006 at 12:38:02AM -0400, David Lapsley wrote:

In either case, the m-block is relinquishing processor control after one

Yes. The “gr single threaded scheduler” runs in its own thread. The
sequence you suggest should work just fine.

Nope. gr_single_threaded_scheduler (or something above it) will be
called from the handle_message callback of the mblock, and thus will
execute on the mblock’s thread.

FWIW, I think all this stuff about scheduler quanta is confusing, and
should be deleted. I’m assuming that everything is “run to completion”.
If you want a shorter interval until completion, bite off smaller
chunks. I don’t think any of this really needs to be in the doc, but
will fall out as an implementation detail.

Eric

valid copy constructor. This includes many structures and classes,
but not all.

You get my point. “floats, doubles, and complex values” is too
limiting. Not that one needs to be all-inclusive, but ints are a
big part of what’s passed around and leaving them out seems like a
crime :wink: Structs or classes can be left out, but I think any
C-struct can be passed around (since it’s just a bit of structured
memory with no methods attached to it) and thus something should be
included as a voice for the power of the current GR capabilities.

  • p67: Could there be some text about what “C1” and “C2” are, and how
    they are connected to the new m-scheduler? Are they invoked by the
    encompassing m-block, as part of its data processing?

They are mblocks “contained” in the one illustrated.

Containment has zero to do with scheduling; it’s for managing
complexity / reuse. There is only a single mblock scheduler, and it
schedules all messages across all mblocks, nested or not.

Ah, that’s what I thought. I don’t remember reading this anywhere.
I will try to reread (for the ?'th time) and find an appropriate
place for it, sometime in the next few days. - MLD

On Thu, Jun 08, 2006 at 02:27:34PM +0100, John Aldridge wrote:

Michael D. wrote:

In reality, it will work with any C++ element for which memcpy is a
valid copy constructor. This includes many structures and classes,
but not all.

I.e. those things which the C++ standard calls “POD types” (see section
3.9 paras 2 & 10)?

I don’t recall if pointers are POD types, but copying structures
containing pointers could be problematic if there are object
lifetime/ownership conventions that should be observed.

Eric

Michael D. wrote:

In reality, it will work with any C++ element for which memcpy is a
valid copy constructor. This includes many structures and classes,
but not all.

I.e. those things which the C++ standard calls “POD types” (see section
3.9 paras 2 & 10)?


John

We have incorporated the suggested changes into the document. Thank
you for the great feedback. The latest version is now available for
download at:

http://acert.ir.bbn.com/downloads/adroit/gnuradio-architectural-enhancements-3.pdf

We would appreciate any additional feedback, sent to gnuradio-discuss,
or feel free to email us privately if there’s some reason
gnuradio-discuss isn’t appropriate.

Cheers,

Dave.

Dave - Working on v3 of this document. There are some changes from
the previous version which greatly improve clarity! Thanks! Here
are some more suggestions, comments, questions, and thoughts. - MLD

p60, 4.2: “The Media Access Control (MAC) layer needs low-latency
transmission control – faster than the FIFO processing currently
implemented in GNU Radio flow graphs.” : How about just removing the
reference to GNU Radio’s flow graphs (or maybe, moving it to the GR
baseline section), since it’s the only “comparison” in that whole
paragraph. You can certainly add something else on the tail of that
short sentence, but it needs to be an extension which describes the
“low-latency transmission control”. If this concept is solely for
prioritizing messages, then you could make it something like “The
Media Access Control (MAC) layer needs low-latency transmission
control, allowing for high-priority messages to propagate through a
set of m-blocks undisturbed so as to minimize computational latency.”
if you felt that was appropriate. If, for the purposes of your
description, this concept is more than that, then add something
else. Either way, removing the comparison IMHO is useful.

p61, REQ 4.10: I would add the words “the current” to “GR Framework”
to make it explicit what you’re not going to change.

p61, REQ 4.13, 4.14: Could these be combined, or 4.14 just removed?
What is the difference, since the latter seems to be a subset of the
former. What are the concepts supposed to be, as they seem
duplicative to me?

p63, 4.5.1; and p70, 4.8.1: It would probably be worth noting that in
the gr-stuff world there is no generic built-in functionality for
stopping computations to allow for higher-priority computations to
take place (even if the gr-scheduler were written to handle
prioritization) [NOTE: it would be possible for individual gr-blocks
to do this, but none at this time do that I know of]. Thus -any- use
of the gr-scheduler must wait until completion of all data being
processed. This goes for m-blocks too when encapsulating a
gr-flow-graph, not just gr_blocks and flow-graphs standing alone.

p64: “systolic”: I still feel it just doesn’t work. Yes, I read what
you wrote about the “pulsing” nature of the GR scheduler, of which I
agree that’s pretty much how it works. But “systolic”, at least in
my understanding and def’n reading, means contraction/compression
(specifically of the heart muscles, but could be others) … the
result of which is the pulsing nature of blood-flow. The pulsing is
a result of the systole, but isn’t what the systole does or is. I
still feel that “systematic” or “round robin” or even (arranged,
methodic, orderly, well-ordered, well-regulated) might be better when
used appropriately.

Also, the whole discussion of packet radio requirements doesn’t
really fit into the GR baseline, and should instead probably be in
4.3, or at least elsewhere.

p66, 4.6.3: I’d move the second paragraph to the second or third
sentence of the first paragraph … just fits there better.

  • Also, I don’t understand “m-block s may be enclosed within a single
    m-block and treated as a single entity.” down to the flow-graph
    stuff. From our previous discussion, the aggregation is purely
    symbolic in nature … there is still a virtual, possibly dynamic,
    graph upon which the m-scheduler must work. Thus, how can
    aggregation make a “single entity” w/r.t. the m-scheduler?
    Aggregation implies to me that the primary m-block’s “handle_message”
    is called, and then that m-block deals with all the aggregated
    internal m-blocks (and gr_blocks, if any).

  • Regarding: “Primitives will be provided to allow the connection of
    internal components of the m-block to each other and to external
    interfaces.” … So the m-block class will provide the “graph”
    functionality, instead of having it be an external entity as is
    currently done in GR? 'tis a good idea, IMHO.
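For illustration only, connection primitives living on the m-block class itself (rather than on an external flow-graph object) might look something like this; the API shown is hypothetical and not taken from the document:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <utility>

class mblock {
public:
    // Connect a port on one internal component to a port on another,
    // both enclosed within this m-block (hypothetical signature).
    void connect(const std::string& src, const std::string& src_port,
                 const std::string& dst, const std::string& dst_port) {
        d_edges[{src, src_port}] = {dst, dst_port};
    }
    std::size_t num_connections() const { return d_edges.size(); }

private:
    using Endpoint = std::pair<std::string, std::string>;  // (block, port)
    std::map<Endpoint, Endpoint> d_edges;   // internal connection graph
};
```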

p67, 4.6.4: " transformed by the blocks in the flow graph." --> “any
internal flow graph”. I assume you’re referring to the gr_block
flow-graph here …

p67: 4.6.7: which “flow graph” is this referring to? Looks like the
m-block stuff, which isn’t a flow graph at all, so I think this needs
to be changed. You might want to do a global search for this term to
make sure all references are appropriate, and change those which are
not.

p68, 4.8.1; and p74, 4.8.4: How can an m-block have zero ports? I
thought that all data / metadata / signals / whatever were
transported via these ports. Without a port of any type, what can an
m-block do?

p68&70, 4.8.1: How can you implement “a mechanism is required that
will allow m-block s to relinquish control of a processor after a
certain number of processor cycles have been used” for a
gr-flow-graph and guarantee that the internal flow-graph’s memory is
maintained? How is this implemented in general? I guess you could
start a new thread, which would execute for X us or ms, but that
seems like a costly overhead when “real-time” needs very low
latency. You might want to tweak the last paragraph, first sentence,
to state that the gr_single… will run all the data through
regardless of the number of processor cycles it takes, in order to
guarantee memory coherency (or something like that).

p70, 4.8.2.2: “Within an m-block, data is moved through the GNU Radio
flow graph using a GNU Radio scheduler that runs within its own
thread of execution.” … this doesn’t sound correct, regarding the
previous discussion.

Also, looks like a new number (or more) is missing, starting with
“However, information …”. Or, well, something is just wrong with
the rest of this section … seems non-coherent. Could you please
number / reorganize / rewrite / make more coherent?

p75, 4.9.5: Could you make a drawing depicting replicated ports?
Might add insight for some folks.

p76, 4.9.11: “Messages arriving at an unconnected relay port are
discarded.” … while it’s nice to have unconnected ports, this
takes extra processing to deal with. Is it possible to never have
unconnected ports, and/or to always make use of all ports? Or in the
dynamic graphing, is this just a possibility which can happen and
thus needs to be considered?

p77, 4.9.11: “These ports are not visible from the outside of the
m-block, and are not a part of the peer interface.” … what does
“visible” mean here? I thought that all m-blocks are dealt with by
the m-scheduler? I think what you’re trying to say is that any
internal ports will be defined solely inside the definition of the
enclosing m-block, when using this semi-formal description, and hence
will not be available for connections outside of the enclosing
m-block. Yes?

p77, (4.14-4.16): Would it make more sense to first state that
peer interface = external end ports U relay ports
then go on with the current 4.14 and 4.15 (if desired, or not).

On 6/14/06 2:24 PM, “Michael D.” [email protected] wrote:

Dave - Working on v3 of this document. There are some changes from
the previous version which greatly improve clarity! Thanks! Here
are some more suggestions, comments, questions, and thoughts. - MLD

Michael,

No problems. Thank you for the great comments!

I’ll work on getting your comments into the next revision. I’ll have
that out early next week.

I’d also encourage any other folks on the list to send in their comments
too. The more the merrier!

Also, the whole discussion of packet radio requirements doesn’t
really fit into the GR baseline, and should instead probably be in
4.3, or at least elsewhere.

Do you mean 4.5.2? The intent here was to describe the current
packet-capabilities in GNU Radio. The last paragraph could be moved
to the requirements section, but do you think the whole section
should go?

  • Also, I don’t understand “m-block s may be enclosed within a single
    m-block and treated as a single entity.” down to the flow-graph
    stuff. From our previous discussion, the aggregation is purely
    symbolic in nature … there is still a virtual, possibly dynamic,
    graph upon which the m-scheduler must work. Thus, how can
    aggregation make a “single entity” w/r.t. the m-scheduler?
    Aggregation implies to me that the primary m-block’s “handle_message”
    is called, and then that m-block deals with all the aggregated
    internal m-blocks (and gr_blocks, if any).

Good point. From the user’s point of view, an aggregate m-block
enclosing component m-blocks looks like a single entity, but the
aggregation is purely symbolic, so there is no aggregation w.r.t.
the m-scheduler.

  • Regarding: “Primitives will be provided to allow the connection of
    internal components of the m-block to each other and to external
    interfaces.” … So the m-block class will provide the “graph”
    functionality, instead of having it be an external entity as is
    currently done in GR? 'tis a good idea, IMHO.

That’s right!

p67, 4.6.4: " transformed by the blocks in the flow graph." → “any
internal flow graph”. I assume you’re referring to the gr_block
flow-graph here …

Actually no. This is a mistake. The transformation could be along a
path through a set of connected m-blocks or through an enclosed GNU
Radio flow_graph. I’ll fix it.

p67: 4.6.7: which “flow graph” is this referring to? Looks like the
m-block stuff, which isn’t a flow graph at all, so I think this needs
to be changed. You might want to do a global search for this term to
make sure all references are appropriate, and change those which are
not.

That’s right. This is also a mistake. I thought I’d caught all of
these, but have missed a few. I’ll fix that.

p68, 4.8.1; and p74, 4.8.4: How can an m-block have zero ports? I
thought that all data / metadata / signals / whatever were
transported via these ports. Without a port of any type, what can an
m-block do?

Not much! This is a bug. I’ll fix it.

p68&70, 4.8.1: How can you implement “a mechanism is required that
will allow m-block s to relinquish control of a processor after a
certain number of processor cycles have been used” for a
gr-flow-graph and guarantee that the internal flow-graph’s memory is
maintained? How is this implemented in general? I guess you could

I think Eric had discussed this earlier. You are correct that it is
not possible to pre-empt a gr-flow-graph once it has started. The
idea is to ensure that the amount of data fed into the gr-flow-graph
can be processed within/close to the allowed time. By making use of
the timing information carried in the m-blocks, it should be possible
to estimate the processing throughput of different gr-flow-graphs and
then use this estimate to work out the maximum amount of data that
can be fed into a gr-flow-graph in order to complete processing
within the time budget.
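The sizing rule described above amounts to simple arithmetic; this back-of-the-envelope sketch uses invented names and units and is not the proposed implementation:

```cpp
#include <cstdint>

// Given a time budget for one pass through a gr-flow-graph and an
// estimated per-item processing cost, compute the largest chunk of
// samples that still fits the budget.
//   budget_us:   time allowed for one pass (microseconds)
//   us_per_item: estimated processing cost per sample (microseconds)
std::int64_t max_items_for_budget(double budget_us, double us_per_item) {
    if (us_per_item <= 0.0)
        return 0;   // no throughput estimate yet: feed nothing
    return static_cast<std::int64_t>(budget_us / us_per_item);
}
```

As Michael asks below, the open question is how `us_per_item` gets initialized before any measurements exist.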

start a new thread, which would execute for X us or ms, but that
seems like a costly overhead when “real-time” needs very low
latency. You might want to tweak the last paragraph, first sentence,
to state that the gr_single… will run all the data through
regardless of the number of processor cycles it takes, in order to
guarantee memory coherency (or something like that).

Hopefully my previous paragraph answers these concerns. I’ll take
another look at this section and see if I can make it a bit clearer.

p70, 4.8.2.2: “Within an m-block, data is moved through the GNU Radio
flow graph using a GNU Radio scheduler that runs within its own
thread of execution.” … this doesn’t sound correct, regarding the
previous discussion.

You’re right. This is a bug. The GNU Radio scheduler runs within the
context of the m-block’s handle_message() function.

Also, looks like a new number (or more) is missing, starting with
“However, information …”. Or, well, something is just wrong with
the rest of this section … seems non-coherent. Could you please
number / reorganize / rewrite / make more coherent?

Sure.

p75, 4.9.5: Could you make a drawing depicting replicated ports?
Might add insight for some folks.

OK.

p76, 4.9.11: “Messages arriving at an unconnected relay port are
discarded.” … while it’s nice to have unconnected ports, this
takes extra processing to deal with. Is it possible to never have
unconnected ports, and/or to always make use of all ports? Or in the
dynamic graphing, is this just a possibility which can happen and
thus needs to be considered?

It would be possible to prohibit unconnected ports, but allowing
them provides an extra degree of freedom that excluding them would
not. For example, a port could be initially unconnected, and then
connected at a later stage.

p77, 4.9.11: “These ports are not visible from the outside of the
m-block, and are not a part of the peer interface.” … what does
“visible” mean here? I thought that all m-blocks are dealt with by
the m-scheduler? I think what you’re trying to say is that any
internal ports will be defined solely inside the definition of the
enclosing m-block, when using this semi-formal description, and hence
will not be available for connections outside of the enclosing
m-block. Yes?

Yes.

p77, (4.14-4.16): Would it make more sense to first state that
peer interface = external end ports U relay ports
then go on with the current 4.14 and 4.15 (if desired, or not).

I think that would be nice.

Thanks again for the comments!

Dave.

On Jun 15, 2006, at 2:03 AM, Eric B. wrote:

Not much! This is a bug. I’ll fix it.

Actually an m-block could have zero external ports with no problem.
In fact that’s exactly what the top level m-block looks like :wink:

OK, I’ll bite. How does data get into or out of something with zero
external ports? Via internal ports? So, e.g., a source could be
made internal-only, and connect internally to other m-blocks, and
eventually drop to an internal-only sink? Are there any advantages
to setting up this way versus using a single m-block per signal-
processing concept (source, processing, sink)? IMHO it would be
helpful to have a quick example in the text, just to be (more)
complete. - MLD

On Thu, Jun 15, 2006 at 12:51:05AM -0400, David Lapsley wrote:

On 6/14/06 2:24 PM, “Michael D.” [email protected] wrote:

p68, 4.8.1; and p74, 4.8.4: How can an m-block have zero ports? I
thought that all data / metadata / signals / whatever were
transported via these ports. Without a port of any type, what can an
m-block do?

Not much! This is a bug. I’ll fix it.

Actually an m-block could have zero external ports with no problem.
In fact that’s exactly what the top level m-block looks like :wink:

Eric

On Jun 15, 2006, at 12:51 AM, David Lapsley wrote:

On 6/14/06 2:24 PM, “Michael D.” [email protected] wrote:

Also, the whole discussion of packet radio requirements doesn’t
really fit into the GR baseline, and should instead probably be in
4.3, or at least elsewhere.

Do you mean 4.5.2? The intent here was to describe the current
packet-capabilities in GNU Radio. The last paragraph could be moved
to the requirements section, but do you think the whole section
should go?

Sure, you could move them to 4.5.2, so long as they’re rephrased as
“limitations of the current framework” as opposed to “packet-radio
needs”. Limitations are OK, since they work within the baseline
concept; packet-radio needs do not work there, since they have
nothing directly to do with the baseline.

ensure that the amount of data fed into the gr-flow-graph can be
processed within/close to the allowed time. By making use of the
timing information carried in the m-blocks, it should be possible to
estimate the processing throughput of different gr-flow-graphs and
then use this estimate to work out the maximum amount of data that
can be fed into a gr-flow-graph in order to complete processing
within the time budget.

Ahhh … so will there be some “test runs” to get timing
information, in order to have a better estimate of latencies? From
another perspective: How does this info get gathered without any
estimate of how long it will take and thus how many CPU cycles to
allow for the gr-flow-graph computation? Yes, you can surely do what
you’ve written … I’m just wondering how the estimates are initialized.
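[Editorial aside: the time-budget calculation being discussed might look roughly like the sketch below. Every name here is invented for illustration; nothing in it is part of the proposed m-block API, and the safety factor is an assumption about how one might leave headroom.]

```python
# Hypothetical sketch of the time-budget idea discussed above.
# All names are invented; this is not the proposed API.

def max_items_for_budget(measured_items, measured_seconds,
                         budget_seconds, safety_factor=0.8):
    """Estimate how many items a gr-flow-graph can process in a budget.

    measured_items / measured_seconds would come from an earlier timed
    run (e.g. timestamps carried in m-block metadata); safety_factor
    leaves headroom so the estimate errs toward finishing early.
    """
    throughput = measured_items / measured_seconds  # items per second
    return int(throughput * budget_seconds * safety_factor)

# Example: a test run processed 48000 samples in 0.5 s; with a 10 ms
# budget we would feed in at most 768 samples.
print(max_items_for_budget(48000, 0.5, 0.01))  # -> 768
```

The open question raised above (how the first estimate is initialized, before any measurement exists) is exactly the `measured_*` arguments here: something has to supply them, whether a calibration run or a conservative default.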

them.
For example, a port could be initially unconnected, and then
connected at a later stage.

Hmmm … good point. In a dynamic system, ports could get dropped or
connected “on the fly”. Could you write a quick blurb about this,
somewhere before 4.9? Maybe 4.6.8 or 4.8.6?

On 6/15/06 2:03 AM, “Eric B.” [email protected] wrote:

Actually an m-block could have zero external ports with no problem.
In fact that’s exactly what the top level m-block looks like ;-)

Eric

Oops! I wasn’t thinking when I typed that :-)

Dave.

On 6/15/06 8:37 AM, “Michael D.” [email protected] wrote:

Sure, you could move them to 4.5.2, so long as they’re rephrased as
“limitations of the current framework” as opposed to “packet-radio
needs”. Limitations are OK, since they work within the baseline
concept; packet-radio needs do not work there, since they have
nothing directly to do with the baseline.

Sorry, I thought your initial comment was referring to the last
paragraph of 4.5.2, but actually, it seems it was referring to the
bullet points at the bottom of page 64. I’ll go with your original
suggestion and move these into the requirements section.

ensure that the amount of data fed into the gr-flow-graph can be

Ahhh … so will there be some “test runs” to get timing
information, in order to have a better estimate of latencies? From
another perspective: How does this info get gathered without any
estimate of how long it will take and thus how many CPU cycles to
allow for the gr-flow-graph computation? Yes, you can surely do what
you’ve written … I’m just wondering how the estimates are initialized.

Yes, there could be test runs and the initialization is the tricky
part. I don’t know that we should go into that much detail in the
architecture document (other than making clear what will be available
to the developer/user).

Eric, what are your feelings on this?

them.
For example, a port could be initially unconnected, and then
connected at a later stage.

Hmmm … good point. In a dynamic system, ports could get dropped or
connected “on the fly”. Could you write a quick blurb about this,
somewhere before 4.9? Maybe 4.6.8 or 4.8.6?

Sure. No problems.

Cheers,

Dave.
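[Editorial aside: the “optionally connected” ports agreed on above might be sketched as follows. The `Port` class and its methods are purely illustrative; the proposal does not define this API, and the drop-when-unconnected behavior is an assumption.]

```python
# Illustrative sketch of ports that start unconnected and are wired
# up "on the fly". Names and semantics are invented, not the m-block API.

class Port:
    def __init__(self, name):
        self.name = name
        self.peer = None          # unconnected until connect() is called

    def connect(self, other):
        if self.peer is not None or other.peer is not None:
            raise RuntimeError("port already connected")
        self.peer, other.peer = other, self

    def disconnect(self):
        if self.peer is not None:
            self.peer.peer = None
            self.peer = None

    def send(self, msg):
        # Here a message to an unconnected port is simply dropped;
        # a real implementation might queue or raise instead.
        if self.peer is not None:
            return f"{self.peer.name} got {msg!r}"
        return None

a, b = Port("a"), Port("b")
print(a.send("hello"))    # None: the port starts out unconnected
a.connect(b)
print(a.send("hello"))    # delivered once connected "on the fly"
```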

On Thu, Jun 15, 2006 at 08:13:52AM -0400, Michael D. wrote:

Not much! This is a bug. I’ll fix it.

Actually an m-block could have zero external ports with no problem.
In fact that’s exactly what the top level m-block looks like ;-)

OK, I’ll bite. How does data get into or out of something with zero
external ports? Via internal ports? So, e.g., a source could be
made internal-only, and connect internally to other m-blocks, and
eventually drop to an internal-only sink?

Yes. Or it may not have anything to do with “signal processing”. It
could all exist in the one m-block. The initial transition will
be triggered. It’s free to request timeout notifications from the
system (which sends a message to an internal port). It could just go
about its business without talking to any other block. Yes, I know
this isn’t all documented, but I think it’s time to start coding.
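[Editorial aside: the self-contained m-block Eric describes, with zero external ports and driven by timeout messages to an internal port, could be sketched like this. All names are hypothetical; the real mechanism is whatever the implementation ends up defining.]

```python
# Sketch of an m-block with zero external ports, driven entirely by
# timeout notifications delivered to an internal port. Hypothetical
# names throughout; the real scheduler would deliver the messages.

import queue

class SelfContainedBlock:
    def __init__(self):
        self.internal_port = queue.Queue()  # stands in for an internal port
        self.ticks = 0

    def request_timeout(self):
        # In the real system the scheduler would post this later; here
        # we post the message immediately to keep the sketch runnable.
        self.internal_port.put("timeout")

    def run(self, steps):
        # The initial transition is triggered, then the block re-arms
        # itself, never talking to any other block.
        self.request_timeout()
        for _ in range(steps):
            msg = self.internal_port.get()
            if msg == "timeout":
                self.ticks += 1          # do some internal work
                self.request_timeout()   # ask for the next notification

blk = SelfContainedBlock()
blk.run(3)
print(blk.ticks)   # -> 3
```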

Are there any advantages to setting up this way versus using a
single m-block per signal- processing concept (source, processing,
sink)? IMHO it would be helpful to have a quick example in the
text, just to be (more) complete. - MLD

I’m not going to spend the time arguing the point. However, it
would be short-sighted to preclude something that could be useful and
has zero cost to implement.

The real documentation for this stuff will be written after it’s
coded ;-)

Eric

On Thursday, June 15, 2006, at 11:46 AM, Eric B. wrote:

On Thu, Jun 15, 2006 at 08:37:48AM -0400, Michael D. wrote:

Hmmm … good point. In a dynamic system, ports could get dropped or
connected “on the fly”. Could you write a quick blurb about this,
somewhere before 4.9? Maybe 4.6.8 or 4.8.6?

I think we’re approaching the “polishing the turd” stage…

My point was that, all of a sudden in 4.9, it’s mentioned that “oh, and
there can be unconnected ports”. As the reader, I was caught unaware
of this as an option, since there had never been mention of it before.
Further, there was no rationale given for their existence; it was just
stated. Thus my asking if a quick sentence or 2 could be written in an
appropriate place, which at minimum talks about the possibility of
unconnected ports due to the dynamic reconfiguration possibilities.
4.6.8, or 4.8.6, or 4.8.5; there are many possible places where a quick
sentence could be inserted and would make the text less surprising
w.r.t. this one issue.

Polishing? Maybe. Discussing, understanding, and making the
architecture document more readable? Definitely. :wink: - MLD

On Thu, Jun 15, 2006 at 08:37:48AM -0400, Michael D. wrote:

Hmmm … good point. In a dynamic system, ports could get dropped or
connected “on the fly”. Could you write a quick blurb about this,
somewhere before 4.9? Maybe 4.6.8 or 4.8.6?

I think we’re approaching the “polishing the turd” stage…

Eric

On Thu, Jun 15, 2006 at 02:37:54PM -0500, Michael D. wrote:

On Thursday, June 15, 2006, at 11:46 AM, Eric B. wrote:

On Thu, Jun 15, 2006 at 08:37:48AM -0400, Michael D. wrote:

Hmmm … good point. In a dynamic system, ports could get dropped or
connected “on the fly”. Could you write a quick blurb about this,
somewhere before 4.9? Maybe 4.6.8 or 4.8.6?

I think we’re approaching the “polishing the turd” stage…

OK, maybe we’re not polishing ;-)

My point was that, all of a sudden in 4.9, it’s mentioned that “oh, and
there can be unconnected ports”. As the reader, I was caught unaware
of this as an option, since there had never been mention of it before.

My theory on this document is that less is more. This is addressed more to
David than to you. The semi-formal spec for the portref lists a min
and max replication count. One could infer that a min of 0 and a
max of 1 would indicate that a port was “optionally connected”.
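[Editorial aside: Eric’s reading of the replication counts could be sketched as below. Only the min/max idea comes from the spec text; the `PortRef` class and its field names are invented for illustration.]

```python
# Sketch of the replication-count reading described above: a portref
# with a minimum of 0 and a maximum of 1 is "optionally connected".
# Field and method names here are invented, not from the spec.

from dataclasses import dataclass

@dataclass
class PortRef:
    name: str
    min_count: int   # minimum number of connections required
    max_count: int   # maximum number of connections allowed

    def is_optional(self):
        return self.min_count == 0

    def accepts(self, n_connections):
        return self.min_count <= n_connections <= self.max_count

opt = PortRef("status-out", min_count=0, max_count=1)
req = PortRef("data-in", min_count=1, max_count=1)
print(opt.is_optional(), opt.accepts(0))   # -> True True
print(req.is_optional(), req.accepts(0))   # -> False False
```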

Further, there was no rationale given for their existence; it was just
stated.

To allow substitutability of m-blocks for other m-blocks.

Eric