From: Aditya D. [mailto:[email protected]]
Sent: Wednesday, February 26, 2014 8:53 AM
To: Nowlan, Sean
Cc: [email protected]
Subject: Re: [Discuss-gnuradio] Message API questions
On Wed, Feb 26, 2014 at 8:45 AM, Nowlan, Sean
<[email protected]> wrote:
I have a few questions regarding messages in GR.
Is it possible to mix-and-match the old style message
sink/source blocks with the new style message passing API? Any guidance
on how to make the connections? I didn’t have much luck with
msg_connect. I don’t think the message sink/source blocks have an
associated port name to make this possible. Perhaps that’s something
worth adding internally?
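For reference, the new-style API only lets msg_connect join *named* ports, which is why the old message sink/source blocks (which have none) can't be wired up directly. This is not GNU Radio code, just a minimal pure-Python mock of the register-port/register-handler/msg_connect pattern; all names here (MsgBlock, post, etc.) are hypothetical stand-ins:

```python
# Pure-Python mock of GNU Radio's new-style message passing pattern:
# blocks register named output ports and named input handlers, and a
# msg_connect-like call subscribes one to the other. Hypothetical names,
# not the real gnuradio API.

class MsgBlock:
    def __init__(self):
        self._out_ports = {}   # out-port name -> list of (block, in-port) subscribers
        self._handlers = {}    # in-port name -> handler callable

    def register_out(self, port):
        self._out_ports[port] = []

    def register_in(self, port, handler):
        self._handlers[port] = handler

    def post(self, port, msg):
        # Publish msg to every subscriber of this output port.
        for blk, in_port in self._out_ports[port]:
            blk._handlers[in_port](msg)

def msg_connect(src, out_port, dst, in_port):
    # Analogue of top_block.msg_connect((src, "out"), (dst, "in")):
    # only works because both ends have a named port to refer to.
    src._out_ports[out_port].append((dst, in_port))
```

The point of the mock: without a registered port name on the old blocks, there is simply nothing for the connect call to address.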
I’m not sure I completely understand your question.
Have you looked at the OFDM Tx/Rx examples in
PATH/gr-digital/examples/ofdm? The flowgraph is a combination of
standard stream connections between blocks, along with a message passing
connection (look at the header/payload demux block).
Thanks! What I was referring to are the gr::blocks::message_source and
gr::blocks::message_sink blocks. They don’t use the new style message
passing API in which you register ports and message handlers. Instead,
gr::blocks::message_source has an internal message queue. It blocks
within its work function waiting for a message to enter the queue. What
I’m wondering is how to connect a new style block’s message output to
the input of this block, and the inverse case for connecting a
gr::blocks::message_sink to a new style block’s message input.
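One way to picture the bridging problem: the old-style source blocks on an internal queue, so a new-style handler that pushes into that queue would act as the glue. A pure-Python sketch of that shim (hypothetical names; OldStyleSource mocks the blocking-queue behavior of gr::blocks::message_source, it is not the real block):

```python
import queue

class OldStyleSource:
    """Mock of gr::blocks::message_source: work() blocks on an internal queue."""
    def __init__(self, maxsize=0):
        self.msgq = queue.Queue(maxsize)

    def work_once(self):
        # Stand-in for the work function: blocks until a message arrives.
        return self.msgq.get()

def make_bridge_handler(old_src):
    """New-style message handler that feeds the old-style internal queue."""
    def handler(msg):
        old_src.msgq.put(msg)
    return handler
```

If the old blocks internally registered such a handler under a port name, msg_connect could target them directly, which is roughly what "adding it internally" would mean.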
I see 2 implementations of msg_queue, one in gr namespace and
one in gr::messages namespace. What are the differences between these?
How does one achieve flow control with the new style message
passing API? I have a use case in which I’m generating packets in one
flowgraph and pushing them through a pdu_to_tagged_stream (P2TS) block
to be modulated in another flowgraph. I believe I’m overwhelming the
P2TS block’s queue because I get warnings about dropped messages. One
hack I made was to insert a throttle block into the packet generating
flowgraph. This helped a bit, but I have to guess the magic throttle
rate at which I don’t fill up the queue. Is there a way to have P2TS
block when its queue is full and therefore generate backpressure on the
upstream flowgraph?
Are you using actual hardware or is this a software only simulation?
I basically have flowgraph (FG1) --> message domain --> flowgraph (FG2)
--> USRP. FG1’s flow rate is not constrained by streaming backpressure.
FG2’s flow rate is constrained by the USRP. To constrain FG1’s flow rate
I either have to use a throttle block or find a way to enforce flow
control in the message domain.
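The flow control being asked for here is exactly what a bounded queue gives: put() blocks when the queue is full, so the producer is throttled by the consumer's rate instead of dropping messages. A pure-Python sketch of that behavior (a hypothetical stand-in for a depth-limited P2TS-style queue, not GNU Radio code):

```python
import queue
import threading

def run_pipeline(n_items, depth):
    # depth-bounded queue: a full queue blocks the producer (backpressure)
    # instead of dropping messages, so no throttle rate has to be guessed.
    q = queue.Queue(maxsize=depth)
    received = []

    def producer():
        for i in range(n_items):
            q.put(i)                 # blocks when depth items are queued

    def consumer():
        for _ in range(n_items):
            received.append(q.get())

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received
```

With drop-on-full semantics (put_nowait discarding on queue.Full) the fast producer loses messages, which matches the dropped-message warnings described above; with blocking put, everything arrives in order regardless of the rate mismatch.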