CPU usage when hdlc framer is used

Hi,

CPU usage goes to 100% when the hdlc_framer block is used!
To verify this I did the following tests:

  1. message_strobe_random ----> message_debug (CPU usage 2%)
  2. message_strobe_random ----> hdlc_framer ----> null_sink (CPU usage 100%)

Any idea how I can avoid this overload?

Thank you
Thanasis

On Tue, 2015-06-16 at 12:01 -0400, [email protected]
wrote:

From: Thanasis B. [email protected]
To: [email protected]
Subject: [Discuss-gnuradio] CPU usage when hdlc framer is used

Hi,

CPU usage goes to 100% when the hdlc_framer block is used!
To verify this I did the following tests:

  1. message_strobe_random ----> message_debug (CPU usage 2%)
  2. message_strobe_random ----> hdlc_framer ----> null_sink (CPU usage 100%)

Any idea how I can avoid this overload?

Off the cuff, it appears the hdlc_framer needs at least one fix:

  1. If oidx == 0, it should use a blocking fetch of a message here:
    https://github.com/gnuradio/gnuradio/blob/master/gr-digital/lib/hdlc_framer_pb_impl.cc#L132
    If oidx != 0 then using the current delete_head_nowait() call is
    correct.
    This is probably the major contribution to the CPU spinning.
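To make the blocking-vs-nonblocking decision concrete, here is a minimal,
self-contained sketch of the pattern. MsgQueue and fetch_message are
illustrative stand-ins, not the GNU Radio API (the real block would use its
PMT message queue's blocking and nowait fetch calls):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>

// Illustrative stand-in for the block's message queue; the real block
// carries pmt::pmt_t messages, not ints.
class MsgQueue {
    std::deque<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(int v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push_back(v); }
        cv_.notify_one();
    }
    // Analogue of a blocking fetch: sleeps until a message is available.
    int pop_blocking() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&]{ return !q_.empty(); });
        int v = q_.front(); q_.pop_front(); return v;
    }
    // Analogue of delete_head_nowait(): returns nothing if queue is empty.
    std::optional<int> pop_nowait() {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return std::nullopt;
        int v = q_.front(); q_.pop_front(); return v;
    }
};

// The proposed fix: block only when no output has been produced yet
// (oidx == 0); otherwise poll, so items already produced can be returned.
std::optional<int> fetch_message(MsgQueue& q, int oidx) {
    if (oidx == 0) return q.pop_blocking();  // nothing to return yet: wait
    return q.pop_nowait();                   // partial output ready: don't stall
}
```

Blocking only when oidx == 0 means the work thread sleeps until there is
something to do, instead of spinning through work() with nothing to output.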

Maybe some other changes to the hdlc_framer block might help for high
sample rates:

  1. The block does a number of dynamic std::vector<> and PMT
    instantiations and destructions in each call to work(). If you use the
    oprofile tools, those likely generate some measurable overhead
    attributable to malloc() and to the atomic increments and decrements of
    the Boost shared pointers behind the PMTs. You might gain a few % CPU
    back by avoiding the constant allocation & deallocation, converting the
    block to use class variables where it makes sense.

  2. The block also has two memcpy() calls in each invocation of work().
    If you can figure out a way to optimize one away, you’ll get some
    savings. I don’t see an obvious way to do that right now.
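To illustrate item 1, here is a hedged sketch of the class-variable
conversion, using only plain standard C++ (Unpacker and the LSB-first
unpacking logic are illustrative, not the hdlc_framer code):

```cpp
#include <cstdint>
#include <vector>

// Allocating variant: the pattern under discussion, where each call to
// work() constructs and destroys fresh std::vector<> objects.
std::vector<uint8_t> unpack_alloc(const std::vector<uint8_t>& bytes) {
    std::vector<uint8_t> bits;                 // heap allocation every call
    bits.reserve(bytes.size() * 8);
    for (uint8_t b : bytes)
        for (int i = 0; i < 8; i++)
            bits.push_back((b >> i) & 1);      // LSB first
    return bits;
}

// Member-buffer variant: the buffer lives in the block (here, a small
// class) and is reused, so steady-state calls do no allocation at all.
class Unpacker {
    std::vector<uint8_t> d_bits;               // analogue of a class variable
public:
    const std::vector<uint8_t>& unpack(const std::vector<uint8_t>& bytes) {
        d_bits.clear();                        // keeps capacity; no free/malloc
        d_bits.reserve(bytes.size() * 8);
        for (uint8_t b : bytes)
            for (int i = 0; i < 8; i++)
                d_bits.push_back((b >> i) & 1);
        return d_bits;
    }
};
```

Both variants produce identical output; only the allocation behavior
differs, which is exactly the kind of change oprofile would show up as
reduced time in malloc()/free().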
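For item 2, the general shape of the optimization is to generate output
directly into the scheduler's output buffer rather than staging it in a
temporary and copying it over. A toy example with made-up framing helpers
(FLAG, frame_staged and frame_direct are illustrative, not code from the
block):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

static const uint8_t FLAG = 0x7E;  // HDLC flag byte, used here as toy data

// Two-copy version: assemble the frame in a scratch vector, then memcpy
// the whole thing into the output buffer.
size_t frame_staged(const uint8_t* payload, size_t n, uint8_t* out) {
    std::vector<uint8_t> scratch;
    scratch.push_back(FLAG);
    scratch.insert(scratch.end(), payload, payload + n);  // copy 1
    scratch.push_back(FLAG);
    std::memcpy(out, scratch.data(), scratch.size());     // copy 2
    return scratch.size();
}

// One-copy version: write each part of the frame directly into out.
size_t frame_direct(const uint8_t* payload, size_t n, uint8_t* out) {
    size_t o = 0;
    out[o++] = FLAG;
    std::memcpy(out + o, payload, n);                     // the only copy
    o += n;
    out[o++] = FLAG;
    return o;
}
```

Whether this is actually applicable to hdlc_framer depends on how its
intermediate buffers interact with d_leftovers; as noted above, there is no
obvious way to do it there right now.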

Also, your flowgraph does not have a throttle block to enforce a
realistic timing for the sample rate you have chosen, so the hdlc_framer
is going to run as fast as it can. The null sink essentially ensures
there is always output buffer space for the hdlc_framer.


Regards,
Andy

Issue #1 is likely the problem. Thanks for pointing that out, Andy! The
other two issues should be pretty minuscule in overhead, but are probably
good ideas nonetheless. The block was written for clarity rather than for
speed, but even so it shouldn’t be particularly heavy-duty.

–n

On Tue, Jun 16, 2015 at 9:53 AM, Andy W. [email protected] wrote:
On Tue, 2015-06-16 at 12:27 -0700, Johnathan C. wrote:

On Tue, Jun 16, 2015 at 10:16 AM, Nick F. [email protected]
wrote:

    Issue #1 is likely the problem. Thanks for pointing that out,
    Andy! The other two issues should be pretty minuscule in
    overhead, but are probably good ideas nonetheless. The block
    was written for clarity rather than for speed, but even so it
    shouldn't be particularly heavy-duty.

In the future, we plan to allow designating a non-empty message queue
as a prior condition to be satisfied before the scheduler calls
work(), so it won’t ever be necessary to block inside work() directly
on the queue itself. This is a common issue with source blocks that
rely on input message ports for content.

Hi John,

That will only help for blocks that can guarantee an incoming message
never generates more than noutput_items items in a call to work().
Otherwise the block has to save away the residual output items, to be
output in a later call to work(), and there is no need to block on
message input while those residual items remain.

I.e., in a call to work():

  0. output any residual items saved from the last call to work()
  1. grab a message off the queue (deciding whether or not to block, as
    appropriate)
  2. generate items, which may number more than noutput_items
  3. store any residual items for output in the next call to work()
  4. return the number of items generated, up to noutput_items

The hdlc_framer looks like it has this notion of a residual in
d_leftovers.
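The steps above can be sketched in a self-contained toy. ResidualSource is
a stand-in for a real gr::sync_block, and only the d_leftovers name is
borrowed from hdlc_framer_pb_impl; everything else here is illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

class ResidualSource {
    std::deque<std::vector<int>> d_msgs;   // stand-in for the message queue
    std::vector<int> d_leftovers;          // residual items from a prior call

public:
    void post(std::vector<int> msg) { d_msgs.push_back(std::move(msg)); }

    // Plays the role of work(): returns the number of items written into
    // out, which is always <= noutput_items.
    int work(int noutput_items, int* out) {
        int oidx = 0;

        // Step 0: drain residual items saved from the last call first.
        while (oidx < noutput_items && !d_leftovers.empty()) {
            out[oidx++] = d_leftovers.front();
            d_leftovers.erase(d_leftovers.begin());
        }

        while (oidx < noutput_items) {
            // Step 1: grab a message (non-blocking here; a real block
            // would block only when oidx == 0).
            if (d_msgs.empty()) break;
            std::vector<int> items = std::move(d_msgs.front());  // step 2
            d_msgs.pop_front();

            // Step 3: whatever doesn't fit is stored for the next call.
            size_t fits = std::min(items.size(),
                                   size_t(noutput_items - oidx));
            std::copy(items.begin(), items.begin() + fits, out + oidx);
            oidx += int(fits);
            d_leftovers.insert(d_leftovers.end(),
                               items.begin() + fits, items.end());
        }
        return oidx;  // step 4
    }
};
```

A 5-item message emitted through a 3-item output buffer comes out over two
calls: 3 items immediately, 2 from d_leftovers on the next call.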

Although the mechanism differs from GNU Radio's internal messaging, some
fixes I made to the gr-zeromq sub_source_impl, to make it work reliably
and at high throughput, also needed to deal with residual parts of
messages that could not be sent out in the current call to work(), and
with context-dependent blocking when checking for messages. (The current
sub_source_impl on the master branch discards the items in incoming
ZeroMQ messages that are beyond noutput_items, which is not a nice thing
to do.)

My $0.02

Regards,
Andy

On Tue, Jun 16, 2015 at 10:16 AM, Nick F. [email protected]
wrote:

Issue #1 is likely the problem. Thanks for pointing that out, Andy! The
other two issues should be pretty minuscule in overhead, but are probably
good ideas nonetheless. The block was written for clarity rather than for
speed, but even so it shouldn’t be particularly heavy-duty.

In the future, we plan to allow designating a non-empty message queue as
a prior condition to be satisfied before the scheduler calls work(), so
it won’t ever be necessary to block inside work() directly on the queue
itself. This is a common issue with source blocks that rely on input
message ports for content.

On Tue, Jun 16, 2015 at 1:32 PM, Marcus Müller
[email protected]
wrote:

if I understand you correctly, you think the situation calls for
something like a “wait for message” flag that a block would need to set
on every work() call?

Actually, this is the idea: when needed, the block makes a call to the
scheduler just before ending work() that says, in effect, “don’t call me
again until input message port X is non-empty.”

Otherwise, call me again whenever you otherwise would, which, for a
source block, is when there is room in the output buffer (subject to
other constraints).

This would allow work() to be invoked one or more times as needed, to
exhaust any pending data that exceeds noutput_items in a given call, and
then finally wait on a message for more content.

The key point would be to not block inside work(), which would prevent
the scheduler thread for that block from processing any other events,
but instead to let the scheduler itself do the waiting.
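A purely hypothetical sketch of that mechanism follows; none of these names
exist in GNU Radio, and the real scheduler interface would look different.
The block records a wait_for_msg flag just before returning from work(),
and a mini scheduler loop consults that flag instead of the block blocking
internally:

```cpp
#include <deque>
#include <vector>

// Hypothetical block: msg_queue stands in for an input message port,
// pending for residual items still to emit.
struct MiniBlock {
    std::deque<int> msg_queue;
    std::vector<int> pending;
    bool wait_for_msg = true;   // "don't call me until a message arrives"

    int work(int noutput_items, std::vector<int>& out) {
        int produced = 0;
        while (produced < noutput_items) {
            if (pending.empty()) {
                if (msg_queue.empty()) break;
                int msg = msg_queue.front();
                msg_queue.pop_front();
                pending.assign(2, msg);   // pretend each message yields 2 items
            }
            out.push_back(pending.front());
            pending.erase(pending.begin());
            produced++;
        }
        // Only ask to wait once all pending data has been exhausted, so
        // residuals can still drain on later calls without a new message.
        wait_for_msg = pending.empty();
        return produced;
    }
};

// Scheduler side: skip (rather than spin on) a block that asked to wait.
// A real scheduler would sleep on the message queue here, not return.
int run_once(MiniBlock& b, int noutput_items, std::vector<int>& out) {
    if (b.wait_for_msg && b.msg_queue.empty())
        return -1;   // would block in the scheduler until a message is posted
    return b.work(noutput_items, out);
}
```

Note how a message larger than noutput_items keeps wait_for_msg false, so
the scheduler keeps calling work() until the residual drains, and only then
parks the block on the message queue.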

On Wed, 2015-06-17 at 12:01 -0400, [email protected]
wrote:

On Tue, Jun 16, 2015 at 1:32 PM, Marcus Müller [email protected] wrote:

Hi John,

Yes, that seems right.

The block needs to continually keep the GNU Radio infrastructure
informed, before each call to work(), about whether to wait for a
message or not. The infrastructure can’t possibly know otherwise.

Regards,
Andy

Hi Andy,
if I understand you correctly, you think the situation calls for
something like a “wait for message” flag that a block would need to set
on every work() call?

Best regards,
Marcus
