Performance monitor runtime usage relative to the whole system

I’ve learned that the sum of the runtime usage values of all blocks should be, and is, one.

But I don’t think that GNU Radio uses 100 percent (= one) of the CPU’s capability.

And I think that ‘one’ is a fraction of the capacity that is allowed and allocated to GNU Radio.

Therefore, the runtime usage of each block relative to the whole system can be calculated by multiplying it by the CPU usage of the GNU Radio process.

That is, if the Python process running the flow graph occupies 10 percent of the CPU’s capability and a demodulator block indicates 20 percent in the performance monitor, the actual runtime usage of the block would be 0.1 * 0.2 = 2 percent of the CPU.

(How do I find the usage of the Python process? Maybe with the top command on POSIX systems.)

Is what I think correct?

Regards,
Jeon.

Hi Jeon,

> But I don’t think that GNU Radio uses 100 percent (= one) of the CPU’s capability.
Well, that obviously depends on what you /do/ with GNU Radio.
Generally, GNU Radio scales pretty well, so I’m going to reply with:
GNU Radio tries to consume as much CPU as possible. There are limiting
factors, mainly RAM access and I/O, that limit how much CPU can get
consumed.

As you seem to be running a receiver: there’s an upper limit on how much
CPU can get used, set by the rate of samples coming in. You can only
process as much signal as there is. Also, things that are out of the
scope of the GNU Radio process tend to play an important role here: the
kernel has to talk to your radio hardware, etc.

I’m not quite sure what you refer to with “one”; do you mean the 1 that
tools like “top” would display (namely: one fully occupied CPU core
according to a more or less useful statistic; single processes can in
that metric actually have CPU loads > 1)?

> Therefore, the runtime usage of each block relative to the whole system can be calculated by multiplying it by the CPU usage of the GNU Radio process.
No. GNU Radio is a heavily multi-threaded architecture, so each block
runs in its own thread. Assuming you have a multi-core CPU, multiple
threads will run at once; one core of your CPU might be 100% occupied by
the GNU Radio block thread(s) running on it, whereas another is only 80%
busy etc. This does not allow direct mapping of “percentage of CPU load”
to actual time.

However, the performance counters offer exactly what you seem to need:
the percentages you’re looking at are computed from the microseconds
that each block spends in its work function. So just look at these
total times.
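
To make that concrete, here is a minimal sketch (mine, not from this thread) that reads those counters from Python. It assumes a GNU Radio 3.7 build with performance counters enabled (e.g. "[PerfCounters] on = True" in the runtime configuration) and that your build exposes the pc_work_time_total() accessor:

import time
from gnuradio import gr, blocks, analog

# Toy flowgraph: cosine source -> throttle -> null sink.
tb = gr.top_block()
src = analog.sig_source_f(32000, analog.GR_COS_WAVE, 1000, 1.0)
thr = blocks.throttle(gr.sizeof_float, 32000)
snk = blocks.null_sink(gr.sizeof_float)
tb.connect(src, thr, snk)

tb.start()
time.sleep(2.0)   # let the flowgraph run for a while
tb.stop()
tb.wait()

# pc_work_time_total() accumulates the time each block has spent inside
# its work() function; these per-block totals are what the monitor
# normalizes into percentages.
for name, blk in (("sig_source", src), ("throttle", thr), ("null_sink", snk)):
    print("%-10s total work time: %.0f" % (name, blk.pc_work_time_total()))

Comparing these totals directly sidesteps the core-to-core mapping problem described above.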

I think it would be interesting to hear what you want to do, maybe we
have an idea how to measure what is of interest to you.

Best regards,
Marcus

If you just want %CPU, then I tend to do this to see if GNU Radio is
keeping up:

$ top

If your system is less than 15% idle, odds are something is not keeping
up some of the time. Looking at individual GNU Radio threads is not
helpful in this case, as system capacity limits will mask blocks’ desired
CPU demand. Scale down your sample rate or the number of blocks in the
flowgraph, and try again.
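
If you want to script that first check instead of eyeballing top, here is a minimal sketch using psutil (my addition; Andy’s mail only uses top):

import psutil

# Sample system-wide CPU times over one second; .idle is the idle percentage.
idle = psutil.cpu_times_percent(interval=1.0).idle
if idle < 15.0:
    print("Only %.1f%% idle: the flowgraph may not be keeping up." % idle)
else:
    print("%.1f%% idle: capacity looks OK." % idle)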

$ ps -eLo pcpu,pid,tid,cls,rtprio,comm | grep python | sort -n

(substitute the name of the process running your flowgraph for “python”)
Assuming that your total system is 15% idle or more, if any thread is
above 95% CPU, odds are that thread/block is not able to keep up at
least some of the time.
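
The per-thread check can be scripted the same way; a sketch using psutil, where "python" is an assumed process name you should adjust to whatever runs your flowgraph:

import time
import psutil

for proc in psutil.process_iter():
    try:
        if proc.name() != "python":
            continue
        # Snapshot per-thread CPU times, wait one second, then diff them.
        before = {t.id: t.user_time + t.system_time for t in proc.threads()}
        time.sleep(1.0)
        for t in proc.threads():
            # CPU seconds used in the 1 s window, i.e. the fraction of one core.
            busy = (t.user_time + t.system_time) - before.get(t.id, 0.0)
            if busy > 0.95:
                print("pid %d thread %d is >95%% busy" % (proc.pid, t.id))
    except psutil.NoSuchProcess:
        continue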

From there, I use oprofile to profile blocks to see where a block might
be wasting CPU cycles.

Regards,
Andy

Agreed :)

Dear Marcus,

Thank you for your detailed answer. Now I feel I am getting to it… but
not fully, yet :)

The ‘one’ I mentioned in the previous post can be understood from this
figure:
http://i.imgur.com/QG5uryH.png
I posted the same figure in another thread some days ago.

Anyway, the ‘one’ I meant is the total sum of the percent runtime
values. That sum is one, and it should be.
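
As a tiny sanity check on that (made-up numbers, not measurements): normalizing per-block work times by their sum always yields fractions that add up to one, no matter how much of the machine’s CPU the process uses.

# Hypothetical microsecond totals spent in each block's work() function.
work_us = {"demod": 200.0, "filter": 500.0, "sink": 300.0}
whole = sum(work_us.values())
fractions = {name: t / whole for name, t in work_us.items()}
print(fractions)  # demod: 0.2, filter: 0.5, sink: 0.3
assert abs(sum(fractions.values()) - 1.0) < 1e-9  # one by construction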

Regards,
Jeon.
