Serious performance regression in recent GNU Radio

I have an application, SIDSuite, that I’ve been running continuously on
my hardware here for about 18 months, with reasonable performance (the
UI isn’t totally snappy, but it’s acceptable). Some time recently, with
an upgrade of GNU Radio, the performance became utterly unacceptable:
the UI became unusable, and updates to the FFT and Waterfall sinks
became very “chunky”. I haven’t changed the app in months and months.

So, I started taking my GNU Radio back further and further in time,
until I was back to “normal”. I had to regress my Git tree back to:

commit 2ed887b69a3b15840830998c4e6157176d427f60
Author: Josh B. [email protected]
Date: Sat Dec 31 13:06:01 2011 -0800

in order to get decent performance again.

I have no idea what’s causing the performance meltdown, but regressing
back to that commit fixes it, again with no changes to the application
in question.

I will try creeping forward from this commit to see if I can narrow it
down. Blah.


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium

On 02/05/2012 12:49 PM, Marcus D. Leech wrote:

I will try creeping forward from this commit to see if I can narrow it
down. Blah.

See if it was this merge, from the max outputs branch:
ab7cfce4a78dbb95a7c8871f56f4cb037e5b1bb2

-Josh

On 02/05/2012 04:08 PM, Tom R. wrote:

All this does is add an inline std::min check for sources and normal
blocks, though, and on my machines it showed absolutely no performance
degradation. If this is seriously what’s causing your problems, then you
must have been right on the edge performance-wise, and these few added
cycles took you over the top.

Tom

That doesn’t appear to be it. I just built and installed that, and the
app is still fine. I’ll fast-forward a little bit and see what happens.

On Sun, Feb 5, 2012 at 3:49 PM, Marcus D. Leech [email protected]
wrote:

I will try creeping forward from this commit to see if I can narrow it
down. Blah.


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium
http://www.sbrac.org

The only addition that I can think of is the max_noutputs addition that
went into the scheduler, which was merge
ab7cfce4a78dbb95a7c8871f56f4cb037e5b1bb2, made on Jan 3.

All this does is add an inline std::min check for sources and normal
blocks, though, and on my machines it showed absolutely no performance
degradation. If this is seriously what’s causing your problems, then you
must have been right on the edge performance-wise, and these few added
cycles took you over the top.
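
Roughly, the change amounts to something like this simplified sketch
(the helper is hypothetical, not the actual scheduler code):

    // Sketch only: clamp how many output items a block is asked to
    // produce, so a user-set maximum is honored even when the output
    // buffer has more room available.
    #include <algorithm>

    static int clamp_noutput_items(int space_available,
                                   int max_noutput_items)
    {
        if (max_noutput_items > 0)
            return std::min(space_available, max_noutput_items);
        return space_available;
    }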

Tom

On 02/05/2012 04:08 PM, Tom R. wrote:

So, what if it was your own commit that seemed to be causing a
problem? Wouldn’t that be embarrassing? I think so :-( :-(

commit 2a2411a598c222e2ef82b857c0b53e0a9d1daf3f
Author: Marcus L. [email protected]
Date: Sun Jan 15 23:49:52 2012 -0500

    core: fix for off-by-one issue in strip chart. Increases buffer
    size for longer displays.

I have no idea why this should be a problem; the buffer-shifting for
stripchart mode is all done on the C++ side, and the updates are done at
a fairly lazy rate (a few Hz). So it’s probably an issue on the Python
side with bigger plot buffers. Perhaps there’s some kind of N**2 scaling
ugliness going on that wasn’t immediately obvious to me when I did that
patch.
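
For illustration, here’s one way an innocent-looking buffer enlargement
can go quadratic (a hypothetical sketch, not the actual
gr_oscope_guts.cc code):

    // Hypothetical sketch: if every new sample shifts the whole N-point
    // stripchart buffer by one slot, each sample costs O(N) copies.
    #include <cstring>
    #include <vector>

    void stripchart_append(std::vector<float> &buf, float sample)
    {
        if (buf.empty())
            return;
        // Shift the buffer left by one slot: O(N) per sample.
        std::memmove(buf.data(), buf.data() + 1,
                     (buf.size() - 1) * sizeof(float));
        buf.back() = sample; // newest sample goes at the end
    }

Appending M new samples per update then costs O(M * N); if M grows along
with the buffer, that’s exactly the kind of N**2 behavior suspected
above.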

I think ultimately, the plotting stuff has to move so that most of the
computational stuff is done in C++ land, with only the thinnest pieces
done on the Python side.


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium

On Sun, Feb 5, 2012 at 5:07 PM, Marcus D. Leech [email protected]
wrote:

So performance degradation seems to scale with the size of
OUTPUT_RECORD_SIZE in gr_oscope_guts.cc. This basically determines how
big the messages are that are sent along to the plotter routines in
Python, but the STRIPCHART routine also uses it to size the buffer it
stores the strip-chart samples in.

It sounds like we need someone to work on strip-chart functionality for
the qtgui sinks. They do everything in C++ land, so we could optimize
there more easily.

In other news, glad you found the problem. What do you want to do about
it in the meantime, since it’s your code, and I think you’re the primary
user of the oscope under these conditions?

Tom

On 02/05/2012 04:44 PM, Marcus D. Leech wrote:

I think ultimately, the plotting stuff has to move so that most of the
computational stuff is done in C++ land, with only the thinnest pieces
done on the Python side.

So performance degradation seems to scale with the size of
OUTPUT_RECORD_SIZE in gr_oscope_guts.cc. This basically determines how
big the messages are that are sent along to the plotter routines in
Python, but the STRIPCHART routine also uses it to size the buffer it
stores the strip-chart samples in.
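
To make the coupling concrete (a hypothetical sketch of the
relationship, not the real data path; the value shown is made up):

    // Hypothetical sketch: one constant sizes both the per-channel
    // message handed to the Python plotter and the stripchart sample
    // store, so enlarging it grows every update proportionally.
    #include <vector>

    static const int OUTPUT_RECORD_SIZE = 16384; // made-up enlarged value

    struct channel_state {
        std::vector<float> stripchart;  // OUTPUT_RECORD_SIZE samples kept
        std::vector<float> out_message; // OUTPUT_RECORD_SIZE samples sent
        channel_state()
            : stripchart(OUTPUT_RECORD_SIZE),
              out_message(OUTPUT_RECORD_SIZE) {}
    };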


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium

On 02/05/2012 05:19 PM, Tom R. wrote:


I’ve backed OUTPUT_RECORD_SIZE down to 4096 on my system here, and it
has acceptable performance for me. In the particular application at
hand, there are six channels being displayed in a strip-chart, and the
chart is updated at about 4 Hz (I think; I’ll have to check in the
flow-graph). Looking at the plotter code in scope_window.py, there’s a
lot of computational goo going on in Python land, including iterating
over the input message buffer times the number of channels. It’s pretty
ugly down there :-(
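
For a rough sense of scale under those numbers (a back-of-envelope
sketch; the 4 Hz refresh is my estimate from above):

    // Back-of-envelope: samples the Python plotting loop walks per
    // second with six channels, OUTPUT_RECORD_SIZE points each,
    // refreshed at roughly 4 Hz.
    #include <cstdio>

    int main()
    {
        const int channels = 6;
        const int record_size = 4096;  // after backing the constant down
        const double update_hz = 4.0;  // approximate refresh rate

        double samples_per_sec = channels * record_size * update_hz;
        std::printf("~%.0f samples walked per second\n", samples_per_sec);
        return 0; // prints ~98304 with these settings
    }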


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium