Duration of Calculations in Python scripts

Hello,

In the context of spectrum sensing in the 2.4 GHz band
with a modified version of the usrp_spectrum_sense.py
script, I am running into high latency. The time for
recording samples and all the tuning work is supposed to
be much shorter than what I am currently seeing (roughly
88 ms between two center frequencies), so I was wondering
whether the problem lies in the additional calculations
the Python script performs on every message it receives
from the C++ code.
In particular, these calculations sum up the vector taken
from the source's message queue and compare the sum to a
previously calculated threshold inside some if-statements.
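
Roughly, the per-message work looks like the sketch below
(placeholder names, not the actual code from my script):

import numpy as np

def process_message(payload, threshold):
    # Interpret the raw message payload as a vector of 32-bit
    # floats (the FFT magnitudes handed over by the C++ side).
    bins = np.frombuffer(payload, dtype=np.float32)
    # Sum the vector and compare against the precomputed threshold.
    return bins.sum() > threshold

# Example with dummy data:
dummy = np.random.rand(512).astype(np.float32).tostring()
print(process_message(dummy, 100.0))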

Any suggestions are highly appreciated.

Thanks.
-Sebastian

On Thu, Feb 23, 2012 at 4:39 AM, Sebastian D. [email protected] wrote:

> In particular, these calculations sum up the vector taken
> from the source's message queue and compare the sum to a
> previously calculated threshold inside some if-statements.
>
> Any suggestions are highly appreciated.
>
> Thanks.
> -Sebastian

If it's latency in the flowgraph, you can try the new
max_noutput_items feature (pass this value to tb.start(N)
or tb.run(N), whichever is being used). The smaller this
number, the faster blocks will pass data between each
other, but also the harder your computer will have to work
to keep up.
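
For example, something along these lines (a toy flowgraph
with stand-in blocks; the block names assume the
gnuradio-core namespace, and your script would have its
own chain in there):

from gnuradio import gr

class my_topblock(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        # Stand-in blocks; the real spectrum-sense chain goes here.
        src = gr.null_source(gr.sizeof_gr_complex)
        thr = gr.throttle(gr.sizeof_gr_complex, 1e6)
        snk = gr.null_sink(gr.sizeof_gr_complex)
        self.connect(src, thr, snk)

tb = my_topblock()
tb.start(256)   # cap max_noutput_items; smaller -> lower latency,
                # but more scheduler overhead
# ... pull messages / retune while the flowgraph runs ...
tb.stop()
tb.wait()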

If you think that the latency is due to Python
calculations, you can think about finding a more efficient
SciPy/NumPy implementation of the calculations. If one
doesn't exist, you can write some C code and look at the
f2py program that ships with NumPy; it's a simpler wrapper
generator than SWIG that converts from Fortran to Python,
but I believe it nicely supports C functions as well.
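
A rough sketch of that route, with a made-up module name
(fastsum) and a trivial Fortran routine just for
illustration:

# Suppose fastsum.f90 contains:
#
#   subroutine sumvec(x, n, s)
#     integer, intent(in) :: n
#     real(8), intent(in) :: x(n)
#     real(8), intent(out) :: s
#     s = sum(x)
#   end subroutine
#
# and has been built once with:  f2py -c -m fastsum fastsum.f90
import numpy as np
import fastsum   # hypothetical extension module generated by f2py

data = np.random.rand(1024)
total = fastsum.sumvec(data)   # n is inferred from len(data), s is returned
print(total > 100.0)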

Just a couple of thoughts.

Tom

On Thu, 23 Feb 2012 10:29:02 -0500, Tom R. [email protected] wrote:

Thanks Tom,

I have tried your suggestions, but none of them seemed to
help much.

I also ran the script under cProfile to get a better
timing analysis, and it turns out that
"gr_py_msg_queue__delete_head" is the real evildoer.
According to cProfile it takes 94 ms per call. Since
cProfile adds some overhead of its own, that fits the
almost 90 ms I measured myself quite well. Does anyone
have an idea whether and how this could be sped up?
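
For reference, the profiling run looked roughly like this
(main() is just a placeholder for however the modified
script is actually started):

import cProfile
import pstats

def main():
    # ... build and run the modified spectrum-sense flowgraph here ...
    pass

cProfile.run('main()', 'sense.prof')
stats = pstats.Stats('sense.prof')
stats.sort_stats('cumulative').print_stats(20)   # the 20 costliest calls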

Thanks
-Sebastian