usrp_basic_rx::stop appears to take a long time, and reading


Discuss-gnuradio mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/discuss-gnuradio

Admin note: please don’t post HTML to the list. Thanks.

On Fri, May 25, 2007 at 04:23:29PM -0700, Dave Gotwisner wrote:

The program loop is essentially a series of configuration commands
(such as set_rx_frequency, etc.), followed by a start() command.  I
then do read() until I get the requested number of samples.  I then do
a stop() and, for the hell of it, loop on a read (until there is no
data).
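In outline, that cycle looks like the sketch below. Every class and method name here is a mock stand-in for illustration, not the actual libusrp API:

```cpp
// Mock receiver standing in for usrp_standard_rx; all names here are
// illustrative stand-ins, not the real libusrp API.
struct mock_rx {
    bool running = false;
    long queued = 250000;                  // samples buffered by the "hardware"
    void set_rx_frequency(double) {}       // configuration step (no-op here)
    bool start() { running = true; return true; }
    bool stop()  { running = false; return true; }
    long read(long want) {                 // returns sample count, or -1 if empty
        if (queued <= 0) return -1;
        long n = want < queued ? want : queued;
        queued -= n;
        return n;
    }
};

// One capture cycle as described above: configure, start, read until we
// have 'total' samples, stop, then loop on read until there is no data.
long capture(mock_rx &rx, double freq_hz, long total) {
    rx.set_rx_frequency(freq_hz);
    rx.start();
    long got = 0;
    while (got < total) {
        long n = rx.read(total - got);
        if (n <= 0) break;                 // signed check: -1 means no data
        got += n;
    }
    rx.stop();
    while (rx.read(4096) > 0) {}           // drain anything still queued
    return got;
}
```

Note the signed return type on read(): the drain loop after stop() only terminates because -1 compares less than zero.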



For the purpose of the test, I am using the default buffer sizes for
the ::make call (I also tried fusbBlockSize == 1024 and fusbNblocks =
8K).  The decim rate used for the usrp_standard_rx constructor is 8.  I
am trying to capture 100,000 samples at each frequency.

There’s absolutely no reason to be using fusb_nblocks == 8192.
Try using fusb_block_size = 4096 and fusb_nblocks = 16

Thanks,

    Dave

In the GNU Radio code, which you don’t appear to be using, we have
gnuradio-examples/python/usrp/usrp_spectrum_sense.py, which does
something similar to what you are doing.

Eric

Eric B. wrote:

Admin note: please don’t post HTML to the list. Thanks.

Sorry, hope this is better. I should have caught that before sending.

8K).  The decim rate used for the usrp_standard_rx constructor is 8.  I
am trying to capture 100,000 samples at each frequency.

There’s absolutely no reason to be using fusb_nblocks == 8192.
Try using fusb_block_size = 4096 and fusb_nblocks = 16

My preference was to use the default settings, which the documentation
indicated should be sufficient. When we got overruns, and before
looking into the stop() not working, I bumped the values as a test.

A capture which takes a fraction of a second shouldn’t then take 8
seconds closing itself out.

Is something broken, or (as is more likely the case), am I missing
something? Is anyone else trying to use libusrp in a similar manner?

In the GNU Radio code, which you don’t appear to be using, we have
gnuradio-examples/python/usrp/usrp_spectrum_sense.py, which does
something similar to what you are doing.

No, we aren’t using the GNU Radio code. We are using libusrp directly,
since the target platform for this work won’t have Python. We are using
GNU Radio 3.0.3, and in looking at that directory, I don’t see the
usrp_spectrum_sense.py program. I’ll upload a more recent version of
GNU Radio on Tuesday and look for it then.

Eric

Thanks,

Dave

On Sat, May 26, 2007 at 05:24:18PM -0700, Dave Gotwisner wrote:

Thanks,

Dave

usrp_spectrum_sense.py is in the subversion trunk.

See http://gnuradio.org/trac/wiki/Download for directions to download

Eric

Eric B. wrote:

There’s absolutely no reason to be using fusb_nblocks == 8192.
Try using fusb_block_size = 4096 and fusb_nblocks = 16

I tried it with 4K/16. I am now running at fusb_block_size = 16K and
fusb_nblocks = 512. I also tried with 16K/16. Results for both are
similar.

Changing to the larger block size significantly sped up the stop()
call. I have now dropped (improved) to about 6 captures a second,
instead of one every 8 seconds. The increased block size apparently
decreased the URB release overhead significantly.

The second observation is that I loop forever (30 minutes before I
gave up) after doing the stop, where read() is still returning data
(valid or not, I don’t know). I would expect that stop() should flush
any data, or at least prevent any new data from coming into the
system, but this doesn’t appear to be the case. Given my application,
I must tie the data for each sample to a specific frequency, so I need
to guarantee that the first (through last) reads for any tuning
operation all apply to that tuning operation.

Reading the documentation for the usrp_standard_rx class’s start() and
stop() commands indicates they are to start and stop data transfers. The
read after stop returning forever was my mistake. I am used to read()
returning a size_t. The problem is, size_t is unsigned, and the usrp
read returns -1 (or in unsigned land, a very large number). Changing
from size_t to int fixed this problem.
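A minimal illustration of that bug (fake_usrp_read is an invented stand-in for the libusrp call, not the real API):

```cpp
#include <cstddef>

// Stand-in for the libusrp read call (the name is illustrative):
// returns a sample count, or -1 on error / no data.
long fake_usrp_read() { return -1; }

// Buggy pattern: size_t is unsigned, so the -1 sentinel wraps to a huge
// positive value and the "no more data" test never fires.
bool still_has_data_size_t() {
    std::size_t n = fake_usrp_read();   // -1 becomes SIZE_MAX
    return n > 0;                       // true: the read loop spins forever
}

// Fixed pattern: a signed type preserves -1.
bool still_has_data_int() {
    int n = fake_usrp_read();
    return n > 0;                       // false: the loop terminates
}
```

The unsigned copy never sees a non-positive value, which is exactly why the post-stop read loop appeared to return data forever.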

Is something broken, or (as is more likely the case), am I missing
something? Is anyone else trying to use libusrp in a similar manner?

In the GNU Radio code, which you don’t appear to be using, we have
gnuradio-examples/python/usrp/usrp_spectrum_sense.py, which does
something similar to what you are doing.

I looked at the example, and if my understanding of the code is right,
you never stop getting data from the USRP (or shut it off). You change
the frequency, and suck samples for a fixed period of time (throwing
them out [basically, the amount of time it would take to flush the old
data through the USB buffering system]) before capturing again (and
using them). Does my usrp_spectrum_sense.py understanding match
reality? I am not really a Python person. It seems to me that an
efficient start/stop implementation would be more effective than having
to read data that you never need.

In our case, we want to walk a large frequency range, capturing data for
approximately 100 - 200 milliseconds per frequency, and would prefer to
have less than 50 milliseconds of overhead between captures. We also
need to do this on a potentially loaded CPU, so we need large enough
buffering to reduce the likelihood of us overrunning (assuming other
tasks, such as games or other CPU hogs want much of the available CPU
resources). The amount of CPU resource we need should be out of the
available CPU after other things run, rather than as the highest
priority task. From calculations based upon your proposed buffering, I
get (4096*16)/(32 MB/s) = ~2 milliseconds of buffering; we feel we need a
minimum of about 50 milliseconds of buffering, hence the large numbers
for fusb_block_size and fusb_nblocks.
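The calculation generalizes to a one-liner. The 32 MB/s figure assumes decim 8, i.e. 8 MS/s of 16-bit complex (4-byte) samples:

```cpp
// Milliseconds of buffering implied by the fast-USB parameters: total
// queued bytes over the USB byte rate. The 32 MB/s default assumes
// decim 8, i.e. 8 MS/s of 16-bit complex (4-byte) samples.
double buffering_ms(long fusb_block_size, long fusb_nblocks,
                    double bytes_per_sec = 32e6) {
    return fusb_block_size * fusb_nblocks / bytes_per_sec * 1000.0;
}
```

By this measure the suggested 4096 x 16 gives about 2 ms of buffering, while 16K x 512 gives about 262 ms.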

FYI, I tried building the trunk code on my Ubuntu box, and when I did
the “./configure” command, it reported problems finding guile. If I
look at the packages on my machine, Synaptic reports that guile 1.6.7-2
is installed on the machine, which should match the requirements from
the README file.

Dave

Eric B. wrote:

I haven’t seen any comments about the suggestions I made regarding the
file systems issues with ext3 vs ext2 and/or lame laptop disk performance.

Care to comment on those? I’ve been assuming that you’re running
under GNU/Linux. If not, then all the fusb_* stuff may be a nop.

By similar, I meant that the behavior appears to be independent of
buffer size (16K/512 and 16K/16). Yes, about the overruns. More info
on those: I have modified my program to continually perform “configure;
start; read; stop” for a fixed sample. I have eliminated the
variability of the frequency from the issue, as I now tune to the same
frequency. If I capture 100,000 samples, every 8th read group overruns.
If I go to 200,000 samples, it increases to every 4th. If I go to
50,000 samples, it goes to every 16th.
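Taken at face value, the three reported rates are mutually consistent: each works out to one overrun per 800,000 captured samples, roughly 100 ms of data at the 8 MS/s implied by decim 8. A quick check of the arithmetic:

```cpp
// Samples captured between consecutive overruns: capture size times the
// reported number of capture groups per overrun.
long samples_between_overruns(long samples_per_capture, long groups_per_overrun) {
    return samples_per_capture * groups_per_overrun;
}
```

All three cases (100000 x 8, 200000 x 4, 50000 x 16) give 800,000 samples, suggesting the overruns recur at a fixed data interval rather than scaling with the capture size.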

The time elapsed (100000 samples) from after the stop to before the
start is 12 milliseconds. If you include the start/stop calls, it goes
to 90 milliseconds.

The software is running Ubuntu Linux with the hard drive being an NFS
mount. I am not writing any of the data to disk, so the disk I/O /
network I/O should essentially be limited to output across telnet back
to my host (another Linux box running VNC), and any demand paging that
the program is doing. Running or not running oprofile makes no
difference; the load average hovers between 0.00 and 0.10. My program
consumes at most 20% of the CPU.

The ext2/3 stuff was with respect to someone else’s query, not mine. I
spent today trying to get to the bottom of start/stop timings and only
spent about an hour on the overruns. If you think putting the code on an
ext2 fs vs a network fs will make a difference, I will do so, but I
doubt it, since I am not writing to disk.


Attempt to enable realtime scheduling

I’ll pursue this more tomorrow.

the “./configure” command,

did you do a ./bootstrap first?

Yes. I did everything as me, not as root, though, if that makes a
difference.

Eric

Dave

On Tue, May 29, 2007 at 05:06:37PM -0700, Dave Gotwisner wrote:

There’s absolutely no reason to be using fusb_nblocks == 8192.
Try using fusb_block_size = 4096 and fusb_nblocks = 16

I tried it with 4K/16. I am now running at fusb_block_size = 16K and
fusb_nblocks = 512. I also tried with 16K/16. Results for both are similar.

When you say the results are similar, do you mean that you are still
seeing the overruns?

I haven’t seen any comments about the suggestions I made regarding the
file systems issues with ext3 vs ext2 and/or lame laptop disk
performance.

Care to comment on those? I’ve been assuming that you’re running
under GNU/Linux. If not, then all the fusb_* stuff may be a nop.

In the GNU Radio code, which you don’t appear to be using, we have
gnuradio-examples/python/usrp/usrp_spectrum_sense.py, which does
something similar to what you are doing.

I looked at the example, and if my understanding of the code is right,
you never stop getting data from the USRP (or shut it off).

That’s correct.

You change the frequency, and suck samples for a fixed period of time (throwing
them out [basically, the amount of time it would take to flush the old
data through the USB buffering system]) before capturing again (and
using them). Does my usrp_spectrum_sense.py understanding match
reality? I am not really a Python person. It seems to me that an
efficient start/stop implementation would be more effective than having
to read data that you never need.

start and stop are actually quite heavy-weight. They aren’t really
designed to do what you’re trying to do, but were added just to solve
the problem of there potentially being quite a bit of time between
when the constructor was called and when you really wanted the data to
start streaming.

There are no plans to change this behavior. If you’d like to, and
are willing to generate patches and assign copyright for the changes
to the Free Software Foundation, I would consider them. Assuming they
don’t break anything else.

The work currently going on with “inband signaling” should moot most
of these concerns, since we’ll be able to accurately track when a
frequency change took place with regard to the data stream.

In our case, we want to walk a large frequency range, capturing data for
approximately 100 - 200 milliseconds per frequency, and would prefer to
have less than 50 milliseconds of overhead between captures.

That’s exactly why we are NOT calling stop/start, but are rather
skipping the samples in the zone where the tuning and buffering matter.

We also need to do this on a potentially loaded CPU, so we need
large enough buffering to reduce the likelihood of us overrunning
(assuming other tasks, such as games or other CPU hogs want much of
the available CPU resources).

That’s what real time scheduling is for. Increasing the total buffer
size increases the worst-case latency that you have to account for if
you leave everything running. Hence our choice of smaller values.

# Attempt to enable realtime scheduling
r = gr.enable_realtime_scheduling()
if r == gr.RT_OK:
    realtime = True
else:
    realtime = False
    print "Note: failed to enable realtime scheduling"

In C++ it’s called gr_enable_realtime_scheduling().
See gr_realtime.h
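On GNU/Linux, enabling realtime scheduling ultimately goes through POSIX sched_setscheduler(). A minimal standalone sketch of the same try-and-fall-back pattern (this is not the actual gr_enable_realtime_scheduling() implementation, and try_enable_realtime is an invented name):

```cpp
#include <sched.h>
#include <cstdio>

// Standalone sketch of the pattern via POSIX sched_setscheduler();
// NOT the GNU Radio implementation -- try_enable_realtime is a made-up
// name. Typically fails with EPERM for unprivileged users, or EINVAL
// if the priority is outside SCHED_FIFO's valid range.
bool try_enable_realtime(int priority) {
    struct sched_param sp;
    sp.sched_priority = priority;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        std::printf("Note: failed to enable realtime scheduling\n");
        return false;
    }
    return true;
}
```

On Linux, SCHED_FIFO priorities run from sched_get_priority_min() to sched_get_priority_max(); a modest value above the minimum is usually enough to keep a capture thread ahead of desktop load.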

The amount of CPU resource we need should be out of the
available CPU after other things run, rather than as the highest
priority task. From calculations based upon your proposed buffering, I
get (4096*16)/(32 MB/s) = ~2 milliseconds of buffering; we feel we need a
minimum of about 50 milliseconds of buffering, hence the large numbers
for fusb_block_size and fusb_nblocks.

FYI, I tried building the trunk code on my ubuntu box, and when I did
the “./configure” command,

did you do a ./bootstrap first?

it reported problems finding guile. If I
look at the packages on my machine, Synaptic reports that guile 1.6.7-2
is installed on the machine, which should match the requirements from
the README file.

Dave

Eric

On Tue, May 29, 2007 at 07:04:26PM -0700, Dave Gotwisner wrote:

Eric B. wrote:

The software is running Ubuntu Linux with the hard drive being an NFS
mount. I am not writing any of the data to disk, so the disk I/O /
network I/O should essentially be limited to output across telnet back
to my host (another Linux box running VNC), and any demand paging that
the program is doing. Running or not running oprofile makes no
difference; the load average hovers between 0.00 and 0.10. My program
consumes at most 20% of the CPU.

The ext2/3 stuff was with respect to someone else’s query, not mine. I
spent today trying to get to the bottom of start/stop timings and only
spent about an hour on the overruns. If you think putting the code on an
ext2 fs vs a network fs will make a difference, I will do so, but I
doubt it, since I am not writing to disk.

OK, sorry, confused two sets of issues.

  print "Note: failed to enable realtime scheduling"

In C++ it’s called gr_enable_realtime_scheduling().
See gr_realtime.h

I’ll pursue this more tomorrow.

Good. That should keep the USRP from being shut out by the X-server,
etc.

did you do a ./bootstrap first?

Yes. I did everything as me, not as root, though, if that makes a
difference.

Nope. FWIW, I compile and install everything as my normal user.
Principle of least privilege.

Eric