USRP I/O Buffering

Hello,

I have a simple application written in Python using GNURadio. All I am trying to accomplish is to have the USRP data written to disk. The application works fine when I dump data to /dev/null or run it at reduced sampling rates. However, when I run at my desired sampling rate, I get a good number of buffer overflows (a series of “O” characters get printed).

The host machine that I am working on should have no problems sampling at the higher rates, but I have found a curious issue: not a whole lot of memory is used up by my GNURadio application.

As a result, I am wondering if there is any way to tell GNURadio to use larger buffers (on the order of a few GB) in order to prevent data from being dropped. I noted that several relevant calls seem to be available in the C++ API, but only a limited set of them is exposed in the Python wrappers. Latency is a non-issue for me at the moment, but I need to capture all the data without dropping any significant amount. Note that I have the code running with “real-time” priority in Linux.
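
For context, the kind of call I mean looks like this from Python (a minimal sketch, assuming a 3.7-era install where set_min_output_buffer() is actually wrapped; the blocks and the size are purely illustrative):

from gnuradio import gr, blocks

tb = gr.top_block()
src = blocks.null_source(gr.sizeof_gr_complex)
snk = blocks.null_sink(gr.sizeof_gr_complex)
# Ask for larger output buffers on the source; must be called before start().
# The value is in items, and the runtime allocates double-mapped circular
# buffers, so a multi-GB request may simply fail rather than use more RAM.
src.set_min_output_buffer(1 << 22)  # ~4M items = 32 MiB of complex floats
tb.connect(src, snk)
tb.start()
tb.stop()
tb.wait()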

Thanks for your help. FYI, I am running GNURadio on Ubuntu 14.04. Also, I know that my RAID set-up is capable of writing to disk at twice the rate of the incoming data, per benchmarking of the HDDs.

On 09/07/2014 04:24 PM, Peter W. wrote:

The host machine that I am working on should have no problems sampling at the higher rates, but I have found a curious issue: not a whole lot of memory is used up by my GNURadio application. [...]

Adding buffering for long-term recording simply delays, by a few seconds, the point at which your system cannot keep up.

What sample rates are you trying to record? What does your flow-graph
look like?

Buffering is useful to allow you to “ride through” short-term shortfalls in the ability to handle samples. It is useless for handling the situation where your long-term ability to keep up falls short of what you actually need.

Have you tried setting up a ramdisk, and writing to that?

Try messing around with the buffer size in gnuradio-runtime/lib/flat_flowgraph.cc:

#define GR_FIXED_BUFFER_SIZE (32*(1L<<10))

(that is the 3.7-era default, 32 KiB; raise it and rebuild).

Not sure I follow. If I have a large enough buffer, the data coming in and the data going out should be free of concurrency issues and should work just fine. That is, as long as the producer thread can keep adding data to the message queue, I should be OK. If it gets locked out due to concurrency issues, I can see that this is where there could be problems (e.g., I drop packets because the producer thread can’t push data since it’s locked out of the message queue). Also, a large enough buffer could alleviate the issue altogether, since I would be able to record for hours before the overflow took place.

My sample rates are on the order of 400 MB/s (well within PCIe x4 spec). Also, my RAID array has been benchmarked to handle roughly 800 MB/s. The flow-graph is nothing more than a USRP source tied into a File Meta Sink.
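
For concreteness, the graph is essentially the following (a minimal sketch of the Python equivalent; device arguments, center frequency, rate, and the output path are placeholders, and the File Meta Sink arguments follow the 3.7 API as I understand it):

from gnuradio import gr, blocks, uhd

tb = gr.top_block()
# USRP source producing 16-bit complex samples (sc16 keeps the rate at 4 B/sample)
src = uhd.usrp_source("", uhd.stream_args(cpu_format="sc16", channels=[0]))
src.set_samp_rate(100e6)
src.set_center_freq(1e9, 0)
# File Meta Sink: (itemsize, filename, samp_rate, relative_rate, type, complex)
snk = blocks.file_meta_sink(2 * gr.sizeof_short, "/mnt/raid/capture.dat",
                            100e6, 1, blocks.GR_FILE_SHORT, True)
tb.connect(src, snk)
tb.run()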

On 09/07/2014 07:08 PM, Peter W. wrote:

My sample rates are on the order of 400 MB/s (well within PCIe x4
spec). Also, my RAID array has been benchmarked to handle roughly 800
MB/s. The flow-graph is nothing more than a USRP source tied into a
File Meta Sink.

Certainly, if you could somehow put together hours of buffering at 400MB/sec, you’d be in good shape, provided you don’t exceed that buffer. But, well, at 400MB per second, 10 seconds is 4GB, 100 seconds is 40GB, etc. Pretty soon you run out of DRAM.

So, at 100Msps, you’ll have a hard time on most systems. Try just writing with rx_samples_to_file (a UHD-only example program that ships with UHD), which avoids Gnu Radio overheads entirely. But really, recording at 100Msps (400MB/sec in your case) is challenging even if you don’t do much with the samples, and Gnu Radio adds some overhead that you probably don’t need just to record them.
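
Something along these lines (option names per a 3.7-era UHD build; check --help on yours, and the frequency and output path are placeholders):

./rx_samples_to_file --args="" --rate 100e6 --freq 1e9 --type short --nsamps 100000000 --file /mnt/raid/capture.dat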

Hi,
this is very interesting!

As this actually seems to be more of a storage issue, I think kernel-based solutions might be good, too (as root):

echo 90 > /proc/sys/vm/dirty_ratio
echo 75 > /proc/sys/vm/dirty_background_ratio

That should allow your kernel to use up to 90% of your RAM for write caching (calls to write() only start to block above that), with background write-out kicking in once 75% of RAM is dirty.
As a somewhat related side note: some filesystems are better suited for fast sequential writes than others, and depending on the file system there are a lot of mount and file-system options that can reduce the overhead that comes with using a filesystem. A typical candidate is “noatime”, which disables the updating of the access-time flag of your recording file. Journaling file systems (ext3, ext4, JFS, reiser, XFS…) also have barrier flags, which you can use to disable syncing of the file-system journal (which implies that you’d lose data in case of power loss).
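
For example (the mount point is hypothetical, and the barrier option name differs between filesystems: barrier=0 on ext3/ext4, nobarrier on XFS):

mount -o remount,noatime /mnt/raid
mount -o remount,noatime,barrier=0 /mnt/raid   # ext3/ext4, also disabling journal barriers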

I know it’s your time, but I’d actually be very interested to hear if
you could successfully use uhd_rx_cfile after making the above kernel
adjustments, or if disabling journal syncing helps, if that wasn’t
enough.

Greetings,
Marcus

As a heads up, I was able to get everything working using only GNURadio Companion. I tied the USRP Source into a Stream-to-Vector block and created a 20 MS buffer. From there I sent the 20 MS vector to the File Meta Sink.
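
In Python terms, the graph now looks roughly like this (a sketch of what GRC generates, not the literal code; rate, device arguments, and the output path are placeholders):

from gnuradio import gr, blocks, uhd

VEC_LEN = 20 * 1000 * 1000  # 20 MS per vector item

tb = gr.top_block()
src = uhd.usrp_source("", uhd.stream_args(cpu_format="sc16", channels=[0]))
src.set_samp_rate(100e6)
# Pack the stream into huge vector items; buffer sizes scale with item size,
# so this effectively gives the downstream connection a very deep buffer
# (~80 MB per item at 4 bytes per sample).
s2v = blocks.stream_to_vector(2 * gr.sizeof_short, VEC_LEN)
snk = blocks.file_meta_sink(2 * gr.sizeof_short * VEC_LEN, "/mnt/raid/capture.dat",
                            100e6, 1, blocks.GR_FILE_SHORT, True)
tb.connect(src, s2v, snk)
tb.run()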

This adds some latency, but for my application this is perfectly OK. I have tested this running for about half an hour or so without a single dropout (this is about all the data I would want to sample at any given time).

Also, it should be noted that the following did not work:

  1. Using only the driver. uhd_rx_cfile created the overflows at the exact
    same rate as my GNURadio application.
  2. Giving my GNURadio application real-time priority. When I say real
    time, I’m using a priority of 99 with SCHED_FIFO in Linux (e.g., via
    chrt, as shown below). It should also be noted that the driver spawns
    an niDriverThread process that runs with standard user-level priority.
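
For reference, this is the kind of invocation I mean (the script name is a placeholder; chrt -f selects SCHED_FIFO):

sudo chrt -f 99 python my_flowgraph.py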

I will post any updates if the problem comes back for whatever reason. But for now, buffering via the Stream-to-Vector block seems to have fixed the issue.