USRP2 eth_buffer

Hi,

I have been trying to get 25 MHz to disk with USRP2. I am using the
C++ interface and a five-disk software RAID 0 that can do about 150
MB/s. I can easily run at 25 MHz with a simple nop_handler that only
checks for overruns and timestamp continuity, but when I write to
disk, I can barely do 10 MHz for longer than 30 s without overruns. I
have tried just about every filesystem, with the same result every
time.
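
(For scale: assuming 4 bytes per complex sample, 25 MHz works out to
roughly 100 MB/s of sustained writes, uncomfortably close to what the
array can do.)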

The reason seems to be a lack of buffering. I have gone down the
easiest path of increasing eth_buffer from approximately 25 MB to 500
MB. I know people will flame me for using this much kernel memory, but
it seems to work fairly reliably (I have already been saving 25 MHz to
a five-disk software RAID for an hour without problems).

I think there should be a user-configurable option, similar to
fusb_blocksize and fusb_nblocks on usrp1, which defines the
eth_buffer size. I am willing to write a patch if somebody is
interested, but I don’t fully understand why there is a MAX_SLAB_SIZE
in eth_buffer.cc:

// Calculate number of blocks
req.tp_block_nr = std::min((int)(MAX_SLAB_SIZE/sizeof(void*)),
                           (int)(d_buflen/req.tp_block_size));

Why is there a (int)(MAX_SLAB_SIZE/sizeof(void*)) limit?

juha

On Wed, Apr 22, 2009 at 1:48 PM, Juha V. [email protected]
wrote:

I have been trying to get 25 MHz to disk with USRP2. I am using the
C++ interface and a five-disk software RAID 0 that can do about 150
MB/s. I can easily run at 25 MHz with a simple nop_handler that only
checks for overruns and timestamp continuity, but when I write to
disk, I can barely do 10 MHz for longer than 30 s without overruns. I
have tried just about every filesystem, with the same result every
time.

Try setting your application to run using real-time scheduling
priority. This is done in C++ via a call to:

gr_enable_realtime_scheduling()

or from Python:

gr.enable_realtime_scheduling()

Check the return value to ensure that it worked; it should equal
gruel::RT_OK in C++ or gr.RT_OK in Python.

You must have permission to do this, either by virtue of running as
root, or by allowing your user/group to do so by adding a line to
/etc/security/limits.conf:

@usrp - rtprio 50

Then add your username to the ‘usrp’ group (creating it first if it
doesn’t already exist).
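
For example (exact commands vary by distribution):

sudo groupadd usrp
sudo usermod -aG usrp your_username

Log out and back in for the new group membership to take effect.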

Why is there a (int)(MAX_SLAB_SIZE/sizeof(void*)) limit?

We use the Linux kernel packet ring method of receiving packets from
sockets. This is a speed optimized method that maps memory in such a
way that the kernel sees it as kernel memory and the user process sees
it as its own memory, so there is no copying from kernel to user
space. It also lets us receive multiple packets with one system call.
(At full rate, we process about 50 packets per system call.)
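
For the curious, the setup looks roughly like this (a bare sketch of
the PACKET_RX_RING socket option, not the actual eth_buffer.cc code;
sizes are arbitrary and error handling is omitted):

#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>
#include <cstring>

int open_rx_ring()
{
    // Raw packet socket that sees every ethernet frame.
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

    // Describe the ring: tp_block_nr blocks, each tp_block_size bytes,
    // subdivided into tp_frame_size slots, one received packet per slot.
    struct tpacket_req req;
    std::memset(&req, 0, sizeof(req));
    req.tp_block_size = 4096;
    req.tp_frame_size = 2048;
    req.tp_block_nr   = 64;
    req.tp_frame_nr   = (req.tp_block_size / req.tp_frame_size) * req.tp_block_nr;
    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    // Map the ring into the process; the kernel writes packets straight
    // into this region, so reading them needs no copy and no syscall.
    mmap(0, (size_t)req.tp_block_size * req.tp_block_nr,
         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    return fd;
}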

The kernel maintains a ring of pointers to pending packets, and these
ring descriptors must be stored in one kernel memory region. These
regions are at most MAX_SLAB_SIZE bytes, and each descriptor is
sizeof(void*). So the tp_block_nr calculation divides the buffer
length by the block size to get the number of blocks, and if that is
more than the descriptors that fit in MAX_SLAB_SIZE, it reduces it to
the limit MAX_SLAB_SIZE imposes.
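
To put rough numbers on it (illustrative only): with a 128 KB
MAX_SLAB_SIZE and 8-byte pointers, the cap is 131072/8 = 16384 blocks,
so with 4 KB blocks the ring tops out around 64 MB no matter how large
a buffer you ask for.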

So you probably aren’t using all 500 MB of that memory. You can
uncomment the debug printf in that part of the code to see the number
of blocks actually allocated.

What tends to happen if you aren’t running your user process as RTPRIO
is that the libusrp2 driver grabs the packets from the kernel okay,
but your flowgraph doesn’t read them from the driver fast enough, and
you get backed up into an overflow.

Johnathan

Try setting your application to run using real-time scheduling
priority. This is done in C++ via a call to:

gr_enable_realtime_scheduling()

I am using this.

sizeof(void*). So the tp_block_nr calculation divides the buffer
length by the block size to get the number of blocks, and if that is
more than the descriptors that fit in MAX_SLAB_SIZE, it reduces it to
the limit MAX_SLAB_SIZE imposes.

Doesn’t this apply only to kernels older than 2.6.5 and 2.4.26? At
least that is what Documentation/networking/packet_mmap.txt says.

BTW, shouldn’t MAX_SLAB_SIZE be 131072 (2^17) instead of 131702? The
digits look transposed.

So you probably aren’t using all 500 MB of that memory. You can
uncomment the debug printf in that part of the code to see the number
of blocks actually allocated.

I think I’m using it all. I removed the MAX_SLAB_SIZE constraint and
it still works in mmapped mode. The setsockopt still succeeds and the
data looks OK.

What tends to happen if you aren’t running your user process as RTPRIO
is that the libusrp2 driver grabs the packets from the kernel okay,
but your flowgraph doesn’t read them from the driver fast enough, and
you get backed up into an overflow.

This is exactly the problem. On average the disk bandwidth is more
than enough, but there are fairly large “hiccups” that cause the
buffer to overrun. I could try to write my own buffer, but that would
add one extra memory copy; I’d prefer a large kernel-space buffer to a
large user-space buffer.

juha

On Wed, Apr 22, 2009 at 11:06:19PM +0000, Juha V. wrote:

Try setting your application to run using real-time scheduling
priority. This is done in C++ via a call to:

gr_enable_realtime_scheduling()

I am using this.

Juha,

What kind of filesystem are you using? I’ve never been able to stream
reliably to disk using the ext3 filesystem. I think it chokes when
posting its journal. I have been successful when mounting an ext3
filesystem as an ext2 filesystem.
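
For example (device and mount point are placeholders):

sudo mount -t ext2 /dev/md0 /mnt/data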

I’ve got a RAID 10 system using 6 drives plus 1 hot spare.

Eric

The ext filesystem is the go; with my high-speed digitizer I stream 250
MB/s (that’s bytes) to a six-disk RAID 0 array. RAID 0 is the go if
you can afford to lose data in the unlikely event of a disk failure.

I’d guess that your high-speed digitizer has a buffer that is larger
than 25 MB too. Do you know what the buffer size is for your sampler?

I did a simple benchmark of filesystem bandwidth with xfs and ext2.

xfs:
j@liang:/data0$ sudo time dd if=/dev/zero of=tmp.bin bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 68.1847 s, 154 MB/s

ext2:
j@liang:/data0$ sudo time dd if=/dev/zero of=tmp.bin bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 68.1712 s, 154 MB/s

Both give approximately the same bandwidth. I totally agree that ext2
might have less variability in I/O bandwidth, but at the same time, I
don’t really think there is that large a difference between decent
modern filesystems in terms of long-term average bandwidth when
writing large files to disk. Large distributed filesystems are a
different issue, and I’d guess that XFS and IBM’s GPFS are good for
those uses.

I now took the time to reformat the disk to ext2 and tried to write 25
MHz to disk with the vanilla eth_buffer. It also gave an overrun
after a few seconds. This might be because I am chopping the data into
100 MB files, but that is a necessity: I cannot have 24 hours of 25
MHz data in one large file.

I have suggested a modification to the usrp2 API that would allow
increasing the packet ring buffer; why is that not a good idea? Isn’t
it a good idea to add a feature that allows people to reliably sample
and store to disk at high bandwidth, even with more jittery
filesystems? I think nobody is using a pre-2.6.5 kernel, so there
shouldn’t really be any reason to restrict the size to the number of
pointers that fit into one kernel slab.

I’ll write a patch anyway and send it to the list.

BR,
juha

I have attached a patch to allow users to define the ethernet packet
ring size. I removed the MAX_SLAB_SIZE restriction; I think GNU Radio
needs a fairly new (>2.6.5) kernel anyway.

Why is this needed? I challenge anyone to sample at 25 MHz
continuously for two hours without overruns or missing packets to a
five-disk RAID array (be it ext2 or anything else) with the default 25
MB buffer.
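
(At roughly 100 MB/s that is around 720 GB over two hours, and a 25 MB
buffer absorbs only about a quarter of a second of hiccup at that
rate.)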

Still, the patch maintains the original 25e6 buffer size. A value of
250e6 to 500e6 allows fairly reliable sampling to disk at 25 MHz, so I
recommend increasing the default buffer size to something higher than
25 MB. Otherwise new users will have problems with overruns. Even
Firefox consumes hundreds of megabytes.

juha


On Thu, Apr 23, 2009 at 01:15:56PM +0300, Juha V. wrote:

250e6 to 500e6 allows fairly reliable sampling to disk at 25 MHz, so I
recommend increasing the default buffer size to something higher than
25 MB. Otherwise new users will have problems with overruns. Even
Firefox consumes hundreds of megabytes.

juha

Juha, thanks for the patch!

Eric

On Thu, Apr 23, 2009 at 9:40 PM, Eric B. [email protected] wrote:

Juha, thanks for the patch!

This has been applied to the trunk at revision 11000.

Johnathan

Hi all

The ext filesystem is the go; with my high-speed digitizer I stream 250
MB/s (that’s bytes) to a six-disk RAID 0 array. RAID 0 is the go if
you can afford to lose data in the unlikely event of a disk failure.

Bruce
