Usrp2::tx_16sc calling rate

Hi all,
I think I’m experiencing buffer underruns with an instance of usrp2.
To transmit my samples I’m using the method usrp2::tx_16sc.

I have the USRP2 equipped with an RFX400 daughterboard and I’m using
an interpolation value of 8.

I found that 192 calls to usrp2::tx_16sc take 0.56 seconds, passing
a total of 7104000 complex samples. At my rate, 7104000 complex
samples are transmitted over the air in a bit less than 0.56 seconds,
so it seems I have very little time to lose between two
usrp2::tx_16sc calls.

My questions are:
.) How do I check if I had a real underrun?
.) Roughly how much time can I lose between two calls without
incurring underruns?
.) Is there a way to make tx_16sc async?

Sorry for these questions, I’m still new to the GNU Radio API.

On Tue, Dec 15, 2009 at 01:01:22AM +0100, Gaetano M. wrote:

less than 0.56 seconds, so it seems I have very little time to lose
between two usrp2::tx_16sc calls.

At interp = 8, you need to generate 12.5 MS/s, so your average rate
looks OK (7104000/0.56 ≈ 12.7 MS/s), but you’ll want to check this
over a longer window, say 10 s.
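
One minimal way to do that check, just as a sketch (the rate_meter
name and structure are mine, not part of any API; plain POSIX
gettimeofday):

#include <sys/time.h>
#include <cstdio>

// Tiny helper: call add(nsamples) after each tx_16sc call and it
// prints the achieved average rate once per reporting window.
struct rate_meter {
  double    d_t0;
  long long d_nsamples;
  double    d_window;             // reporting window in seconds

  static double now() {
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + tv.tv_usec * 1e-6;
  }

  explicit rate_meter(double window = 10.0)
    : d_t0(now()), d_nsamples(0), d_window(window) {}

  void add(long long nsamples) {
    d_nsamples += nsamples;
    double dt = now() - d_t0;
    if (dt >= d_window) {
      std::fprintf(stderr, "avg tx rate = %.3f MS/s\n",
                   d_nsamples / dt * 1e-6);
      d_t0 = now();
      d_nsamples = 0;
    }
  }
};

If the 10 s average sits at or above 12.5 MS/s and you still
underrun, the problem is jitter between calls rather than average
throughput.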

Have you enabled real time scheduling in your transmitter code?

#include <gruel/realtime.h>
#include <iostream>

gruel::rt_status_t r = gruel::enable_realtime_scheduling();
if (r != gruel::RT_OK)
  std::cerr << "Failed to enable realtime scheduling\n";

For this to work, you’ll need to be in group “usrp” and will have to add
this line to /etc/security/limits.conf:

@usrp - rtprio 50

My questions are:
.) How do I check if I had a real underrun?

Right now you can’t, unless you hook up a CMOS-to-serial-port
converter to the USRP2. (This is being fixed as part of the VRT
work.)

.) Roughly how much time can I lose between two calls without
incurring underruns?

There’s a small amount of buffering on the USRP2. It’s on the order
of a few Ethernet frames. The host pretty much can’t miss a beat if
it’s going to keep the device fed.
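
To put a rough number on that (my assumptions, not measured:
~1500-byte frames with ~1472 bytes of payload, 4 bytes per complex
sample for 16-bit I and Q, three frames of buffering):

#include <cstdio>

int main()
{
  const double sample_rate      = 12.5e6; // 100 MS/s / interp 8
  const double bytes_per_sample = 4;      // 16-bit I + 16-bit Q
  const double frame_payload    = 1472;   // payload of a 1500-byte frame
  const double frames_buffered  = 3;      // "a few ethernet frames"

  double samples_per_frame = frame_payload / bytes_per_sample; // ~368
  double slack_us = frames_buffered * samples_per_frame
                    / sample_rate * 1e6;
  std::printf("buffered slack ~= %.0f us\n", slack_us);        // ~88 us
  return 0;
}

So at 12.5 MS/s the device-side buffer buys you something on the
order of 100 microseconds, which is why any scheduling hiccup on the
host shows up as an underrun.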

.) Is there a way to make tx_16sc async?

Sure, put the caller in its own thread. Or use GNU Radio to talk to
the USRP2; that’s what we already do.
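
A minimal sketch of the own-thread approach, using boost::thread
(which GNU Radio already depends on); send_all_batches is a
placeholder for your tx_16sc loop, not an existing function:

#include <boost/thread.hpp>

// Placeholder for the loop that pops batches from your queue and
// calls usrp2::tx_16sc on each one until there is nothing left.
void send_all_batches()
{
  // pop a batch, call tx_16sc(...), repeat
}

int main()
{
  // Run the transmit loop in its own thread, so producing samples
  // never has to wait behind a tx_16sc call.
  boost::thread tx_thread(send_all_batches);

  // ... produce samples here ...

  tx_thread.join();   // wait for the transmitter to finish
  return 0;
}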

Sorry for these questions, I’m still new to the GNU Radio API.

No problem! Good luck with it.

Eric

On Tue, Dec 15, 2009 at 8:51 PM, Eric B. [email protected] wrote:

complex samples. At my rate, 7104000 complex samples are transmitted
over the air in a bit less than 0.56 seconds, so it seems I have very
little time to lose between two usrp2::tx_16sc calls.

At interp = 8, you need to generate 12.5 MS/s, so your average rate
looks OK (7104000/0.56 ≈ 12.7 MS/s), but you’ll want to check this
over a longer window, say 10 s.

The data I was reporting was already an average over those 192-call
batches, measured across more than 20 seconds of transmission.

this line to /etc/security/limits.conf:

@usrp - rtprio 50

That was the next thing I had planned to do.

My questions are:
.) How do I check if I had a real underrun?

Right now you can’t, unless you hook up a CMOS-to-serial-port
converter to the USRP2. (This is being fixed as part of the VRT
work.)

No idea what VRT is; I will google it :D

.) Roughly how much time can I lose between two calls without
incurring underruns?

There’s a small amount of buffering on the USRP2. It’s on the order
of a few Ethernet frames. The host pretty much can’t miss a beat if
it’s going to keep the device fed.

At my rate, 2 Ethernet frames are really not that much :s

.) Is there a way to make tx_16sc async?

Sure, put the caller in its own thread. Or use GNU Radio to talk to
the USRP2; that’s what we already do.

The caller is already on its own thread. Between the sample producer
and the thread calling tx_16sc I have a double-buffered queue, which
allows only brief interaction between consumer and producer, apart
from a small window of time in which the two buffers are swapped. I
still have to replace those two queues with rings in order to
eliminate the need for heap allocations.

Actually, my caller thread main loop is something like this:

  samples *p = queue.front();   // queue is a std::list<samples*>
  // 192 calls of usrp2::tx_16sc(…) sending p's samples
  delete p;
  queue.pop_front();

I see that the average time between two of those 192 tx_16sc calls
has some spikes, and the spikes correlate with the time taken by that
delete. I hope that enabling realtime scheduling and removing the
dynamic memory allocation will solve my problem; I’ll let you know
otherwise :D
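
For reference, the ring I have in mind is roughly the following (just
a sketch: single producer / single consumer, with every buffer
preallocated up front so nothing is newed or deleted in the hot
path; the class and method names are only illustrative):

#include <boost/thread.hpp>
#include <complex>
#include <vector>

// Fixed-size ring of preallocated sample buffers.  The producer
// fills slots; the consumer (the thread calling tx_16sc) drains
// them.  All allocation happens in the constructor.
class sample_ring
{
public:
  sample_ring(size_t nslots, size_t slot_size)
    : d_slots(nslots, std::vector<std::complex<short> >(slot_size)),
      d_head(0), d_tail(0), d_count(0) {}

  // Producer side: block until a free slot exists, fill it, commit.
  std::vector<std::complex<short> > &write_slot() {
    boost::mutex::scoped_lock l(d_mutex);
    while (d_count == d_slots.size())
      d_not_full.wait(l);
    return d_slots[d_head];
  }
  void commit() {
    boost::mutex::scoped_lock l(d_mutex);
    d_head = (d_head + 1) % d_slots.size();
    d_count++;
    d_not_empty.notify_one();
  }

  // Consumer side: block until a filled slot exists, send, release.
  std::vector<std::complex<short> > &read_slot() {
    boost::mutex::scoped_lock l(d_mutex);
    while (d_count == 0)
      d_not_empty.wait(l);
    return d_slots[d_tail];
  }
  void release() {
    boost::mutex::scoped_lock l(d_mutex);
    d_tail = (d_tail + 1) % d_slots.size();
    d_count--;
    d_not_full.notify_one();
  }

private:
  std::vector<std::vector<std::complex<short> > > d_slots;
  size_t d_head, d_tail, d_count;
  boost::mutex d_mutex;
  boost::condition_variable d_not_full, d_not_empty;
};

The transmit loop then becomes read_slot(), the 192 tx_16sc calls,
release(), with no delete left between batches.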

Thank you for your explanations.

Regards
Gaetano M.