Latency time

Hi list,
I am a novice developer with GNU Radio and UHD (USRP N210).
Where can I find the typical latency between the host computer and the
USRP?
Thanks

On 02/12/2013 07:04 PM, Gonzalo Flores De La Parra wrote:

Hi list,
I am a novice developer with GNU Radio and UHD (USRP N210).
Where can I find the typical latency between the host computer and the
USRP?
Thanks

Useful plots and descriptions on this page:
http://code.ettus.com/redmine/ettus/projects/public/wiki/Latency

Cheers!
-josh

http://code.ettus.com/redmine/ettus/projects/public/wiki/Latency

This was very interesting. A few questions:

Is “samples per block” the same as samples per buffer?

How is the old:

dev_addr["recv_buff_size"] = "20000";
uhd::usrp::multi_usrp::make(dev_addr);

related to spb and spp?

If I have a dual-boot machine with two different Ubuntu versions and I
tweak the NIC parameters from one boot, will it change the other as
well?

Where do I find the code for “responder”?
What does it do?

I can’t read the results for the N210: too many lines, and I can’t see
which is which. Are they reproduced somewhere else?

Thank you,
Per Z.

On 02/13/2013 01:36 AM, Per Z. wrote:

http://code.ettus.com/redmine/ettus/projects/public/wiki/Latency

This was very interesting. A few questions:

Is “samples per block” the same as samples per buffer?

How is the old:

dev_addr["recv_buff_size"] = "20000";
uhd::usrp::multi_usrp::make(dev_addr);

related to spb and spp?

Many of the examples have a samples-per-buffer (spb) parameter, which
affects the size of the client app’s buffers for samples; it’s not
really a concept in UHD. Basically, this size is independent of how
packets get fragmented on the wire, because UHD deals with the
fragmentation on both the TX and RX side to fill or read from the
user’s buffer. For example, the samples per buffer in GNU Radio changes
size from work call to work call.
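As a rough sketch of that (my own example, not from the wiki page;
assuming a single N210 at the usual addr=192.168.10.2 and the fc32 host
format), the application picks whatever buffer size it likes and recv()
fills it regardless of how the samples were packetized on the wire:

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <vector>

int main()
{
    uhd::device_addr_t dev_addr("addr=192.168.10.2"); // illustrative address
    uhd::usrp::multi_usrp::sptr usrp = uhd::usrp::multi_usrp::make(dev_addr);
    usrp->set_rx_rate(1e6);

    uhd::stream_args_t stream_args("fc32"); // complex floats on the host
    uhd::rx_streamer::sptr rx_stream = usrp->get_rx_stream(stream_args);

    // spb is purely the application's choice; it need not match the packet
    // size, because recv() reassembles fragments to fill this buffer.
    const size_t spb = 4096;
    std::vector<std::complex<float> > buff(spb);
    uhd::rx_metadata_t md;

    usrp->issue_stream_cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    const size_t num_rx = rx_stream->recv(&buff.front(), buff.size(), md, 3.0);
    // ... process num_rx samples ...
    (void)num_rx;
    usrp->issue_stream_cmd(uhd::stream_cmd_t::STREAM_MODE_STOP_CONTINUOUS);
    return 0;
}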

Samples per packet (spp) affects the slicing of the packet framer on
the RX chain. By default, it is set so that each packet is MTU-sized.
So spp can be used to shrink the packet size, mostly for latency
reasons.
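For example (continuing the sketch above; the value 64 is only an
illustration), spp can be requested per streamer through the stream
args:

uhd::stream_args_t stream_args("fc32");
stream_args.args["spp"] = "64"; // ask the RX framer for small packets
uhd::rx_streamer::sptr rx_stream = usrp->get_rx_stream(stream_args);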

Setting a small recv_frame_size will also work for this purpose, since
this frame size basically sets the MTU by sizing the internal buffers
used with the socket. Therefore, when the frame size is decreased, the
spp will automatically be smaller as well to fit into the MTU. You can
think of spp as a more fine-grained MTU setting, because it can be set
per streamer rather than once at device init time.
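The frame sizes, on the other hand, are device arguments, so they go
into the device address at init time, in the same style as the
recv_buff_size line quoted above (1472 here is just an illustrative
value, roughly a standard Ethernet payload):

uhd::device_addr_t dev_addr("addr=192.168.10.2");
dev_addr["recv_frame_size"] = "1472"; // bytes per frame, effectively the RX MTU
uhd::usrp::multi_usrp::sptr usrp = uhd::usrp::multi_usrp::make(dev_addr);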

The recv_buff_size is unrelated; it’s just the amount of total
buffering the socket can offer.

I will have to leave the other questions to Balint.

-josh

Thanks, I hadn’t noticed that; I always use the WX GUI plots and things
like that. I’ll give it a shot first thing in the morning (it’s about
10 pm in my hometown) and let you know the results.

2013/2/14 Balint S. [email protected]

Hi Balint, I finally made the latency graph on my laptop. I work with a
VAIO i5, Ubuntu 11.04 and a USRP N210, and I attached the graph of a
quick test so you can help me with a few doubts: what do you mean by
“success of on-time burst transmissions”? I guess it’s the on-time
round-trip packets or bursts; the delay gets bigger with every
iteration, right? Is that why, after a while, they all go to 1?
I hope you can help me by describing a little the parameters SPB and
SPP, and how they affect the latency.
Thanks in advance; I’ll be waiting for your answer before running
bigger tests with a more meaningful graph. Maybe you could suggest some
kind of test that would provide more significant results.

Greetings