The answer is a bit more complicated, I’m afraid:
First of all, we need to distinguish latency from the time the tuning
itself takes.
Latency matters if you want the device to tune as soon as possible; it
is dominated mainly by the latency of your general-purpose OS running on
your general-purpose computer. It varies wildly, but a good estimate on
a properly configured PC might be around 5 ms; your actual time might be
different.
Now, you can /hide/ that latency by using timed commands if you know in
advance when you want your tuning process to start. Then all that’s left
is the “invalid” signal during the time the device actually tunes.
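To sketch what that looks like in practice, here’s a hedged example
using the UHD Python API. The UHD calls (set_command_time,
set_rx_freq, clear_command_time) are real API, but treat this as an
untested outline, not a drop-in recipe; the 5 ms margin is just the
latency estimate from above:

```python
# Hiding tune latency with timed commands (UHD Python API; a device is
# needed to actually run timed_tune).
try:
    import uhd
except ImportError:
    uhd = None  # no UHD installed; the pure helper below still works

def command_issue_deadline(tune_at_s, host_latency_s=5e-3):
    """Latest device time at which the host should issue the tune command
    so that the ~5 ms host/OS latency stays hidden behind the scheduled
    execution time."""
    return tune_at_s - host_latency_s

def timed_tune(usrp, freq_hz, tune_at_s, chan=0):
    # Everything issued after set_command_time() executes at the given
    # device time, so host latency no longer matters as long as the
    # command arrives before command_issue_deadline(tune_at_s).
    usrp.set_command_time(uhd.types.TimeSpec(tune_at_s))
    usrp.set_rx_freq(uhd.types.TuneRequest(freq_hz), chan)
    usrp.clear_command_time()
```

The point is only the pattern: schedule the tune in the future, issue it
early, and the OS jitter disappears from the equation.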
USRPs have two-stage tuning: there’s the configurable synthesizer that
generates the LO for mixing down (and up), and there’s the digital
frequency shifter.
The digital tuning happens /almost/ instantly. Basically, the moment the
right register is set, the CORDIC uses a different phase increment; that
can happen at every cycle of the master clock (i.e., 200 MHz by default
on the X300).
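If the phase-increment mechanics are unfamiliar, here’s a small
illustration (the function names are mine; only the 200 MHz master
clock figure comes from the discussion above):

```python
import math
import cmath

MASTER_CLOCK_HZ = 200e6  # X300 default master clock rate

def phase_increment(shift_hz, clock_hz=MASTER_CLOCK_HZ):
    """Per-clock-cycle phase step (radians) the CORDIC/NCO advances by
    to realize a given frequency shift."""
    return 2 * math.pi * shift_hz / clock_hz

# A 25 MHz shift at a 200 MHz clock is pi/4 of phase per clock cycle:
print(phase_increment(25e6))  # ~0.7854 rad, i.e. pi/4

def digital_shift(samples, shift_hz, clock_hz=MASTER_CLOCK_HZ):
    """What the frequency shifter does, in software: multiply each
    sample by a phasor that rotates by one phase increment per sample."""
    inc = phase_increment(shift_hz, clock_hz)
    return [s * cmath.exp(1j * inc * n) for n, s in enumerate(samples)]
```

Changing the shift frequency just means writing a new increment value,
which is why the retune takes effect on the very next clock cycle.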
Then, there’s the analog part. That’s where things get tricky, because
a) we now have to take into account the time it takes to tell the LO
synthesizer what we want to do, b) the time it takes for the synthesizer
to become stable, and c) there might be DC offset filters that distort
your signal.
a) communication time
This depends on the synthesizers used. The UBX uses MAX2871
synthesizers. These are SPI chips; off the top of my head I don’t know
at what rate the bus is clocked, but let’s assume it’s “sufficiently
fast” (>>1 Mb/s). So let’s subsume that with 20 µs max.
Now a specialty of the UBX: if the LO synthesizer you want to tune is
turned off, it must be started first. Typically, you’d only see that at
initialization, but since the UBX spans so much spectrum, separate clock
lines have been introduced, and hence a separate synthesizer each for
frequencies below and above 500 MHz. So when crossing that boundary,
assume at least 20 ms of additional startup time.
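For a feel of where an estimate like “20 µs” comes from, here’s the
back-of-the-envelope arithmetic. The MAX2871 has six 32-bit registers
(per its datasheet); the bus clock rate is a pure assumption:

```python
def spi_write_time_us(n_regs=6, bits_per_reg=32, clock_hz=10e6):
    """Time to clock a full register set out over SPI, ignoring
    chip-select and setup overhead."""
    return n_regs * bits_per_reg / clock_hz * 1e6

# A full 6-register write at an assumed 10 Mb/s bus:
print(spi_write_time_us())              # 19.2 us -> the ~20 us estimate
# If the bus were only 1 Mb/s, it'd be ten times that:
print(spi_write_time_us(clock_hz=1e6))  # 192 us
```

Either way, the communication time is small compared to the analog
settling discussed next.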
b) analog/real-world hardware stuff
Things get tricky here, and I can’t actually give you a definitive
answer. Tuning speed depends on too many factors, some of which aren’t
even controllable. Looking through datasheets and at the loop filter
components, I’d say a locking time of up to 100 µs might be realistic.
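To see why numbers in that range come out, here’s a toy first-order
settling model. The loop bandwidth, damping factor, and error values
below are pure assumptions for illustration, not UBX specifics:

```python
import math

def settle_time_s(initial_error_hz, tolerance_hz, loop_bw_hz, damping=0.7):
    """Crude first-order PLL settling estimate: assume the frequency
    error decays like exp(-2*pi*loop_bw*damping*t), and solve for the
    time at which it drops below the accepted tolerance."""
    return (math.log(initial_error_hz / tolerance_hz)
            / (2 * math.pi * loop_bw_hz * damping))

# Assumed: 50 kHz loop bandwidth, 10 MHz initial error, 1 kHz residual:
print(settle_time_s(10e6, 1e3, 50e3))  # ~4.2e-5 s, i.e. ~42 us
```

Note how the answer depends on the tolerance you accept, which is
exactly the “up to you as the user” point below.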
However, that’s a bit up to you as the user; the less accuracy you need,
the shorter you can wait for the LO to successfully lock. After all,
“locked” is just the notion that PLL oscillations have dropped below a
certain threshold.
c) DC offset filtering
To eliminate the DC offset that the ADC might measure, there’s a DC
offset filter. It’s an IIR whose sole accumulator gets reset at every
tune. Depending on how far the real DC offset is from 0, the IIR’s step
response might or might not be visible for up to 40 ms, if I remember
correctly. You can disable DC offset removal with
multi_usrp::set_rx_dc_offset(false); DC offset removal is then up to
you, which might or might not be a problem for your application.
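To see how a step response can stay visible that long, here’s a sketch
with a generic single-pole DC estimator. The filter coefficient and
sample rate are assumptions of mine, not UHD’s actual values:

```python
import math

def iir_settle_samples(alpha, residual=0.01):
    """Samples until a single-pole DC estimator
       y[n] = y[n-1] + alpha * (x[n] - y[n-1])
    has tracked a step to within `residual` (fraction of the step)."""
    return math.ceil(math.log(residual) / math.log(1 - alpha))

# A very narrow filter (assumed alpha = 1e-4) at an assumed 1 Msps:
n = iir_settle_samples(1e-4)
print(n, "samples ->", n / 1e6 * 1e3, "ms")  # ~46050 samples -> ~46 ms
```

A narrow filter rejects almost no signal bandwidth, but the price is a
step response on the order of tens of milliseconds after every
accumulator reset, which matches the ~40 ms figure above in order of
magnitude.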