Phase measurement with Ettus Research N210

Hello,

I’m using an Ettus Research N210 with an LFRX daughter-board to measure the
phase of signals referred to a 10 MHz clock.

To start, I want to characterize the phase noise of the device, so I feed the
same 10 MHz signal to both the RX channel and the frequency reference input. I
configured the N210 for 200 kHz sampling and a carrier frequency of 10 MHz.

When I look at the data I obtain, I see a constant phase drift
corresponding to a 9.32 mHz frequency difference between the signal I
send to the RX channel and the frequency at which the N210 does the
demodulation.

Given that the signal and the clock are derived from the same oscillator
(in this simple case they are the exact same signal), where does this
difference come from? How can I get rid of it?

I imagine it comes from the fact that the ADC sampling frequency is not
an exact multiple of the signal frequency, but I haven’t found details
on how the ADC sampling frequency is generated, so I have no idea how to make
it an exact multiple of the signal frequency.

Thanks. Cheers,
Daniele

On 06/17/2014 04:56 PM, Daniele N. wrote:

I’ll try to see if this makes a difference. The minimum sampling rate I
can program is ~193 kHz (it is a strange fraction that I cannot check
right now).

Minimum sample rate = 100e6/512

The USRP devices do strictly-integer decimation in the FPGA.
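
A quick sketch of what that constraint means for the rates you can get
(rounding to the nearest integer decimation here is just an illustration;
UHD’s own rate coercion may choose differently):

master = 100e6
requested = 200e3
decim = round(master / requested)   # 500, an exact integer: 200 kHz is achievable as-is
print(master / decim)               # 200000.0
print(master / 512)                 # 195312.5 -> the lowest possible rate, ~195 kHz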

The master clock on the N2xx series is derived from a 10MHz source
(on-board 10MHz VCTCXO, or external, or internal GPSDO), feeding an
AD9510 PLL clock generator, which in turn controls a 100MHz VFO,
implemented with a 100MHz VCTCXO; both clocks are in the 2.5 PPM category.


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium

I did this exact experiment about a year ago. It’s caused by the resolution of
the phase accumulator in the DDC.

Hello Daniele,

I’ve included the USRP users mailing list [1], since this is not related
to GNU Radio but to the USRP.

The N210 has a fixed master clock rate of 100MHz, generated from the
10MHz reference by using PLL controlled clock multipliers.
The ADC always samples at 100MHz complex, then passes this 100MS/s
signal to the FPGA, which then shifts it (if you use an RF frequency
that cannot be synthesized by the daughterboard in use exactly)
digitally by multiplying it with a complex sine, lowpasses it to fulfill
nyquist for your desired sampling rate, and then decimates it. The
sample stream at your desired rate is then passed on via gigabit
ethernet.
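
In numpy terms, a rough sketch of that chain (just the idea; the real FPGA
path is fixed point, with a CORDIC NCO and CIC/halfband filters rather than a
floating-point mixer):

import numpy as np

fs, f_lo, decim = 100e6, 10e6, 500    # ADC rate, digital LO, 100 MHz -> 200 kHz
n = np.arange(100_000)
x = np.cos(2*np.pi*f_lo*n/fs)         # stand-in for the real ADC samples

bb  = x * np.exp(-2j*np.pi*f_lo*n/fs)                # 1) shift with a complex sine
lp  = np.convolve(bb, np.ones(decim)/decim, 'same')  # 2) crude lowpass
out = lp[::decim]                                    # 3) decimate to 200 kS/s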

First of all, let’s get a relative error estimate: 9.32e-3/10e6 is about
1ppb error, which is fantastically low from my point of view; this might
as well be caused by numerical accuracy in the FPGA, e.g. when shifting
the signal or decimating it; this is all fixed point arithmetic!

Then, your 200kHz sampling rate is an odd fraction of 100MHz; try
250kHz, to get nicer low pass filtering (I always thought 250kHz was the
minimum usable sampling rate).
Also, how long did you observe your phase drift? To estimate a relative
error of 1e-9 reliably, you’ll need a lot of samples (remember: you
always have quantization noise in digital systems, so even given perfect
analog signals and analog components at 0K temperature, you don’t get
infinite SNR).

Hope that was a little helpful!

Greetings,
Marcus M.

[1] subscribe via
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com

Hi,

To start, I want to characterize the phase noise of the device, so I feed the
same 10 MHz signal to both the RX channel and the frequency reference input. I
configured the N210 for 200 kHz sampling and a carrier frequency of 10 MHz.

The LFRX doesn’t have a tuner, so if you set a carrier freq of 10 MHz
the frequency shift is done by the FPGA via CORDIC and you’ll have
numerical errors in there. You just can’t get rid of them.

Cheers,

Sylvain

Hello Marcus,

thanks for your detailed response. Some comments and further questions:

On 17/06/2014 22:04, Marcus M. wrote:

The N210 has a fixed master clock rate of 100MHz, generated from the
10MHz reference by using PLL controlled clock multipliers.
The ADC always samples at 100MHz complex,

What do you mean by complex in this context? ADCs sample voltages, which
are a real quantity…

then passes this 100MS/s
signal to the FPGA, which then shifts it (if you use an RF frequency
that cannot be synthesized by the daughterboard in use exactly)
digitally by multiplying it with a complex sine, lowpasses it to fulfill
nyquist for your desired sampling rate, and then decimates it. The
sample stream at your desired rate is then passed on via gigabit ethernet.

Ok, I think that in the case of the LFRX daughter board the signal is
acquired as is and the demodulation is done completely in the FPGA.

First of all, let’s get a relative error estimate: 9.32e-3/10e6 is about
1ppb error, which is fantastically low from my point of view; this might
as well be caused by numerical accuracy in the FPGA, e.g. when shifting
the signal or decimating it; this is all fixed point arithmetic!

Uhm, this is not a phase accuracy error (which I could maybe agree can
be explained by numerical issues) but a frequency accuracy error: the
phase error adds systematically.

Then, your 200kHz sampling rate is an odd fraction of 100MHz; try
250kHz, to get nicer low pass filtering (I always thought 250kHz was the
minimum usable sampling rate).

I’ll try to see if this makes a difference. The minimum sampling rate I
can program is ~193 kHz (it is a strange fraction that I cannot check
right now).

Also, how long did you observe your phase drift? To estimate a relative
error of 1e-9 reliably, you’ll need a lot of samples (remember: you
always have quantization noise in digital systems, so even given perfect
analog signals and analog components at 0K temperature, you don’t get
infinite SNR).

1e-9 is the relative frequency error: the phase drift is ~58 mrad/s, which is
0.58 rad in 10 seconds, and this is very easy to measure.
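
(Quick cross-check: 2*pi*9.32e-3 ≈ 0.0586 rad/s, so the ~58 mrad/s slope and
the 9.32 mHz offset are indeed the same number.)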

Hope that was a little helpful!

It is helpful, thanks. However, I believe that the source of the problem
cannot be finite numerical accuracy.

Given your explanation I believe that the issue may come from finite
accuracy in the generation of the 100 MHz sampling rate: how is the 100
MHz clock generated exactly? If the 100 MHz clock is divided with a DDS
to be compared to the 10 MHz clock to derive the error signal for the
PLL, the finite precision of the DDS control register may explain the
small frequency error (a 32-bit DDS would introduce an effect of the right
order of magnitude, but I haven’t checked the exact number).

Cheers,
Daniele

Just some quick calculations in Python:

Exact phase increment for 10 MHz:

(10e6/100e6)*2**32
429496729.6

Closest phase increment:

np.round((10e6/100e6)*2**32)
429496730.0

Resulting frequency:

(np.round((10e6/100e6)*2**32)/2**32)*100e6
10000000.009313226

We are out by 9.3mHz!
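
For a sanity bound: the tuning resolution of a 32-bit accumulator clocked at
100 MHz is 100e6/2**32 ≈ 23.3 mHz, so any requested frequency lands at most
half a step (about 11.6 mHz) away from the exact value; the 9.3 mHz offset is
within that bound.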

Thanks for the answers.

I didn’t think that the sine wave in the FPGA was generated with an integer
phase accumulator (I don’t know much about how signal processing is done in
FPGAs). If this is the case, as I understand from Stephen’s email, now I know
where the frequency error comes from.

On the other hand, I think that computing the sine via the CORDIC method may
introduce numerical errors in the amplitude only, which would not result in a
systematic frequency error.

Cheers,
Daniele

The Verilog source for the USRP N210 is available online. You can see this in
ddc_chain.v:

wire [31:0] phase_inc;
reg  [31:0] phase;

setting_reg #(.my_addr(BASE+0)) sr_0
  (.clk(clk),.rst(rst),.strobe(set_stb),.addr(set_addr),
   .in(set_data),.out(phase_inc),.changed());

// NCO
always @(posedge clk)
  if(rst)
    phase <= 0;
  else if(~ddc_enb)
    phase <= 0;
  else
    phase <= phase + phase_inc;

// CORDIC 24-bit I/O
cordic_z24 #(.bitwidth(cwidth))
  cordic(.clock(clk), .reset(rst), .enable(ddc_enb),
         .xi(to_cordic_i),.yi(to_cordic_q),.zi(phase[31:32-zwidth]),
         .xo(i_cordic),.yo(q_cordic),.zo() );
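
A minimal Python model of that accumulator (illustrative only; the real NCO
drives the CORDIC in fixed point) shows the phase drift building up at the
rate Daniele observes:

import numpy as np

fclk = 100e6
phase_inc = int(round(10e6 / fclk * 2**32))   # 429496730, the value written to sr_0

# accumulator phase over 1 s of clock cycles, sampled every 10 ms to keep it small
n = np.arange(0, int(fclk) + 1, 1_000_000, dtype=np.int64)
nco_turns   = (n * phase_inc) / 2**32         # unwrapped NCO phase, in full turns
ideal_turns = n * (10e6 / fclk)               # where an exact 10 MHz NCO would be

drift = 2*np.pi*(nco_turns - ideal_turns)
print(drift[-1])                              # ~0.0585 rad after 1 s, i.e. the ~9.3 mHz offset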

There is a good treatment of errors in the CORDIC algorithm due to finite word
length in this paper from IEEE Transactions: The Quantization Effects of the
CORDIC Algorithm (Yu Hen Hu, Senior Member, IEEE). I reproduced the results of
section IV fairly easily a while ago. There are numerical errors in both
amplitude and phase but you are right, this is not the cause of the frequency
offset you observe.

I don’t speak Verilog but I get the general gist of the code below.

Thanks. Cheers,
Daniele

On 18/06/2014 01:25, Stephen Harrison wrote:

There is a good treatment of errors in the CORDIC algorithm due to
finite word length in this paper from IEEE transactions: The
Quantization Effects of the CORDIC Algorithm (Yu Hen Hu, Senior Member,
IEEE). I reproduced the results of section IV fairly easily a while ago.
There are numerical errors in both amplitude and phase but you are
right, this is not the cause of the frequency offset you observe.

Just for the sake of academic discussion (and without having read the paper):
any amplitude error can be seen as a phase error; however, in this specific
case I would expect those errors to have the same periodicity as the generated
sine wave. Therefore they cannot account for a frequency error. Am I wrong?

Cheers,
Daniele