USRP Dynamic Range and 8 Bit Problem

Hi,

  1. Using a high-accuracy function generator and a USRP with a Basic RX board, I
    tried to investigate the dynamic range of this board. The following behavior
    was observed (0 dB gain was used in all experiments, with a decimation of 8):

a) With a sine-wave signal at 250 kHz and 0 dBm input power to the Basic RX we
get:

Theoretical calculations:
When a 0 dBm signal is injected into a 50 Ohm load, the signal RMS is 0.225 V
and the signal peak-to-peak voltage is 0.636 V.

USRP readings:
The maximum count obtained from the USRP for this signal was 4845.

b) With a sine-wave signal at 250 kHz and +5 dBm input power to the Basic RX we
get:

Theoretical calculations:
When a +5 dBm signal is injected into a 50 Ohm load, the signal RMS is 0.4 V
and the signal peak-to-peak voltage is 1.131 V.

USRP readings:
The maximum count obtained from the USRP for this signal was 8337.

c) With a sine-wave signal at 250 kHz and +9 dBm input power to the Basic RX we
get:

Theoretical calculations:
When a +9 dBm signal is injected into a 50 Ohm load, the signal RMS is 0.64 V
and the signal peak-to-peak voltage is 1.810 V.

USRP readings:
The maximum count obtained from the USRP for this signal was 13096.

d) With a sine-wave signal at 250 kHz and +10 dBm input power to the Basic RX
we get:

Theoretical calculations:
When a +10 dBm signal is injected into a 50 Ohm load, the signal RMS is 0.71 V
and the signal peak-to-peak voltage is 2.008 V.

USRP readings:
The maximum count obtained from the USRP for this signal was 13570 (saturated).
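
For reference, the conversion behind the theoretical calculations above is
Vrms = sqrt(P*R) (P in watts, R = 50 Ohm) and Vpp = 2*sqrt(2)*Vrms. A small
Python sketch (not part of the measurements; the slight differences from the
figures above come from rounding Vrms) reproduces the four cases:

import math

def dbm_to_vpp(p_dbm, r_ohms=50.0):
    # peak-to-peak voltage of a sine wave delivering p_dbm into r_ohms
    p_watts = 1e-3 * 10 ** (p_dbm / 10.0)
    v_rms = math.sqrt(p_watts * r_ohms)
    return 2 * math.sqrt(2) * v_rms

for p_dbm in (0, 5, 9, 10):
    print(p_dbm, "dBm ->", round(dbm_to_vpp(p_dbm), 2), "V P-P")
# prints roughly 0.63, 1.12, 1.78 and 2.0 V P-P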

Discussion:
From the above results we see that the Basic RX saturates when the input signal
power is +10 dBm. This is logical, since the maximum input signal to the USRP
ADC (AD9862) is 2 V peak-to-peak (according to its data sheet).

But what is not logical is the value obtained from the USRP FPGA. Since the
incoming data is a 16-bit signed number (for I and Q), we should get readings
of +/-32767 for the maximum input signal of 2 V peak-to-peak. Instead, we have
a maximum reading of 13570. I don’t know how this number was generated, but it
should be scaled to make use of the full dynamic range offered by the 16-bit
data transfer.

  2. I think there is a problem in the 8-bit data transfer. As we know, if we
    have a 12-bit number (for example) and we truncate it to an 8-bit number,
    we should take the 8 most significant bits and discard the remaining 4
    least significant bits.

This is not happening in the 8-bit data transfer. When the input signal power
is low, we get weird waveforms, as shown in the following snapshots.

http://rapidshare.com/files/78343220/8-bit-samples-scope.tar.gz

As shown, unless the input signal power is high (9 dBm was the best), we get a
very distorted signal when using the 8-bit data transfer.
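
For reference, the MSB truncation described in point 2 amounts to an arithmetic
right shift by 4 bits. A minimal Python sketch of the intended behaviour
(illustrative only, not the FPGA code):

def truncate_12_to_8(sample):
    # keep the 8 most significant bits of a signed 12-bit sample;
    # Python's >> is an arithmetic shift, so the sign is preserved
    return sample >> 4

print(truncate_12_to_8(2047), truncate_12_to_8(-2048))   # 127 -128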

Note:
I’m using the latest trunk USRP rbf file (the one in which the 8-bit sample
transfer was restored).

  3. FYI, the maximum measured SFDR for the Basic RX was about 55 dB (obtained
    with an input signal of +4 dBm and a gain of 0 dB).

  4. FYI, the maximum measured SFDR for the DBSRX was about 40 dB (obtained
    with an input signal of -20 dBm and a gain of 26 dB).

I hope we can use these experiments to enhance our magnificent USRP.

Best Regards,

Firas A.



On Sat, Dec 22, 2007 at 09:48:53AM -0800, Firas A. wrote:

  2. I think there is a problem in the 8-bit data transfer. As we know, if we
    have a 12-bit number (for example) and we truncate it to an 8-bit number,
    we should take the 8 most significant bits and discard the remaining 4
    least significant bits.

I believe that 8-bit mode is still disabled and/or broken in the FPGA.
See ticket:197.

Eric

Hi Firas,

[several examples snipped]

we should get readings of +/-32767 for the maximum input signal of 2 V
peak-to-peak. Instead, we have a maximum reading of 13570. I don’t know how
this number was generated, but it should be scaled to make use of the full
dynamic range offered by the 16-bit data transfer.

The USRP takes the 12-bit sample (peak value of 2047), shifts it left 3 bits
(peak value of 16376), then runs it through a CORDIC stage that multiplies it
by approximately 1.647/2 (peak value of 13485). This is very close to what you
see. For decimation by values that are not powers of 2, there is a further
decrease in signal strength associated with the CIC decimator.
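
To make the arithmetic explicit, here is a small Python sketch of that scaling
chain (1.6468 is the usual theoretical CORDIC gain; the exact constant in the
FPGA may differ slightly):

adc_peak = 2047                  # 12-bit signed full scale
shifted = adc_peak << 3          # left shift by 3 bits in adc_interface.v
cordic_gain = 1.6468             # CORDIC gain; the output is then divided by 2
print(shifted)                           # 16376
print(round(shifted * cordic_gain / 2))  # ~13484, close to the observed 13570
print(round(shifted * cordic_gain))      # ~26968, the peak without the divide by 2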

I think we could quickly double the processing gain (and gain 6 dB S/N and
dynamic range at higher decimation rates) by either shifting by 4 bits before
the CORDIC stage or eliminating the divide by 2 at the end of the CORDIC stage.
The maximum processing gain would then be 8*1.647 = 13.176 (currently it is
half that). We could also double the gain in the CIC decimator for decimations
where the current gain is less than 1/1.647.

– Don W.

Hi Don

Good analysis.

Don W. [email protected] wrote:

I think we could quickly double the processing gain (and gain 6 dB
S/N and dynamic range at higher decimation rates) by either shifting
by 4 bits before the CORDIC stage or eliminating the divide by 2 at
the end of the CORDIC stage.

Can you modify the FPGA quickly and send me the RBF file? I want to make use of
the measuring equipment I currently have to test the modifications before I
have to return it.

Regards,

Firas A.

Hi Firas,

I think we could quickly double the processing gain (and gain 6 dB S/N and
dynamic range at higher decimation rates) by either shifting by 4 bits before
the CORDIC stage or eliminating the divide by 2 at the end of the CORDIC stage.

Can you modify the FPGA quickly and send me the RBF file? I want to make use of
the measuring equipment I currently have to test the modifications before I
have to return it.

It has been on my (long) list of things to do, but so far I have never built an
RBF file and don’t have the tools to do so. But I can point to the places in
the Verilog code that I would change, if that would help.

– Don W.

Hi Don,

Don W. [email protected] wrote:
I don’t have the tools to do so.

No tools are required. All you have to do is download the free Windows Altera
FPGA design software (Quartus II Web Edition) from:
http://www.altera.com/products/software/products/quartus2web/sof-quarwebmain.html

But I can point to the places in the Verilog code that I would change
if that would help.

Alternatively (in this case I think it is quicker), tell me the places in the
Verilog code to be changed, and I will modify it, recompile the rbf file, and
test it.

Regards,

Firas A.

Hi Firas,

I don’t have the tools to do so.

No tools are required. All you have to do is download the free Windows Altera
FPGA design software (Quartus II Web Edition) . . .

Yeah, that’s the part I haven’t done yet . . .

Alternatively (in this case I think it is quicker), tell me the places in the
Verilog code to be changed, and I will modify it, recompile the rbf file, and
test it.

The safest (I think) is to change the assignments at the end of cordic.v from

assign xo = x12[bitwidth:1];
assign yo = y12[bitwidth:1];

to

assign xo = x12[bitwidth-1:0];
assign yo = y12[bitwidth-1:0];

You might (or might not) get slightly better results by changing the
scaling
in adc_interface.v instead, from

adc0[11],adc0,3'b0

to

adc0,4'b0

(and similar for adc1, adc2, and adc3), but it would need to be verified
that this won’t overflow the cordic stage. Change cordic.v or
adc_interface.v, but not both.

The final tweak would be to change the scaling table in cic_dec_shifter.v from

ceil(N*log2(rate))

to

ceil( N*log2(rate) + log2(1.6467/2) )

to use some of the range lost in the CORDIC stage with some decimations.
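
A quick Python sketch of what that change does to the shift table (this assumes
a 4-stage CIC, which is what I believe the stock usrp_std build uses, and takes
1.6467/2 as the CORDIC loss, as above):

import math

N = 4  # assumed number of CIC stages

def shift_now(rate):
    return math.ceil(N * math.log2(rate))

def shift_proposed(rate):
    return math.ceil(N * math.log2(rate) + math.log2(1.6467 / 2))

for rate in range(4, 257):
    if shift_now(rate) != shift_proposed(rate):
        print(rate, shift_now(rate), "->", shift_proposed(rate))
# lists the decimation rates whose shift drops by one bit, i.e. 6 dB more gain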

Regards,

– Don W.

Dear Don W.,

  1. Modifying the FPGA file (cordic.v) as you suggested works fine. The USRP
    dynamic range was greatly enhanced, as follows (the test was done for a
    decimation rate of 8, using the Basic RX board and a sine-wave frequency
    of 250 kHz):

a) For a 9 dBm input signal, the max count was 26198 (previously 13096).

b) For 10 dBm (a saturating signal), the max count was 26830 (previously 13570).

  2. I have not tested modifying the cic_dec_shifter.v scaling table yet, but I
    will do it later.

  3. After doing more tests (checking other decimation rates), I think you (or
    I) should send this modification as a patch to patch-gnuradio.

Thank you.

Firas A.

Don W. wrote:

~1.41. If you take the max level of 26830 and multiply by 1.4 you
get 37562, which would cause an overflow.

I understand how the magnitude can be too big, but we never take the
magnitude in the FPGA; as long as I and Q are within range, shouldn’t
it be ok? The change would limit the usable input signal to complex
values with magnitude less than 2048 (at the A/D output), whereas now
the signal must have each component less than 2048. I can’t think of
a practical case where a signal would exceed the limit on the
magnitude without also exceeding the limit on the components.

The CORDIC is rotating the signal. Let’s say that the I and Q signals
are both 25000 at some point. When that gets rotated by 45 degrees, it
will become 25000*sqrt(2) for I and 0 for Q.
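
In numbers (a small illustrative Python sketch):

import math

i, q = 25000, 25000
theta = math.radians(45)
i_rot = i * math.cos(theta) + q * math.sin(theta)
q_rot = -i * math.sin(theta) + q * math.cos(theta)
print(round(i_rot), round(q_rot))   # ~35355 and 0; 35355 > 32767, so it overflows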

can be reduced by tweaking the scaling.
Quantization error should be reduced by this change, as should
quantization-induced IMD. However, in a real system, these two are
likely to already be much lower than IMD and noise from the RF front
ends, so overall I don’t think you’d see a big change. I could be
wrong, though, so please go ahead with the tests.

6.59. If we double the gain before the CORDIC, we can increase the S/N gain to
2*6.59. At lower decimations the processing gain should increase from
min( sqrt(d), 6.59 ) to min( sqrt(d), 2*6.59 ).

I don’t follow your logic here. Ideally, you want signal gain to be 1,
regardless of decimation rate. If you decimate by 256, then in the ideal
situation, the noise power will be divided by 256. That would mean the noise
voltage is divided by 16. This is equivalent to getting 4 more bits of
precision. 12 ADC bits plus 4 gives 16. Of course, we have a scale factor in
there that limits this, but in reality there are roundoff errors in there
anyway.
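
A small Python sketch of that reasoning (assuming unity signal gain and white
quantization noise, so decimating by d divides the noise power by d):

import math

for d in (8, 64, 256):
    extra_bits = 0.5 * math.log2(d)   # noise voltage drops by sqrt(d)
    print(d, extra_bits)              # 256 -> 4.0 extra bits: 12 ADC bits + 4 = 16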

On a somewhat related note, I’ve been too busy to do this, but it would
be nice if someone put together an RBF build which had only 1 TX and 1
RX channel. That would allow you to devote a lot of resources to really
taking good care of your signal. For example, you’d be able to fit
halfband filters on the TX, and you could put in multipliers to scale
the signal to the ideal ranges. The vast majority of USRP users only
use 1 channel at a time anyway, so this would be very useful. You might
even be able to do a halfband filter that works at the 4X decimation for
the 8-bit case.

Matt

Hi Matt, Don

I did the tests. We definitely have an improvement in USRP SFDR of more than
3 dB. I did the tests as follows:

  1. Test Setup:

a) Using a USRP Rev 4.5 with a Basic RX board.
b) Using a high-accuracy function generator.
c) Using a decimation rate of 8 and a gain of 0.
d) Using a single tone (SFDR is usually tested with a single tone).
e) Using two frequencies, 250 kHz and 5250 kHz. This is because I noticed a
large difference in USRP SFDR between a DDC frequency of 0 and a DDC frequency
of 5 MHz (for example).
f) Using tone power levels of +4 dBm, which gives about 1 V peak-to-peak into
the ADC (according to the AD9862 data sheet, this analog input level gives the
best THD performance), and +8 dBm, slightly below the saturating level that
produces about 2 V peak-to-peak into the ADC (according to the AD9862 data
sheet, that input level gives the best noise performance).
g) Using an Intel Core 2 Duo PC with Ubuntu 7.10.
h) Using GNU Radio 3.1.1.
i) Using usrp_fft.py.

  2. Prepared USRP FPGA Work:

a) The first FPGA rbf file (usrp_std_0.rbf) was generated by modifying
only the cordic.v file.

b) The second FPGA rbf file (usrp_std_1.rbf) was generated by modifying
the cordic.v and the cic_dec_shifter.v files.

c) The third FPGA rbf file (usrp_std_2.rbf) was generated by modifying
the adc_interface.v and the cic_dec_shifter.v files.

See the files at:
http://rapidshare.com/files/79109257/Files_Differences.tar.gz
http://rapidshare.com/files/79109346/all_fpga_rbf.tar.gz
http://rapidshare.com/files/79109391/Worked_files.tar.gz

  3. Test results:

Note 1:
In my tests I used the original rbf file std_2rxhb_2tx.rbf and the
usrp_std_1.rbf and usrp_std_2.rbf FPGA files (usrp_std_0.rbf was not used).

Note 2:
Let us assume that I have an eye-reading error of about 1-2 dB.

Note 3:
See test results at :
http://rapidshare.com/files/79109205/Tests.tar.gz

a) The rbf file generated by modifying the cordic.v and cic_dec_shifter.v files
(usrp_std_1.rbf) gave us about 7 dB more SFDR than the original FPGA file for
an input signal of 5250 kHz at +8 dBm, as shown in the graphs. I tested this
rbf file for all input signal levels (from -90 dBm to +13 dBm) and for all
decimations (8 to 256). It works fine and should be used all the time instead
of the original rbf file.

b) The rbf file generated by modifying the adc_interface.v and
cic_dec_shifter.v files (usrp_std_2.rbf) was not as good as expected.

Although it gave us about 5 dB more SFDR than the original FPGA file for an
input signal of 5250 kHz at +4 dBm, as shown in the graphs, the FPGA went crazy
when the input signal level was 8 dBm, as shown in the graphs (I think the
CORDIC overflowed). When that happened, I reduced the input signal gradually in
1 dB steps until I reached +4 dBm, and then it worked normally again. Thus,
this file works well only if the input signal is at or below +4 dBm.

I think we should send this work as a patch to gnuradio to enhance our
fantastic USRP device.

Best Regards,

Firas A.

Firas,

Thanks for doing these tests. See my comments inline.

Firas A. wrote:

d) Using single tone (The SFDR is usually tested using single tone).
e) Using two frequencies 250KHz and 5250KHz. This is because I
noticed a large difference in USRP SFDR between DDC frequency =0 and
DDC frequency = 5MHz (for example).

The tones that you see on the 250 kHz measurements are harmonics. They
could be generated in the USRP, but they could also be there on your
signal generator. Can you take a look at your sig gen on a spectrum
analyzer to check if you still see the harmonics?

  2. Prepared USRP FPGA Work:
    See the files at:
    http://rapidshare.com/files/79109257/Files_Differences.tar.gz
    http://rapidshare.com/files/79109346/all_fpga_rbf.tar.gz
    http://rapidshare.com/files/79109391/Worked_files.tar.gz

When you make the following change:

- rx_dcoffset #(`FR_ADC_OFFSET_0)
  rx_dcoffset0(.clock(clock),.enable(dco_en[0]),.reset(reset),.adc_in({adc0[11],adc0,3'b0}),.adc_out(adc0_corr),
+ rx_dcoffset #(`FR_ADC_OFFSET_0)
  rx_dcoffset0(.clock(clock),.enable(dco_en[0]),.reset(reset),.adc_in({adc0[11],adc0,4'b0}),.adc_out(adc0_corr),

You need to take into account that the input to rx_dcoffset is only 16 bits.
So in the second line you are really sending in {adc0,4'b0}, since the top
repeated sign bit will be cut off. This block takes an average of the DC offset
and subtracts it from the signal. This can cause an overflow if the signal is
close to clipping and the DC offset is nonzero. The best thing to do here would
be to make the rx_dcoffset block clip instead of wrap around. That would save a
digit here.
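
To illustrate the difference between wrapping and clipping, here is a Python
sketch of the arithmetic (illustrative only, not the rx_dcoffset Verilog):

def wrap16(x):
    # two's-complement wrap-around into the signed 16-bit range
    return ((x + 0x8000) & 0xFFFF) - 0x8000

def clip16(x):
    # saturate into the signed 16-bit range
    return max(-32768, min(32767, x))

sample, dc_estimate = 32700, -200   # near full scale, small negative DC estimate
corrected = sample - dc_estimate    # 32900, outside the 16-bit range
print(wrap16(corrected), clip16(corrected))   # -32636 vs 32767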

Also be careful with the values in cic_dec_shifter, since each one is
used at only one decimation rate. You need to test all the ones that
change.

Thus, this file works well only if the input signal is at or below +4 dBm.

I think we should send this work as a patch to gnuradio to enhance our
fantastic USRP device.

I agree that the results do look better. My concern is overflow when
both I and Q are used. If you can try all of these with max-strength
signals on both I and Q at the same time, I would be more than happy to
include the final results in the standard build of the FPGA.

Also note that the spur you see at DC is a result of using truncation
instead of proper rounding. This occurs in a bunch of places, including
the CORDIC, CIC, and halfband outputs. Truncation of 2’s comp numbers
results in a slight bias. Rather than truncating, we should use the
technique used in the following:

http://gnuradio.org/trac/browser/usrp2/trunk/fpga/sdr_lib/round.v

This was not done, in order to save space. I would really like to see a
single-channel FPGA build which added back in all of these little details, had
a TX halfband, and had wider internal datapaths to improve signal quality.
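
To see the truncation bias concretely, here is a small Python sketch comparing
plain truncation with add-half-then-truncate rounding when 4 LSBs are dropped
(it only illustrates the bias; it is not claimed to match round.v exactly, and
overflow at full scale is ignored):

vals = range(-32768, 32768)                                   # all 16-bit values
trunc_bias = sum((v >> 4) * 16 - v for v in vals) / 65536     # plain truncation
round_bias = sum(((v + 8) >> 4) * 16 - v for v in vals) / 65536
print(trunc_bias)   # -7.5: almost half an output LSB of DC offset
print(round_bias)   # 0.5: much smaller residual bias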

Thanks for doing all this investigation,
Matt

Hi Eric

Eric B. [email protected] wrote:

I believe that Matt was referring to using max-strength I & Q inputs
to the Basic Rx along with an Rx mux setting that routes both ADC
outputs into the same DDC. (The default config with the Basic Rx
feeds a constant zero into the Q DDC input).

Eric

I think we can satisfy this requirement by using the DBSRX board or any of the
RFX boards. I will start the tests again using these boards.

Regards,

Firas

On Wed, Dec 26, 2007 at 01:56:46AM -0800, Firas A. wrote:

I agree that the results do look better. My concern is overflow when
both I and Q are used. If you can try all of these with max-strength
signals on both I and Q at the same time, I would be more than happy to
include the final results in the standard build of the FPGA.

Actually, we are using I & Q (the DDC output is complex, as you know) and the
FFT used in usrp_fft.py is a complex FFT. However, if you prefer, I can repeat
all the tests using the DBSRX board, or please suggest what tests you would
like me to do.

I believe that Matt was referring to using max-strength I & Q inputs
to the Basic Rx along with an Rx mux setting that routes both ADC
outputs into the same DDC. (The default config with the Basic Rx
feeds a constant zero into the Q DDC input).

Eric

Hi Matt,

Kindly, see my comments below:

Matt E. [email protected] wrote:

Firas,

Thanks for doing these tests. See my comments inline.

Hi Matt, Don
e) Using two frequencies, 250 kHz and 5250 kHz. This is because I noticed a
large difference in USRP SFDR between a DDC frequency of 0 and a DDC frequency
of 5 MHz (for example).

You are welcome. The USRP is really great and we thank you for inventing it.

The tones that you see on the 250 kHz measurements are harmonics. They
could be generated in the USRP, but they could also be there on your
signal generator. Can you take a look at your sig gen on a spectrum
analyzer to check if you still see the harmonics?

I don’t see any harmonics from the signal generator on a spectrum analyzer.

  2. Prepared USRP FPGA Work:
    See the files at:
    http://rapidshare.com/files/79109257/Files_Differences.tar.gz
    http://rapidshare.com/files/79109346/all_fpga_rbf.tar.gz
    http://rapidshare.com/files/79109391/Worked_files.tar.gz
    When you make the following change:
- rx_dcoffset #(`FR_ADC_OFFSET_0)
  rx_dcoffset0(.clock(clock),.enable(dco_en[0]),.reset(reset),.adc_in({adc0[11],adc0,3'b0}),.adc_out(adc0_corr),
+ rx_dcoffset #(`FR_ADC_OFFSET_0)
  rx_dcoffset0(.clock(clock),.enable(dco_en[0]),.reset(reset),.adc_in({adc0[11],adc0,4'b0}),.adc_out(adc0_corr),

You need to take into account that the input to rx_dcoffset is only 16 bits.
So in the second line you are really sending in {adc0,4'b0}, since the top
repeated sign bit will be cut off. This block takes an average of the DC offset
and subtracts it from the signal. This can cause an overflow if the signal is
close to clipping and the DC offset is nonzero. The best thing to do here would
be to make the rx_dcoffset block clip instead of wrap around. That would save a
digit here.

Actually, I’m weak when it comes to working with FPGAs and Verilog. All I did
was implement Don W.’s suggestions. If you tell me where and what to change, I
can do it and test it again. I think, as you can see from the graphs (and Don
expected this also), the change in adc_interface.v was better than the work
done in cordic.v, but because of the level problem (at 8 dBm) I ignored it.

Also be careful with the values in cic_dec_shifter, since each one is
used at only one decimation rate. You need to test all the ones that
change.

I tested from decimation 8 to decimation 256. All worked fine.

Thus, this file works well only if the input signal is at or below +4 dBm.

I think we should send this work as a patch to gnuradio to enhance our
fantastic USRP device.
I agree that the results do look better. My concern is overflow when
both I and Q are used. If you can try all of these with max-strength
signals on both I and Q at the same time, I would be more than happy to
include the final results in the standard build of the FPGA.

Actually, we are using I & Q (the DDC output is complex, as you know) and the
FFT used in usrp_fft.py is a complex FFT. However, if you prefer, I can repeat
all the tests using the DBSRX board, or please suggest what tests you would
like me to do.

Thanks for doing all this investigation,
Matt

Best Regards,

Firas A.

Hi,

I believe that Matt was referring to using max-strength I & Q inputs
to the Basic Rx along with an Rx mux setting that routes both ADC
outputs into the same DDC. (The default config with the Basic Rx
feeds a constant zero into the Q DDC input).

Eric

I believe Matt was right about his suspicions. The FPGA modifications were bad
for the RFX boards. Although they worked OK for the DBSRX board over all
ranges, and although we got an SFDR enhancement of 3 dB for low-level input
signals to the RFX board, when the input signal was high the RFX board’s I & Q
outputs were badly distorted, while the original USRP FPGA did not show such
distortion.

For the RFX boards, I tested both the TX/RX input and the RX2 input. The
TX/RX input shows a high attenuation.

The input signal was 1000.25 MHz. The RFX900 board was used in the tests.

The tests results can be found at :

http://rapidshare.com/files/79333552/DBSRX_Tests.tar.gz
http://rapidshare.com/files/79333746/RFX_Tests_TX_RX_Input.tar.gz
http://rapidshare.com/files/79333825/RFX_Tests_RX2_Input.tar.gz

Best Regards,

Firas

