USRP and USRP2 FFT result comparison analysis help

Hi, folks,

I recently ran some comparison tests with my USRP and USRP2 boxes and
got slightly different results.

Test setup

  • Signal source : HP signal generator (forgot the name of the equipment)
  • Daughter cards for USRP and USRP2 : BasicRX
  • Test configuration
    : I divided a 12.1MHz CW signal from the SigGen with a 3-dB divider
    and fed the outputs to both USRP and USRP2 boxes.

Please find the two snapshots here.
http://zoolu.co.kr/episodes/3
Both have a 4 MHz span and are centered at 12.3 MHz, with an FFT size of 8192.

Questions

  1. It seems that the noise floor of the USRP2 is about 10 to 12 dB
    lower than that of the USRP, while the USRP has better SNR. Which is
    generally considered more important when you plan to build a signal
    detection solution: a lower noise floor (you could probably detect
    weaker signals) or a better SNR?

  2. I can see that the USRP has a somewhat flatter noise floor, but
    the USRP2 has fewer spurious signals. Is that because of the
    different performance of the two ADCs, or are the two signals fed to
    the boxes actually different even though I branched them from a
    single source?

  3. I can see a rather strong peak at 12.3 MHz in the USRP. I guess
    this comes from the local oscillator signal of the CORDIC in the
    FPGA. However, I don’t see that signal in the USRP2. Is this leakage
    from the FPGA into the ADC on the USRP unit?
    If so, is there a good way to get rid of it?

  4. I know that the relative signal level changes with the FFT size.
    How do you calibrate the level? The first thing I can think of is to
    measure a reference signal of known level (e.g., 0 dBm) and add some
    code to calibrate the signal level. I just wonder whether there is a
    better method in general use.

Best regards,

Ilkyoung.

Hi,

  1. The comparison is not fair with these decimation values. For USRP2,
    you used a decimation of 25. Using this odd decimation will disable the
    FIR filters of the USRP2 FPGA.

  2. To make the comparison valid, I suggest you use a decimation of 64
    for the USRP and a decimation of 100 for the USRP2. In that case
    both of them will have a bandwidth of 1 MHz. Redo your experiments
    and post the results again.

  3. The ADC of the USRP2 is 14-bit, while that of the USRP is 12-bit.
    This should give the USRP2 about a 12 dB quantization-noise
    improvement over the USRP.
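
Firas's two numbers can be sanity-checked with a few lines of Python. This is just a sketch, assuming the stock ADC clock rates (64 MS/s on the USRP1, 100 MS/s on the USRP2) and the textbook 6.02·N + 1.76 dB formula for an ideal ADC's full-scale sine SNR:

```python
def complex_bandwidth_hz(adc_rate_hz, decimation):
    """Complex baseband rate (== usable bandwidth) after decimation."""
    return adc_rate_hz / decimation

def ideal_adc_snr_db(bits):
    """Ideal full-scale sine SNR of an N-bit ADC: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

# Decimation of 64 on the USRP1 and 100 on the USRP2 both give 1 MHz.
print(complex_bandwidth_hz(64e6, 64))    # -> 1 MHz
print(complex_bandwidth_hz(100e6, 100))  # -> 1 MHz

# Two extra bits (14 vs 12) is a ~12 dB quantization-noise advantage.
print(ideal_adc_snr_db(14) - ideal_adc_snr_db(12))
```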

Best Regards,

Firas

… and don’t forget to reduce the input signal level. I see distortion
in your graphs. Use -20 dBm as the input signal level.

Regards,

Firas

View this message in context:
http://www.nabble.com/USRP-and-USRP2-FFT-result-comparison-analysis-help-tp21438485p21441059.html
Sent from the GnuRadio mailing list archive at Nabble.com.

ILKYOUNG KWOUN wrote:

  1. It seems that the noise floor of the USRP2 is about 10 to 12 dB
    lower than that of the USRP, while the USRP has better SNR. Which is
    generally considered more important when you plan to build a signal
    detection solution: a lower noise floor (you could probably detect
    weaker signals) or a better SNR?

What you are seeing here is the effect of the PGA (programmable gain
amplifier) on the USRP. In your usrp_fft plot you can see it is set to
10dB. The USRP2 does not have an equivalent to this PGA, so to make a
better comparison, you should set the slider to 0dB manually, or use “-g
0” on the command line. Bringing the gain down will also lower the
noise floor of the USRP1, but not down to the level of the USRP2.

In general, the USRP2 will have a lower noise floor, and lower spurious
due to the 14-bit ADC.

Also, I think the two Python programs you are using will have the same
number of points in the FFT, but you should verify that if you are
trying to compare them. Remember, you can keep lowering the apparent
noise floor by using more points in your FFT. The real amount of noise
stays the same, but you are dividing it into more and more bins. 1024
is a good number to use, since it is comparable to the resolution of
your screen.
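
That effect is easy to reproduce. Here is a small NumPy sketch (not code from the thread) showing the apparent floor dropping about 3 dB per doubling of the FFT size, when the FFT is normalized so that a tone's peak height would stay fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

def apparent_floor_db(nfft, ntrials=200):
    """Average per-bin power (dB) of complex white noise, with the FFT
    scaled by 1/nfft so a sinusoid's peak height would be independent
    of the FFT size."""
    total = 0.0
    for _ in range(ntrials):
        x = rng.standard_normal(nfft) + 1j * rng.standard_normal(nfft)
        total += np.mean(np.abs(np.fft.fft(x) / nfft) ** 2)
    return 10 * np.log10(total / ntrials)

# Doubling the FFT size lowers the apparent floor by ~3 dB.
drop = apparent_floor_db(1024) - apparent_floor_db(2048)
print(round(drop, 2))
```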

  2. I can see that the USRP has a somewhat flatter noise floor, but
    the USRP2 has fewer spurious signals. Is that because of the
    different performance of the two ADCs, or are the two signals fed to
    the boxes actually different even though I branched them from a
    single source?

The spurs you see have a variety of causes. You didn’t say what signal
level you are feeding into the boards, but about 9 dBm will clip the
USRP2, and the USRP1 with its gain set to 0. Your gain is set to
10 dB, so -1 dBm would be the clipping point. ADCs are typically
spec’ed at “-1 dBFS”, which means a sinusoidal signal 1 dB below full
scale, so try 8 dBm. Signal generators may only be accurate to 0.2 to
0.5 dB, so you may have to adjust in 0.1 dB increments.

Also, the non-flat noise floor in the USRP2 may be a real part of the
signal you are looking at. It is very easy to see the phase noise of
the signal generator you are using. I actually wasted a couple of days
trying to figure out why I was seeing that much phase noise, until I
realized it was the noise of the generator and not the USRP2. I had
assumed that a high quality signal generator from Rohde and Schwarz
would have better phase noise than the USRP2, but that actually isn’t
the case at low frequencies like this. Once I got a better signal
generator with the “low phase noise option” I saw it go down quite a
bit.

  3. I can see a rather strong peak at 12.3 MHz in the USRP. I guess
    this comes from the local oscillator signal of the CORDIC in the
    FPGA. However, I don’t see that signal in the USRP2. Is this leakage
    from the FPGA into the ADC on the USRP unit?
    If so, is there a good way to get rid of it?

Another cause of your spurs is probably harmonics from the signal
generator aliasing into the passband. Remember, the BasicRX does not
have any filtering, and the ADCs will respond to signals well past 400
MHz. Even a very good signal generator may have 2nd and 3rd harmonics
which are only 50 dB down, and even higher order harmonics are
measurable. Use a lowpass filter to get a purer tone.
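
To see where harmonics of a 12.1 MHz tone would fold to, here is a small sketch (assuming the USRP1’s 64 MS/s ADC clock; with no front-end filtering, anything past Nyquist aliases back in):

```python
def aliased_freq_hz(f_hz, fs_hz):
    """Apparent frequency of a real signal at f after sampling at fs."""
    f = f_hz % fs_hz
    return min(f, fs_hz - f)

FS = 64e6    # USRP1 ADC clock (assumed)
F0 = 12.1e6  # test tone

# e.g. the 3rd harmonic (36.3 MHz) folds to 27.7 MHz, and the
# 5th harmonic (60.5 MHz) folds all the way down to 3.5 MHz.
for n in range(2, 7):
    print(n, aliased_freq_hz(n * F0, FS) / 1e6, "MHz")
```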

In testing the USRP2, I use a very high quality signal generator
followed by a lowpass filter with more than 35 dB of harmonic
rejection, and even then the harmonics from the generator are still at
roughly the same level as the USRP2’s own harmonic performance.

  4. I know that the relative signal level changes with the FFT size.
    How do you calibrate the level? The first thing I can think of is to
    measure a reference signal of known level (e.g., 0 dBm) and add some
    code to calibrate the signal level. I just wonder whether there is a
    better method in general use.

We automatically scale so that the height of the FFT peak for a
sinusoid (or any discrete spur) stays the same no matter how many
points are in the FFT. Anything wideband, like noise, will show up at
reduced levels when you use more points. To calibrate to an absolute
signal level, it is easiest to do just as you said and put in a known
amplitude. You could also compute it from the gains of everything in
the signal chain, but that is more work.
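
A one-point calibration of the kind described might look like the following sketch (the -11 dB reading and tone powers are made-up placeholder numbers, not measurements from the thread):

```python
def calibration_offset_db(reference_dbm, displayed_db):
    """Offset mapping the display's relative dB scale to absolute dBm,
    derived from one reference tone of known power."""
    return reference_dbm - displayed_db

# Hypothetical example: a 0 dBm reference tone reads -11 dB on the display.
offset_db = calibration_offset_db(0.0, -11.0)

def displayed_to_dbm(displayed_db):
    """Convert any subsequent display reading to absolute dBm."""
    return displayed_db + offset_db

print(displayed_to_dbm(-11.0))  # the reference maps back to 0 dBm
print(displayed_to_dbm(-45.0))  # an unknown at -45 dB reads as -34 dBm
```

The offset is only valid while the gain settings and FFT size stay the same as when the reference was measured.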

Thank you for performing these tests. It is good to have many people
measuring these things.

Matt

Thank you so much, guys.

I will set up another experiment following your advice and post the
results. Since I have to arrange time in my friend’s lab, I probably
can only do it on the weekend. :)

Best regards,

Ilkyoung.

You are correct about the FFT level not scaling with FFT size. I
remember putting that in some time ago, but it isn’t there now, so I
don’t know what happened. In any case, I did some further analysis on
the spurs. We are seeing two different things on the USRP1 and USRP2.

The USRP1 definitely has a DC offset problem. I thought I had fixed
this a long time ago, but I guess it never got put in there. It isn’t
hard to fix, but I’m busy with other things right now. I would be happy
to accept a patch for the verilog code.

On the USRP2, I believe that what you are seeing is perfectly
reasonable. If you put in a full-scale signal (about +8 dBm), the peak
shows up at about 31 dB (on the arbitrary scale of the display,
assuming you use 8192 points in the FFT). You are seeing the 12.1 MHz
(DC) spur at about -60 dB, which is about -90 dB full scale. Since we
are only sending 16-bit numbers over the bus, you can’t expect better
dynamic range than about 96 dB.
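
The 96 dB figure is just the best-case dynamic range of a 16-bit word:

```python
import math

def dynamic_range_db(bits):
    """Best-case dynamic range of a b-bit sample path: 20*log10(2**b)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # about 96.3 dB
# So a spur at -90 dBFS already sits close to the 16-bit transport limit.
```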

Matt

Guys,

I managed another comparison test last night.
http://zoolu.co.kr/episodes/6

*. Test tone signal quality
I used the best siggen I could get. :) According to the spectrum
analyzer, the siggen did not generate any noticeable harmonics (1st
and 2nd pictures). Actually, I raised the power level up to 0 dBm
during last night’s test, but I took the snapshots at a -34 dBm level.

*. Signal level
I reduced the signal level to -34 dBm just to avoid any possible
input saturation. I believe that is low enough. :)

My questions

  1. Still getting spurs
    As you can see in the 3rd and 4th pictures, I still have quite
    significant spurs in band. However, when I match the local
    oscillator frequency to that of the test signal, as in picture #5,
    all the spurs go away. So I still suspect those spurs come from LO
    leakage out of the FPGA CORDIC. (I don’t believe that happens in
    the digital domain; more likely the digital switching noise spills
    over into the analog part and is fed to the ADC input. That’s my
    best guess.) Are there any comments, corrections, or suggestions? I
    probably have to look into Kyle Pearson’s work on rounding in the
    FPGA that Frank mentioned before.

  2. Automatic signal level scaling
    I compared different FFT sizes and the peak level varied. As you
    can see, the peak level was -11 dB with an FFT size of 8192 and
    -19 dB with 1024, while the absolute signal power was -34 dBm in
    both cases. The difference is about 8 dB, which roughly matches the
    factor of 8 (= 2^3) between the FFT sizes (10*log10(8) ≈ 9 dB).
    Which part of the code does the automatic scaling?

Ilkyoung.

ILKYOUNG KWOUN wrote:

Guys,

I managed another comparison test last night.
http://zoolu.co.kr/episodes/6

Just to follow up on this a little more: if you generate a complex sine
wave in Octave or MATLAB, quantize it to 16 bits, and then plot the
FFT, you will see similar behavior.
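
For example, a NumPy version of that experiment (a sketch, not code from the thread) might look like:

```python
import numpy as np

N = 8192
n = np.arange(N)
# Full-scale complex tone placed exactly on a bin, so there is no leakage.
x = np.exp(2j * np.pi * 123 * n / N)
# Quantize I and Q to 16-bit resolution (full scale = 32767).
xq = np.round(x.real * 32767) / 32767 + 1j * np.round(x.imag * 32767) / 32767

# FFT scaled by 1/N so the tone sits at 0 dBFS; tiny floor added to
# avoid log10(0) on exactly-zero bins.
X_db = 20 * np.log10(np.abs(np.fft.fft(xq)) / N + 1e-300)
tone_db = X_db.max()
worst_spur_db = np.sort(X_db)[-2]  # strongest bin other than the tone

# The quantization spurs land somewhere below about -90 dBFS.
print(tone_db, worst_spur_db)
```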

Matt