USRP Input Calibration / CIC Filter Response

Hi,

I’m trying to calibrate the USRP so I can get true dBm readings out of
it. I’m using an LFRX card with a decimation of 16 and a center
frequency of 2 MHz so I can look at the range from DC to 4 MHz. I
hooked up a signal generator, swept it through the DC to 4 MHz range at
100 kHz intervals, and generated the plot here:

http://ice.cc.gt.atl.ga.us/usrp_adc_response.jpg

The signal generator produced a -36 dBm signal for each frequency, and
the y-axis of this plot is the output of the USRP’s ADC in dB. (20 *
log10 of the appropriate FFT bin for the frequency of interest, using
an 8192 point FFT).
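The per-bin level computation described above can be sketched as follows; this is a minimal NumPy sketch where the 4 MS/s complex rate (64 MHz / 16), the tone frequency, and the unit-amplitude synthetic signal are assumptions standing in for the actual capture:

```python
import numpy as np

fs = 4e6               # complex sample rate at decimation 16 (assumption: 64 MHz / 16)
n_fft = 8192
f_tone = 1.0e6         # hypothetical test-tone offset within the band

# synthetic stand-in for captured USRP samples: a unit-amplitude complex tone
t = np.arange(n_fft) / fs
x = np.exp(2j * np.pi * f_tone * t)

# level of the FFT bin nearest the tone, in dB (relative, uncalibrated)
spec = np.fft.fft(x, n_fft)
bin_idx = int(round(f_tone / fs * n_fft))
level_db = 20 * np.log10(np.abs(spec[bin_idx]))
```

For a bin-centered unit tone this comes out to 20·log10(N), the FFT's coherent gain, which is why the absolute numbers only become dBm after calibration.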

My question is: what is causing the particular shape of this response?
I’m guessing it has to do with the CIC filter, but I can’t seem to
reconcile my output with the plots from this post on the CIC filter’s
response:

http://lists.gnu.org/archive/html/discuss-gnuradio/2007-05/msg00191.html

…particularly this plot:

http://www.nabble.com/file/8500/(0%20-%2032)MHz)%20CIC%20Frequency%20Response%20(CIC%20decimation%20%3D32).JPG

Can someone please explain what’s going on here?

My end goal is to fit an equation to the dB “correction factor” I
calculate in order to normalize across the entire 4 MHz chunk of
spectrum.
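The correction-factor fit could look something like the sketch below; the synthetic "measured" values and the degree-4 polynomial are assumptions standing in for the real sweep data and whatever model ends up fitting best:

```python
import numpy as np

# hypothetical sweep results: frequency (MHz) vs. measured level for a -36 dBm tone
f_mhz = np.linspace(0.1, 3.9, 39)
measured_db = -36.0 - 0.5 * (f_mhz - 2.0) ** 2   # synthetic stand-in for the plot

# correction factor: dB to add so every point reads -36 dBm again
correction_db = -36.0 - measured_db

# fit a polynomial to the correction curve (degree 4 is an arbitrary choice)
coeffs = np.polyfit(f_mhz, correction_db, deg=4)
correction = np.poly1d(coeffs)
```

Working in MHz rather than Hz keeps the polynomial fit well conditioned; `correction(f)` then gives the dB to add at any frequency in the band.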

Thanks,

Erich

On Thursday 19 March 2009 22:43:56 Erich Stuntebeck wrote:

The signal generator produced a -36 dBm signal for each frequency, and
the y-axis of this plot is the output of the USRP’s ADC in dB. (20 *
log10 of the appropriate FFT bin for the frequency of interest, using
an 8192 point FFT).

My question is: what is causing the particular shape of this response?
I’m guessing it has to do with the CIC filter, but I can’t seem to
reconcile my output with the plots from this post on the CIC filter’s
response:

I think there are two parts:

  • Damping at 0 and 4 MHz: this is most probably the typical CIC shape,
    [sin(x)/x]^4 (four CIC stages).
  • The ~500 kHz ripple: this is most probably an artifact of how you
    generated the response. There may be a small frequency offset between
    the USRP and your signal generator. The USRP oscillator is specified
    as 64 MHz +/- x ppm (parts per million, x being either 20 or 50
    depending on the USRP revision). If you take the power from just a
    single bin, you may be reading the wrong bin.
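The first point, the [sin(x)/x]^4 CIC shape, can be evaluated directly. This sketch uses the exact sine-ratio form of an N-stage, decimate-by-R CIC; the 64 MHz input rate and four stages come from the thread, the rest is a generic textbook formula:

```python
import numpy as np

fs_in = 64e6   # USRP ADC clock
R = 16         # CIC decimation
N = 4          # number of CIC stages

def cic_response_db(freqs_hz):
    """N-stage decimate-by-R CIC magnitude response, normalized to 0 dB at DC."""
    f = np.atleast_1d(np.asarray(freqs_hz, dtype=float))
    num = np.sin(np.pi * f * R / fs_in)
    den = np.sin(np.pi * f / fs_in)
    ratio = np.full_like(f, float(R))   # limit of num/den as f -> 0 is R
    nz = np.abs(den) > 1e-15
    ratio[nz] = num[nz] / den[nz]
    return 20 * N * np.log10(np.abs(ratio) / R)
```

Evaluating this at the 2 MHz band edge gives roughly -15.7 dB of droop, which is the smooth roll-off visible in the plot; the 500 kHz ripple is not part of this shape.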

One possibility would be to take several bins around the “correct” bin
and sum up their powers. Actually, you don’t need the FFT at all; you
can just calculate the power of the incoming signal in the time domain.
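The time-domain alternative is a one-liner; the synthetic tone below is only a sanity check, not real capture data:

```python
import numpy as np

def mean_power_db(samples):
    """Average power of complex baseband samples, in dB (relative units)."""
    return 10 * np.log10(np.mean(np.abs(samples) ** 2))

# sanity check: a 0.5-amplitude tone has mean power 0.25
t = np.arange(4096) / 4e6
tone = 0.5 * np.exp(2j * np.pi * 300e3 * t)
```

Since the generator outputs a single tone per sweep step, the total time-domain power is the tone power, and no bin selection (or frequency alignment) is needed at all.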

A somewhat related question: which firmware are you using - with or
without the halfband filter?

Regards,

Stefan


Stefan Brüns / Bergstraße 21 / 52062 Aachen
phone: +49 241 53809034 mobile: +49 151 50412019

On Thu, Mar 19, 2009 at 05:43:56PM -0400, Erich Stuntebeck wrote:

The signal generator produced a -36 dBm signal for each frequency, and

What is the daughterboard gain set to?
If you look at the time domain samples, what is their range?
You might try increasing the power a bit and see if anything changes.
Did you collect that data with usrp_rx_cfile.py? If so, what was the
exact command line that you used?

What kind of a signal generator are you using?
Have you tried sweeping the same way into a spectrum analyzer?
If so, what did you see?

Eric

On Mon, Mar 23, 2009 at 12:58:17PM -0400, Erich Stuntebeck wrote:

I’m using the firmware without the half-band filter.

The signal generator is an Agilent 33220A. I don’t have a spectrum
analyzer, but I do have two of the 33220As, and they both perform the
same way, so I’m guessing it’s not a calibration issue with the signal
generator.

Thanks, Erich.

Without seeing the time domain samples, I’m guessing that the gain is
too low and/or the input is too low, and that what you’re seeing is
the result of only a few of the least significant bits moving in
received samples. That is, you’re not using the full dynamic range of
the signal processing pipeline. I suggest running it again with the
Rx gain set to its maximum value. Let us know how it turns out.
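The sample-range question above can be answered with a quick headroom check; the 16-bit signed full-scale value is an assumption about the USRP sample format:

```python
import numpy as np

ADC_FULL_SCALE = 32767.0   # assumption: 16-bit signed shorts from the USRP

def headroom_db(samples):
    """dB between the largest observed sample and ADC full scale."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(ADC_FULL_SCALE / peak)
```

If the headroom comes out at, say, 40 dB or more, the peaks are a factor of 100 below full scale and only a handful of LSBs are toggling, which is exactly the quantization-limited regime described above.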

Eric

On Mon, Mar 23, 2009 at 10:44:21AM -0700, Eric B. wrote:

the bin nearby with the maximum amplitude.

A person who desires to remain anonymous passes on this comment:

Eric, I saw your response on the mailing list (below). I think the
problem has nothing to do with gain.

In my opinion the mistake is that Erich is using the FFT without
applying a window function first; in other words, he’s using a
‘rectangle’ window. A rectangle window is a bad choice for
amplitude-flatness measurements, because the amplitude response depends
on whether the signal sits at a bin center or between two bins (the
worst case). There is a difference of roughly 3.9 dB between these two
scenarios.

If he wants to measure flatness, he should use a window function
designed to give the same amplitude response at any frequency, be it
bin-centered or not. A ‘flattop’ window (google for it) is recommended
in that case - it has 0.02 dB flatness.
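The scalloping effect described here can be demonstrated numerically. In this sketch the flattop coefficients match a commonly used definition (e.g. SciPy's), and the half-bin tone offset is the worst case for the rectangle window:

```python
import numpy as np

n = 8192
fs = 4e6
t = np.arange(n) / fs

# worst case for a rectangle window: tone exactly between two FFT bins
f_tone = 1000.5 * fs / n
x = np.exp(2j * np.pi * f_tone * t)

def peak_db(x, w):
    w = w / w.sum()   # unity coherent gain: a bin-centered unit tone reads 0 dB
    return 20 * np.log10(np.abs(np.fft.fft(x * w)).max())

# flattop window built from its cosine-sum definition
a = [0.21557895, 0.41663158, 0.277263158, 0.083578947, 0.006947368]
k = np.arange(n)
flattop = sum(((-1) ** i) * a[i] * np.cos(2 * np.pi * i * k / (n - 1))
              for i in range(5))

rect_loss = peak_db(x, np.ones(n))   # scalloping loss of the rectangle window
flat_loss = peak_db(x, flattop)      # flattop stays within ~0.02 dB of 0
```

The rectangle window loses close to 4 dB in this worst case, while the flattop result stays essentially flat, which is why it suits amplitude calibration sweeps like this one.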