On 11/17/2010 02:20 AM, Matt E. wrote:
Matt,
OK, to follow up after a few experiments this morning. I added roughly
40 dB of well-filtered gain (in addition to the 35 dB or so that was
already there) in front of the USRP2/BasicRX. That appears to have
made sample rates below 400 kHz start working, so I got caught up by
a red herring at decimation=256; that was a coincidence, but my oh
my, what a coincidence!
So, I have over 70 dB of gain in front of the USRP2 + BasicRX combination,
and I have a stripped-to-the-bones app for investigating things
that consists of a UHD source, an FFT sink block, and a
complex-to-mag block feeding a number sink.
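In case it helps reproduce, here’s a rough sketch of that flowgraph in
GNU Radio Python. Block names follow the current API, the device
address and rate are placeholders, and I’ve swapped the GUI sinks for
a signal probe so it runs headless:

from gnuradio import gr, uhd, blocks
import time

class mag_probe(gr.top_block):
    def __init__(self, rate=400e3):
        gr.top_block.__init__(self)
        # UHD source delivering complex floats from the USRP2
        self.src = uhd.usrp_source("addr=192.168.10.2",  # placeholder
            uhd.stream_args(cpu_format="fc32", channels=[0]))
        self.src.set_samp_rate(rate)
        self.mag = blocks.complex_to_mag()     # |sample|
        self.probe = blocks.probe_signal_f()   # stands in for the number sink
        self.connect(self.src, self.mag, self.probe)

if __name__ == "__main__":
    tb = mag_probe()
    tb.start()
    for _ in range(10):
        time.sleep(1)
        print("mag = %.6f" % tb.probe.level())
    tb.stop()
    tb.wait()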
What I’m seeing is that the magnitudes (as seen in the number sink)
coming off the source, even with roughly 75 dB of gain ahead of it,
are roughly 0.002 to 0.003 when I’m sampling at 400 kHz, and
roughly 0.0006 to 0.0007 when the bandwidth is 250 kHz. If you
treat those numbers as voltages, we’re talking a roughly 10 dB
drop in apparent average power level from reducing the bandwidth
by less than 3 dB. Both 400 kHz and 250 kHz use a decimation that is
both even and a multiple of 4, so they should be using exactly
the same filter sequence in the decimator, correct?
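Just to sanity-check my own arithmetic on those readings (plain
Python, using the midpoints of the ranges above):

import math
v400 = 0.0025    # midpoint of the 0.002..0.003 seen at 400 kHz
v250 = 0.00065   # midpoint of the 0.0006..0.0007 seen at 250 kHz
# Treating the magnitudes as voltages, the apparent power change:
print("power drop: %.1f dB" % (20 * math.log10(v400 / v250)))   # ~11.7 dB
# ...versus what the bandwidth change alone should account for:
print("bandwidth:  %.1f dB" % (10 * math.log10(400e3 / 250e3))) # ~2.0 dB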
What surprises me is how tiny these numbers are; I’m connected to an
antenna outdoors, and the system can easily “see”
distant CB stations, for example (I use signals on the CB bands as a
kind of gross sanity test for sensitivity).
So now, I’m wondering how things are scaled between the ADC and the
host. The ADC output is 14-bit two’s-complement signed, and then it
goes into the FPGA. Do you do 32-bit arithmetic inside the FPGA and
then re-scale back to 16 bits?
Then those 16-bit samples get squirted over the GigE link, where UHD
picks them up and re-scales them into +/-1.0 floating-point numbers, yes?
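For concreteness, here’s the host-side conversion I’m assuming (a
sketch of my mental model, not the actual UHD code; the interleaved
int16 wire layout and the divide-by-32768 scaling are my guesses):

import numpy as np

def sc16_to_fc32(raw):
    # raw: interleaved int16 I/Q as received over GigE (assumed layout)
    iq = raw.astype(np.float32) / 32768.0   # full-scale int16 -> +/-1.0
    return iq[0::2] + 1j * iq[1::2]

# If the 14-bit ADC codes sit right-justified in the 16-bit field,
# full scale only reaches 8191/32768 ~= 0.25; left-justified (shifted
# up 2 bits) it reaches ~1.0. Which is it?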