I updated the repo with results for the N210+WBX:

Our single tone sweep results look very similar. Your IMD plot should be in dBc: plot the difference in power between one of the test tones and one of the IMD3 products. Mine don’t go below -50 dBc since I have the input attenuator on the analyzer set for the higher power level; I need to reduce it for the lower power levels.
- Yes, that would be a calibration factor. If you measure 0 dBm on the spectrum analyzer and -20 dBFS in the FFT, then 20 dB is the calibration factor: -20 dBFS + 20 dB = 0 dBm. Note that when you change the RX gain, the calibration factor changes. If you increase the gain by 10 dB, you must decrease the calibration factor by 10 dB.
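That arithmetic can be sketched as a small helper. The function names and the 20 dB figure are just the example values from this message, nothing standard:

```python
def dbfs_to_dbm(level_dbfs, cal_factor_db):
    """Convert an FFT reading in dBFS to absolute power in dBm."""
    return level_dbfs + cal_factor_db

def adjust_cal_for_gain(cal_factor_db, gain_change_db):
    """Raising the RX gain by N dB lowers the calibration factor by N dB."""
    return cal_factor_db - gain_change_db

cal = 20.0                            # -20 dBFS read as 0 dBm on the analyzer
print(dbfs_to_dbm(-20.0, cal))        # 0.0
cal = adjust_cal_for_gain(cal, 10.0)  # RX gain increased 10 dB
print(dbfs_to_dbm(-20.0, cal))        # -10.0
```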
- Using the FFT to manually measure level can be a problem if the tone is split between bins, so it’s better to use a coarse FFT. Measuring all the power in the channel at once may be preferable, and is like using a power meter: use complex_to_mag_squared, followed by integrate_with_decimation, then 10*log() + K, where K is your calibration factor. If you decimate down to 1 Hz, i.e. decimation_rate = sample_rate, you can get very precise power readings. The accuracy drops off as the tone power falls toward the wideband noise, just like a power meter.
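Here is a NumPy sketch of that chain, assuming an arbitrary sample rate, tone level, and calibration factor K. In a real flowgraph these steps would be the complex_to_mag_squared, integrate, and nlog10 blocks; the integrate stage is written as a mean here, folding the 1/N scaling into K:

```python
import numpy as np

sample_rate = 250_000                    # assumed sample rate, Sa/s
K = 20.0                                 # assumed calibration factor, dB

t = np.arange(sample_rate) / sample_rate
x = 0.1 * np.exp(2j * np.pi * 1e3 * t)   # complex tone at -20 dBFS

mag_sq = np.abs(x) ** 2                  # complex_to_mag_squared
avg = mag_sq.mean()                      # integrate, decimation_rate = sample_rate
power_db = 10 * np.log10(avg) + K        # 10*log() + K

print(round(power_db, 1))                # ~0 dBm: all channel power captured at once
```

Because the whole second of samples is averaged, bin-splitting is irrelevant; the reading is the total channel power.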
You would have to look at the USRP FPGA block diagram to find out exactly what is going on between the input and the FFT, but it is essentially fine tuning with an NCO followed by several stages of filtering and decimation. I’m sure it affects the amplitude slightly, as different filters are used for different decimations, and for odd vs. even decimation.
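As a toy illustration of that structure (not the actual FPGA DSP, which uses CIC and halfband stages; the rates, offset, and boxcar filter here are arbitrary stand-ins), the fine tuning and one decimation stage look like:

```python
import numpy as np

fs = 100_000                              # assumed input rate, Sa/s
f_offset = 12_345                         # signal sitting off-center, Hz
decim = 10

t = np.arange(fs) / fs
x = np.exp(2j * np.pi * f_offset * t)     # input complex tone

nco = np.exp(-2j * np.pi * f_offset * t)  # NCO tuned to the offset
baseband = x * nco                        # fine tuning: tone now at DC

taps = np.ones(decim) / decim             # crude boxcar low-pass (stand-in filter)
filtered = np.convolve(baseband, taps, mode="same")
decimated = filtered[::decim]             # one stage of decimation
```

The passband ripple of whatever filter set is selected for a given decimation is exactly where the small amplitude differences come from.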
- Compression is the drop in gain (not power). For example, look at this table, where the USRP TX gain is stepped in 1 dB increments and the output power is measured:

USRP_TX_Gain_dB, Pout_dBm, Gain = Pout_dBm - USRP_TX_Gain_dB
3.0,  8.0, 5.0
4.0,  9.0, 5.0
5.0,  9.9, 4.9
6.0, 10.7, 4.7
7.0, 11.3, 4.3
8.0, 12.0, 4.0  <<-- This is the P1dB
9.0, 12.6, 3.6

The P1dB is where the gain has dropped from 5.0 to 4.0. Referenced to the output, the P1dB is 12.0 dBm; referenced to the USRP TX gain setting, it is 8.0 dB. Notice it is also where the digit after the decimal point repeats itself when the input is stepped in 1.0 dB increments; i.e. it went from 5.zero to 4.zero. This is the quick-n’-dirty method of finding P1dB: if the input moves in 1 dB steps, all you need to monitor is the first digit after the decimal point.
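The same search is easy to automate. A small sketch using the table’s numbers, taking the small-signal gain from the lowest setting and looking for the first 1 dB drop:

```python
table = [                     # (USRP_TX_Gain_dB, Pout_dBm) from the table above
    (3.0, 8.0), (4.0, 9.0), (5.0, 9.9), (6.0, 10.7),
    (7.0, 11.3), (8.0, 12.0), (9.0, 12.6),
]

small_signal_gain = table[0][1] - table[0][0]   # 5.0 dB at the lowest setting
p1db = next((g, p) for g, p in table
            if small_signal_gain - (p - g) >= 1.0)
print(p1db)                   # (8.0, 12.0): TX gain setting 8.0 dB, output 12.0 dBm
```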
- I have not done RX testing. My signal generators are non-synthesized and have no digital interface. I bought a new one on eBay and it should be here next week, but I still need another for a two-tone test, not to mention the components to achieve proper isolation between the two.
Thanks,
Lou
KD4HSO
Gayathri Ramasubramanian wrote:
This changed to ~ 31 dBm at 900 MHz and ~ 23 dBm at 1.8 GHz.
Plots for the Questions.docx (47K)
<http://gnuradio.4.n7.nabble.com/attachment/49883/0/Plots%20for%20the%20Questions.docx>