GR, USRP, and GPIB measurements

There have been a few messages on both GR and USRP lists about choosing
proper hardware gain settings. I figured I’d write some code to sweep a
B200 over frequency and gain, and generate some plots that are handy for
choosing proper TX gain. It also demonstrates controlling test
equipment
via Linux GPIB while controlling the GR flow, and plotting the results,
which makes a handy piece of test gear on the bench. I plan on doing
the RX
side next, where results must be pulled from the GR flow via messages or
probes.
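For anyone curious before looking at the repo, the skeleton is roughly the
following. This is only a minimal sketch: the GPIB address, the 8566B
command strings, and the flowgraph parameters are assumptions, and the
real script in the repo does considerably more.

    # Sketch: transmit a tone from a GR flowgraph while stepping TX gain
    # and reading the level from an HP 8566B over linux-gpib.
    import time
    import Gpib                                   # python module from linux-gpib
    from gnuradio import gr, analog, uhd

    class tone_tx(gr.top_block):
        def __init__(self, freq, samp_rate=250e3):
            gr.top_block.__init__(self)
            self.usrp = uhd.usrp_sink(device_addr="",
                                      stream_args=uhd.stream_args(cpu_format='fc32'))
            self.usrp.set_samp_rate(samp_rate)
            self.usrp.set_center_freq(freq)
            tone = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 100e3, 0.707)
            self.connect(tone, self.usrp)

    sa = Gpib.Gpib(0, 18)                         # board 0, analyzer at GPIB address 18
    tb = tone_tx(100e6)
    tb.start()
    for gain in range(0, 90, 5):
        tb.usrp.set_gain(gain)
        time.sleep(0.5)                           # let the output and the analyzer settle
        sa.write('MKPK HI;MKA?')                  # 8566B: peak search, query marker level
        print("%d dB gain -> %s dBm" % (gain, sa.read(32).strip()))
    tb.stop()
    tb.wait()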

Thanks,
Lou
KD4HSO



1) It looks like the amplitude in UHD_SIGGEN_GUI is normalized to the
peak amplitude of the combined two tones, thus it is safe to set to 1.0.
Notice if you switch to a single tone, the amplitude increases when
viewed on the spectrum analyzer. You must measure your output with a
power meter or spectrum analyzer to get the power in dBm, then adjust
the TX gain to change the level. That was really the premise behind my
experiment: to figure out what the output power is for various gains and
frequencies. I believe the max limit on the WBX is 31.0 since it has two
5-bit attenuators. I suppose I could have looked in the data sheet for
the B200 transceiver chip.

  2. I just chose 60 dB as I had to use something when initializing the
    USRP in the code. It was really just arbitrary. No, with the WBX you
    will be limited to 0-31 dB for the TX gain setting, as the B200 is
    different hardware than the WBX. I actually have an N210+WBX also,
    but I have not gotten around to measuring it yet.

  3. The one tone test uses a tone at 100 kHz. The two tone test uses
    tones at 50 kHz and 75 kHz, hence 25 kHz tone spacing at a 62.5 kHz
    offset. All fall within the 250 kHz sampling rate. It does not matter
    if the intermod products fall outside the sampling rate as those are
    generated after the (ideal) DAC.

  4. You must always stay below DAC full scale (a +/- 1 signal). Even a
    sinusoid of amplitude 1.0 is not good since the DSP within the FPGA
    tends to periodically overflow; at least I'm seeing that behavior
    with the B200. 3 dB below full scale (1/sqrt(2)) is sort of an
    arbitrary choice I made. The two tones have a 3 dB crest factor; the
    power crests up to 3 dB above average power. Hence I wanted the power
    crest to still be 3 dB below full scale. Digital modulations and
    multi-carrier signals can have much higher crest factors, thus you
    must back off to prevent DAC overflow.
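Restating that arithmetic in numbers (plain dBFS bookkeeping, using
1/(2*sqrt(2)) ~ 0.354 for each of the two tones; this is just for
illustration, not code from the sweep script):

    from math import sqrt, log10
    dbfs = lambda a: 20 * log10(a)       # tone amplitude relative to DAC full scale
    print(dbfs(1.0))                     #  0.0 dBFS: full-scale sinusoid
    print(dbfs(1 / sqrt(2)))             # -3.0 dBFS: the single-tone setting
    print(dbfs(1 / (2 * sqrt(2))))       # -9.0 dBFS: each tone of the two-tone pair
    print(dbfs(2 / (2 * sqrt(2))))       # -3.0 dBFS: envelope peak when the tones add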

  5. 89 dB is the max setting for the B200. I just chose 70 dB as that
    provides 0 dBm output power at 100 MHz.

  6. If the Tektronix analyzer has a serial or Ethernet interface, maybe
    you could control it through that. I used Linux GPIB. I planned on
    using PyVISA but Linux GPIB already had a python module. I may update
    it to use PyVISA, as VISA provides a hardware abstraction so you can
    use GPIB, serial, Ethernet, etc. without having to change the code.
    You will need to create a new class within instruments.py for the
    Tektronix analyzer, duplicating the methods of the 8566B (a skeleton
    is sketched below).
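Something along these lines would be the starting point. This is a
hypothetical skeleton only: the method names are guesses at what the
8566B class in instruments.py provides, and the SCPI strings must be
adjusted to whatever the Tektronix analyzer actually accepts.

    import Gpib

    class TektronixAnalyzer(object):
        # Skeleton instrument class mirroring the HP 8566B one.
        def __init__(self, board=0, pad=20):     # GPIB address 20 is an assumption
            self.dev = Gpib.Gpib(board, pad)

        def set_center_freq(self, freq_hz):
            self.dev.write('FREQ:CENT %e' % freq_hz)

        def set_span(self, span_hz):
            self.dev.write('FREQ:SPAN %e' % span_hz)

        def get_peak_power_dbm(self):
            self.dev.write('CALC:MARK:MAX')      # marker to peak
            self.dev.write('CALC:MARK:Y?')       # query marker amplitude
            return float(self.dev.read(64))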

Thanks,
Lou
KD4HSO

Hi.

Thank you for your detailed response; it helps make things clear.
Could you also clarify the questions below.

  1. While you increased the gain, were you also looking for the 1 dB
    compression point in the one tone test and the IIP3 or OIP3 point in
    the two tone test? If so, what values did you get for the B200 USRP?

I would like to get a reference for a USRP N210+WBX test. Usually the
Ettus web site specifies values only for the daughterboard; for example
the WBX is said to have 5 to 10 dBm IIP3. So we don't know the values
for the entire device.

  2. Also in your notes, you have said: "The one tone TX test consists
    of a 0.707 amplitude tone at 100 kHz offset from the center
    frequency. The tone amplitude is -3 dBFS, which is half power from
    full scale of the DAC." and "The two tone TX test consists of 0.304
    amplitude tones with 25 kHz spacing, offset 62.5 kHz from the center
    frequency. Each tone amplitude is -9 dBFS, thus a combined average
    power at -6 dBFS, and peak instantaneous power at -3 dBFS."

Can you explain this a little more? How did you map 0.707 amplitude to
-3 dBFS? You say it's half power of the full scale DAC, so what is the
full scale DAC power?
For a USRP2+WBX I found the following measurements on the spectrum
analyzer while sending a single tone with UHD_SIGGEN.py:

Ampl    Gain  Attenuator  Pout on spec analyzer  Actual power (Pout + 60)
0.707   25    60          -42.48 dBm              17.52 dBm
0.707   20    60          -48.53 dBm              11.47 dBm
0.707   15    60          -56.94 dBm               3.06 dBm
0.707   10    60          -64.75 dBm              -4.75 dBm
0.707    5    60          -74.36 dBm             -14.36 dBm

However, I don't know how to relate these to the DAC power scale.

Similarly for two tone you have said “The two tone TX test consists of
0.304 amplitude tones with 25 kHz spacing, offset 62.5 kHz from the
center
frequency. Each tone amplitude is -9 dBFS, thus a combined average power
at
-6 dBFS, and peak instantaneous power at -3 dBFS.”

I get the relation like: 1/sqrt(2) = 0.707 ==> -3 dBFS, and
1/(2*sqrt(2)) = 0.304 ==> -3 dBFS + 20*log10(1/2) = -3 dBFS - 6.02 = -6 dBFS.

Is my understanding correct? Could you also clarify the calculation of
the average power as -6 dBFS, and the instantaneous power as -3 dBFS?

I look forward to your response.

Thanks
Gayathri

  1. I have not calculated P1dB or IP3 yet; I will have to do that. I
    find it's easier to look at a graph of IMD3 levels since it is more
    intuitive than working from the IP3, which is more useful for
    calculations (the standard relation is sketched after this list).

  2. 0 dBFS would be the DAC swinging full scale, i.e. a -1 to +1 float
    value (DAC voltage or current if you want to think of it that way)
    from GNU Radio. That is the maximum power you can get from the DAC.
    So to drop the power by 1/2 (i.e. -3 dB), the amplitude must drop to
    1/sqrt(2) = 0.707. You always need to stay backed off from full
    scale, so -3 dB is a good point. You don't care about the power from
    the DAC, only the power from the DAC + amplifier. The DAC should be
    very linear compared to the amplifier, and it is, since the IMD
    really drops off with amplifier gain setting.
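For reference, the standard two-tone relation between the IMD3 level in
dBc and the output IP3 (a textbook formula, not derived from these
measurements) is OIP3 = P_tone + |IMD3 dBc| / 2:

    def oip3_dbm(p_tone_dbm, imd3_dbc):
        # imd3_dbc is negative; -40 means the products are 40 dB below the tones
        return p_tone_dbm + abs(imd3_dbc) / 2.0

    print(oip3_dbm(0.0, -40.0))   # two 0 dBm tones with -40 dBc IMD3 -> OIP3 ~ +20 dBm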

Your measurements should be dropping in 5 dB steps; they are not. You
can either decrease your video bandwidth, or better yet remove that
60 dB attenuator. Most spectrum analyzers can handle 30 dBm with the
input attenuator at maximum. 60 dB is too much, not to mention it's
probably not exactly 60 dB.

Let's say you have two tones, each at 0 dBm, at 100 Hz and 101 Hz. The
total (mean square) power is 3 dBm; i.e. measure each tone individually
and sum the power. At the beat frequency of 1 Hz the amplitudes add
constructively and destructively, i.e. doubling and going to nothing.
That amplitude doubling gives an instantaneous (peak) power of 6 dBm,
then it disappears to nothing, hence the average is still 3 dBm. So for
two tones the peak power is 3 dB over the average power, or 6 dB over
that of a single tone. That's why I set each tone power to -9 dBFS, so
the peak power is -3 dBFS.
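A quick numerical check of those numbers with complex baseband tones
(numpy; the sample rate and tone frequencies are arbitrary):

    import numpy as np

    fs = 1000.0                                          # sample rate, Hz
    t = np.arange(int(10 * fs)) / fs                     # ten seconds covers the 1 Hz beat
    x = np.exp(2j*np.pi*100*t) + np.exp(2j*np.pi*101*t)  # two equal tones, 1 Hz apart

    p = np.abs(x)**2                                     # instantaneous envelope power
    print(10*np.log10(np.mean(p)))                       # ~3.0 dB above one tone: average
    print(10*np.log10(np.max(p)))                        # ~6.0 dB above one tone: peak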

Thanks,
Lou
KD4HSO




I updated the repo with results for the N210+WBX:

Our single tone sweep results look very similar. Your IMD plot should be
in dBc; plot the difference in power between one of the test tones and
one of the IMD3 products. Mine don't go below -50 dBc since I have the
input attenuator on the analyzer set for the higher power level. I need
to reduce it for the lower power levels.

  1. Yes, that would be a calibration factor. If you measure 0 dBm on
    the spectrum analyzer, and -20 dBFS in the FFT, then 20 dB is the
    calibration factor, so -20 dBFS + 20 dB = 0 dBm. Note when you change
    the RX gain, the calibration factor changes. If you increase the gain
    10 dB, then you must decrease the calibration factor 10 dB.

  2. Using the FFT to manually measure the level can be a problem if the
    tone is split between bins, so it's better to use a coarse FFT.
    Measuring all the power in the channel at once may be preferable, and
    is like using a power meter. Use complex_to_mag_squared, followed by
    integrate_with_decimation, then 10*log10() + K, where K is your
    calibration factor. If you decimate down to 1 Hz, i.e.
    decimation_rate = sample_rate, you can get very precise power
    readings. The accuracy drops off with tone power due to the wideband
    noise, just like a power meter.
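As a rough GR 3.7 Python sketch of that chain (the sample rate, gain,
and k here are placeholder values; the actual .grc example is linked
later in the thread):

    from gnuradio import gr, blocks, uhd

    class channel_power(gr.top_block):
        # USRP -> |x|^2 -> integrate to 1 Hz -> 10*log10()+k -> probe
        def __init__(self, freq, samp_rate=250e3, rx_gain=15.0, k=-34.8):
            gr.top_block.__init__(self)
            src = uhd.usrp_source(device_addr="",
                                  stream_args=uhd.stream_args(cpu_format='fc32'))
            src.set_samp_rate(samp_rate)
            src.set_center_freq(freq)
            src.set_gain(rx_gain)
            mag2 = blocks.complex_to_mag_squared()
            integ = blocks.integrate_ff(int(samp_rate))   # sum one second of samples
            logk = blocks.nlog10_ff(10.0, 1, k)           # k absorbs 1/N, USRP scaling, gain
            self.probe = blocks.probe_signal_f()
            self.connect(src, mag2, integ, logk, self.probe)

Polling self.probe.level() about once a second from the Python main loop
then gives the calibrated channel power directly.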

You would have to look at the USRP FPGA block diagram to find out
exactly what is going on between the input and the FFT, but it is
essentially fine tuning with an NCO then many stages of filtering and
decimation. I'm sure it affects the amplitude slightly as different
filters are used for different decimations, and for odd vs. even
decimation.

  3. Compression is the drop in gain (not power). For example, look at
    this table where the USRP TX gain is stepped in 1 dB increments and
    the output power is measured:

USRP_TX_Gain_dB, Pout_dBm, Gain = Pout_dBm - USRP_TX_Gain_dB
3.0, 8.0, 5.0
4.0, 9.0, 5.0
5.0, 9.9, 4.9
6.0, 10.7, 4.7
7.0, 11.3, 4.3
8.0, 12.0, 4.0 <<-- This is the P1dB
9.0, 12.6, 3.6

The P1dB is where the gain has dropped from 5.0 to 4.0. The P1dB
referenced to the output is 12.0 dBm. The P1dB referenced to the USRP TX
gain setting is 8.0 dB. Notice it is also where the digit after the
decimal point repeats itself if the input is stepped in 1.0 dB
increments; i.e. it went from 5.0 to 4.0. This is the quick-n'-dirty
method of finding P1dB. If the input moves in 1 dB steps, all you need
to monitor is the most significant digit after the decimal point.
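The same bookkeeping in a few lines, using the table above (purely
illustrative):

    # Find P1dB from a (tx_gain_dB, pout_dBm) sweep: the first point where
    # the gain has dropped 1 dB from its small-signal value.
    sweep = [(3.0, 8.0), (4.0, 9.0), (5.0, 9.9), (6.0, 10.7),
             (7.0, 11.3), (8.0, 12.0), (9.0, 12.6)]
    small_signal_gain = sweep[0][1] - sweep[0][0]          # 5.0 dB here
    for tx_gain, pout in sweep:
        if pout - tx_gain <= small_signal_gain - 1.0:
            print("P1dB: %.1f dBm out at %.1f dB TX gain" % (pout, tx_gain))
            break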

  4. I have not done RX testing. My signal generators are non-synthesized
    and have no digital interface. I bought a new one on eBay and it
    should be here next week, but I still need another for a two tone
    test, not to mention the components to achieve proper isolation
    between the two.

Thanks,
Lou
KD4HSO





Hi

Thanks again for your explanations. You were correct about the
measurements I had sent in earlier. I checked with the power meter, and
the output power at 450 MHz @ ampl 0.707 and TX gain of 25 is 11.5 dBm.
I have done the single tone and two-tone tests for the USRP N210+WBX
board at 3 frequencies: 400 MHz, 900 MHz, 1800 MHz. The settings are
similar to yours: ampl 0.707, TX gain varied from 0 to 31.
Could you please check them and let me know if they are fine.

Also, I am trying to find the RX side linearity range. For this I
connected a signal generator to the RX port (RF2) of the USRP N210 and
viewed the received spectrum on UHD_FFT. I have plotted the readings
that I got. I could see that there is a difference of around 35 dB
between the reading on UHD_FFT and that on the spectrum analyzer when
the same input from the signal generator was given to both (using a
power splitter) @ 400 MHz. This changed to ~31 dB at 900 MHz and ~23 dB
at 1.8 GHz. Could you kindly clarify the below points regarding this:

  1. So is it good to assume 35 dB as a calibration factor @ 400 MHz for
    the USRP N210+WBX device? I.e., in case we use the USRP N210+WBX
    device as a receiver for testing, use UHD_FFT to plot the spectrum,
    take the reading of the amplitude in dB and get a value of 25 dB,
    would it be fine to say that 25 - 35 = -10 dBm is the actual power
    received by the USRP, to an extent (not only at the RX port)?
    So can we use the difference as the calibration factor in the linear
    range?

  2. The linked Ettus page says that "When the FFT (default) view is
    used, the x-scale is the frequency, and the y-scale is amplitude. The
    y-scale shows the amplitude with "counts," and the values do not
    typically correlate to a specific, absolute power input. The
    amplitude read on the display is useful for approximate comparisons.
    The level for a given input amplitude will vary a few dB across
    frequency and from unit to unit. Also, receiver daughterboards
    provide various levels of amplification in their analog chains, which
    will affect the amplitude result in the FFT".

Could you briefly clarify what type of processing is done on the
received signal, from the point of reception until the display on
UHD_FFT, in terms of scaling/normalization?

The basic idea is to find a relation between the amplitude value shown
on the UHD_FFT plot and the basic power scale in dBm (a factor to be
added/subtracted or multiplied/divided with the UHD_FFT reading to get
the real power value in dBm).

  3. On further increase of the input power in steps of 1 dBm or
    0.1 dBm, there was a 1 dB drop of power. We assumed this was the 1 dB
    compression point for the receiver side of the USRP N210. Is this
    understanding correct? This comes to around -8 dBm @ 400 MHz, -4 dBm
    @ 900 MHz and ~ +2 dBm @ 1.8 GHz.

  4. I have also tried the two tone test on the USRP N210+WBX device;
    the plots are shown under Q4. The IIP3 point comes to ~4 dBm when the
    factor of 35 is subtracted from the values got from UHD_FFT, and is
    ~25 dB on IIP3 and 50 dBm on OIP3 (which my advisor told me are not
    the normal values to expect). You said you plan to do the one tone
    and two tone tests for the RX side of the B200. Do you find any
    similarity in results? Kindly let me know if I seem to be going wrong
    anywhere.

Kindly Clarify the above points.
I eagerly look forward to your responses.

You need to be careful with the WBX, as input power above -20 dBm could
damage the 2nd stage RX amplifier. Per the data sheets, the absolute
maximum input power of the 2nd stage amp is below the saturated output
power of the 1st stage amp.

1a) Yes, only valid for the linear range.
1b) The device is always going to generate IMD, so it really depends on
how much error you can tolerate. The IMD3 products grow at 3x the rate
of the main tones, so for a slow gain compression, the error added to
the IMD levels won't be much compared with how fast they grow. On my
plots, -30 dBc is where the single tone power (visually) starts to
deviate from linear.

  2. Negative of the difference. The IMD products are always going to be
    negative dBc, where the "c" means referenced to the carrier.

  3. Optional, and tied to the daughterboard, specifically analog
    impairments in the I/Q sections. I don't know the specifics; maybe
    ask on the USRP list. I think each correction file is serialized by
    the daughterboard, which makes sense if you are running several USRPs
    from one host.

Thanks,
Lou
KD4HSO


Thanks for doing the TX side one tone and two tone tests for the USRP
N210. It helps me get a reference.

  1. When I did the test for finding the calibration factor, I found it
    was a constant value until a certain range, and after that it dipped
    (e.g. around -8 dBm input power @ 400 MHz). I imagine this is a point
    near the saturation/compression region of the device. This was
    observed on the 5 USRP N210+WBX devices that I tested.
    a) So I understand that this calibration factor is valid only in the
    linear range. Is that correct?
    b) If so, I should not use the calibration factor to turn the IMD
    products' value on the FFT into dBm, as they occur only when the
    device is operating in the non-linear range.

Please let me know if my understanding is right.

  2. You told me to plot the IMD plot as the difference in power between
    the fundamental tone and the IMD product power at the harmonic
    frequency, i.e. [power @ F1 - power @ (2F1-F2)]. But this is coming
    out as a +ve value for me, while your plot shows the Y axis on a
    negative scale. Would this also be due to the attenuator you are
    using? Or should I consider the negative of the difference?
  3. Also, you have said the UHD calibration routines should be run for
    the N210+WBX:

    • uhd_cal_rx_iq_balance
    • uhd_cal_tx_dc_offset
    • uhd_cal_tx_iq_balance

Are these mandatory or optional? Do they affect the daughterboard or the
motherboard of the device? The devices I use are for common use in a
lab, and hence it is preferable not to change any of the configurations
for a specific use long term. Hence my question.

As per the information given at "USRP Hardware Driver and USRP Manual:
Device Calibration and Frontend Correction", it seems that it is more
related to the WBX board operation. Also, the factors get stored on the
machine the devices are connected to. So in case we want to run the USRP
N210 device with the old configuration, would it be OK to just delete
the files or run it from some other machine? Is this correct?

Can any random number be specified for a serial number if one is not
detected from the device, or is a specific format to be used (i.e. a
minimum number of digits and letters, etc.)?

Kindly clarify the above points.

Thanks again

Regards

Gayathri

http://files.ettus.com/uhd_docs/manual/html/calibration.html

You don’t need to take the FFT to get the power. See my attached
example (in
GR 3.7). Just take the output from the USRP, complex_to_mag_squared,
integration with decimation to 1 Hz, 10*log10() + k, and probe at 1 Hz
rate.

I'm using an N210 + WBX with gain = 15 dB. That gives me a calibration
factor of k = -34.8 dB. I can vary the power of my signal generator from
-50 dBm to -10 dBm with less than 0.1 dB error in the calculated power.

https://dl.dropboxusercontent.com/u/49570443/power_calc.grc

https://dl.dropboxusercontent.com/u/49570443/power_calc.grc.png

Thanks,
Lou
KD4HSO


Hi

Thank you for the previous note.
I removed the FFT block and made my flow graph similar to yours. It
works now :-)

It is the first time I am working with the QT GUI widgets, so kindly
help clear the following doubts.

  1. I do not have the QT_FREQUENCY_GUI_SINK block you have used, only
    the QT_GUI_SINK which gives the frequency spectrum. This is because
    of the lower version (3.6.4.1), I suppose.
    For a USRP N210+WBX board I got the calibration factor to be as
    follows:

    @ 400 MHz:  K = -62.8
    @ 900 MHz:  K = -58.8
    @ 1.8 GHz:  K = -52.5

I tested 3 devices (USRP N210+WBX) and got about the same result. The
gain was set to 0. This does not match the value you had obtained with
your test.

Is there a major difference in the working between QT_Frequency_Sink
and QT_GUI_Sink that would affect the final result? If so, can you
suggest a workaround?

  2. The channel gain value seems to add to the transmitted power from
    the signal generator, so the calibration factor also needs to have
    the channel gain added to its value to get the same result.
    So it seems like the channel gain affects the signal power like an
    LNA and hence increases the strength of the signal with respect to
    the noise. Then how is it different from RX gain? I'm a little
    confused on where the gain parameters actually impact the value of
    the signal.

  3. The QT block also seems to do an internal FFT (default size 256)
    and send out a signal. However this value does not match the value
    seen on the WX FFT GUI (the cursor on the QT sink actually gives a
    value very close to the actual signal generator reading). Is there a
    major difference in the signal processing of the QT widgets as
    compared to the WX widgets? I found some previous threads stating the
    difference at the software level in GRC, not about the way the
    received signal is processed.

  4. Why is there a difference between the channel_power_dBm value and
    the value seen on the spectrum using the cursor (refer to the
    picture)? The cursor shows -81.42 dB @ 0.0 kHz offset from the centre
    frequency, while channel_power_dBm shows a value of -17.5 dBm.

  5. Since this gives the channel power, I suppose it can't be used to
    find the intermodulation products' power with the two tone test. Is
    my understanding correct? Or is there a workaround for that too?

Kindly clarify the above doubts.
Look forward to your response.

Thanks
Gayathri

Hi

Thank you for your note.

I am trying to get the power of the received signal using the blocks you
had suggested previously. I have attached a flow graph snapshot
("uhd_fft_DK.grc.png").

I am using the UHD: USRP Source to receive the signal, which in turn
sends the signal along the following path:
"complex to Mag^2" block --> "integrate and decimate" block -->
"10 log + k" block --> "signal probe" block.

However, there are errors, mainly referring to the incompatible I/O
sizes of the source ("Complex to Mag^2" block) and sink ("Integrate and
Decimate" block). The I/O size of the "Complex to Mag^2" block is said
to be 4096, while the input size of the "Integrate and Decimate" block
is 4 ==> error as not compatible.
Should I change the vector length? I am not quite sure, and the error
propagates to other blocks if I change anything.

Kindly help solve this.

I have also attached another flowgraph (flowgraph screenshot.png) which
works and receives the signal, but it doesn't have the "Integrate and
Decimate" block or the "Log10+K" block (it was developed by someone else
previously in our lab). Now, since the "Log10+K" block is missing, does
it mean the output is given in milliwatts instead of dBm?
I can't seem to use this version as the "probe_signal_vector" block does
not seem to be supported in GRC 3.6.4.1, which is the version on my
system.

I tried using a simple probe_signal block instead but again faced the
I/O size incompatibility error. Is there a workaround for this "probe
signal vector" block?

Also, could you explain briefly how to approach and solve these I/O size
errors?

Look forward to your response

Thanks
Gayathri

  1. I set the gain of the WBX to 15 dB, which is why our values do not
    match. If I use 0 dB WBX gain I get the following for a -40 dBm tone:

    400 MHz, k = -30.6
    900 MHz, k = -25.5
    1.8 GHz, k = -19.2

There should be no computational differences between the GUI and
frequency
sinks.

  2. The channel gain is the RX gain; it's the same thing. The WBX
    receive side has two fixed gain stages followed by 0 - 31.5 dB of
    digital attenuation. That's what you are setting for the gain in UHD;
    a gain of 31.5 dB equates to a 0 dB attenuation setting. Yes, you
    have to offset the computed power by the RX gain if you want the
    reading to stay constant. Note the n*log10()+k block does not allow k
    to be changed in real-time, so you have to multiply each sample by a
    constant. You must do the same if you want to calibrate the spectrum.
    See example here:

https://dl.dropboxusercontent.com/u/49570443/power_calc_2.grc
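The gist of that trick in Python, for what it's worth (the linked .grc is
the actual example; the values and names below are made up):

    from gnuradio import blocks

    # Scale the linear power by a run-time adjustable constant instead of
    # baking the calibration into nlog10_ff's fixed k.
    k_db = -34.8                  # base calibration measured at rx_gain = 15 dB
    base_gain = 15.0

    def cal_mult(rx_gain):
        # +10 dB of RX gain means -10 dB of calibration factor
        return 10.0 ** ((k_db - (rx_gain - base_gain)) / 10.0)

    scale = blocks.multiply_const_ff(cal_mult(15.0))
    logk = blocks.nlog10_ff(10.0, 1, 0.0)      # plain 10*log10(), k left at 0
    # ... integrate_ff -> scale -> logk -> probe ...

    # when the RX gain slider moves, update the multiplier:
    scale.set_k(cal_mult(25.0))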

I have no idea about the differences between the WX and QT frequency
sinks.
They should give identical results with identical settings.

  3. The channel power and spectrum are totally different computations,
    so they must have different calibration factors. See my linked
    example.

  4. For finding intermod levels, you use the same math as for the
    channel power calculation, but narrow the channel with a frequency
    translating filter and decimation (see the sketch below). Either tune
    to each intermod product or have multiple channelizers in parallel.
    This is where it may be more efficient to do the FFT and find the
    peaks, but it's harder to program that.
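One such channelizer branch might look like the following (the filter
bandwidth is an assumption, and 25 kHz is the lower IMD3 product for
tones at 50 and 75 kHz):

    from gnuradio import filter
    from gnuradio.filter import firdes

    samp_rate = 250e3
    chan_bw = 1e3                             # narrow channel around one product
    imd3_offset = 25e3                        # 2*f1 - f2 for tones at 50 and 75 kHz
    decim = int(samp_rate / (4 * chan_bw))

    taps = firdes.low_pass(1.0, samp_rate, chan_bw / 2, chan_bw / 2)
    xlate = filter.freq_xlating_fir_filter_ccf(decim, taps, imd3_offset, samp_rate)
    # then reuse the channel-power chain:
    #   xlate -> complex_to_mag_squared -> integrate_ff -> nlog10_ff(10, 1, k) -> probe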

Thanks,
Lou
KD4HSO


Hello Gayathri,
On 25.08.2014 10:17, Gayathri Ramasubramanian wrote:


> 1. Is it possible that different USRP N210 devices with WBX boards
> have different calibration factors? I set the channel gain to 0 and
> still get: [...] Your values seem to be different, hence the question.

Yes, that will be the case, as with any analog circuitry; amplifiers,
filters and such are produced with tolerances, and thus these values are
expected to change from device to device.

> Is this correct or am I doing something wrong? My values seem to be
> almost 30 ~ 33 higher than the ones you are getting from your tests.
> What could be causing this error/discrepancy?

30 dB higher? Are you sure you are inserting the same power (by the way,
-40 dBm is quite some power and I would generally recommend using an
attenuator to avoid damage if the input power rises, as Lou already
warned about)? Are you integrating over the same number of samples?

As a special note on your measurements: you'll sometimes see LO leakage,
which can contribute significantly to total power. To avoid that, you
could specify a uhd.tune_request(a,b) instead of a simple target
frequency. That would allow you to specify a digital tuning offset,
which would move the LO out of your measured bandwidth.
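For example (gr-uhd Python; the 400 MHz target and 100 kHz LO offset are
just illustrative):

    from gnuradio import uhd

    usrp = uhd.usrp_source(device_addr="",
                           stream_args=uhd.stream_args(cpu_format='fc32'))
    # Tune the DDC to 400 MHz but park the LO 100 kHz away, so the LO
    # leakage spike falls outside the measured bandwidth.
    usrp.set_center_freq(uhd.tune_request(400e6, 100e3))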

Generally, I wonder what you use these values for afterwards, because
they are only valid for the 250 kHz bandwidth as limited by the
anti-aliasing filter in your USRP's FPGA. If you wanted to use this
value for higher sample rates, I'd expect there to be a high level of
proportionality. But since you're doing calibration now, it would make
sense to use a filter on the PC, where you can control the noise
equivalent bandwidth yourself; especially since I assume you're
measuring a single tone, which would fit in the narrowest of bandpass
filters. That would avoid measuring the power of the input tone plus
your noise floor over your complete sampling rate bandwidth.

If you were using a spectrum analyzer, that would actually shift an
analog filter through the spectrum and display the energy passing that
filter at every frequency, which would give you a display quite
different from mag-squared over the full 250 kHz bandwidth of one second
of input samples.

Greetings,
Marcus

  1. What you are doing is correct. What's the power of your signal
    generator? I was using -40 dBm, and as Marcus says, that is strong,
    but you'll need to have a good SNR since the noise bandwidth is
    ~250 kHz. Several posts ago you stated running -8 dBm into the WBX.
    You may have damaged the WBX, since that 30 dB difference is about
    the isolation you may get with a damaged amp. I suggest sniffing down
    the chain of the WBX with a spectrum analyzer to make sure the two
    amps are working. Also, as Marcus said, you need to be using the same
    bandwidth and decimation, or that will change the calibration factor.

I chose 250 kHz since I was working with WBFM channels.

  2. If the RX gain is changed, then k must change to compensate. See
    the *.grc I linked in the last post. I assigned the RX gain to a QT
    slider then added it to K. The RX gain can be changed and the RX
    power stays about the same.

  3. I'm traveling now, so I don't have access to my computer. You ought
    to update to GR 3.7; lots of nice features, and it also preserves the
    flow diagram even if the blocks have changed.

Thanks,
Lou
KD4HSO


Hi
Thank you for your note.
My questions here are just based on my previous measurements and your
last mail. Kindly clarify the same.

  1. Is it possible that different USRP N210 devices with WBX boards
    have different calibration factors? I set the channel gain to 0 and
    still get:

    @ 400 MHz:  k = -62.8
    @ 900 MHz:  k = -58.5
    @ 1800 MHz: k = -52.5

I checked the above with 3 USRP N210+WBX devices and found the k value
to be very close to the stated values. Your values seem to be different,
hence the question.

My method was that I first ran the flow graph with the channel gain as 0
and k as 0, then found the power value displayed for
'channel_power_dBm'. I subtracted this value from my input signal value
(from the signal generator) and took the difference; this is what I am
taking to be the 'k' value. If I then use that k value (62.5 for
400 MHz) in the block, it gives a 'channel_power_dBm' value very close
to the input signal generator value. I repeated this for 3 devices at
the 3 frequencies.

Is this correct or am I doing something wrong? My values seem to be
almost 30 ~ 33 higher than the ones you are getting from your tests.
What could be causing this error/discrepancy?

  2. In your mails you say that we have to subtract the RX gain from the
    calibration factor to baseline it for RX gain = 0 dB. Also, in one of
    your previous mails you had stated that you get k = -34.5 with an RX
    gain of 15. So in view of these two statements, was your 'k' value of
    -34.5 for 1800 MHz?

This 1800 MHz is the only one satisfying this rule with the previous
data set, i.e. -34.5 - 15 = 19.5 (this is close to the 1800 MHz results
you have written about in your last email). I just want to check if my
understanding is correct and I'm doing the math right.

  3. Could you kindly paste a picture of your flow graph for spectrum
    calibration along with the xml file, like last time? Using the xml to
    know which block is specified is a little tough. Having a visual flow
    graph along with the xml would make it easier to understand and
    relate between them.

Please clarify above. Look forward to your response.

Thanks
Gayathri


-8 dBm may, or may not, have damaged either of the two LNA stages on
the WBX.

“Modern” WBXen have TVS and limiter diodes in front of both stages to
try to ameliorate laboratory accident consequences, but they offer no
firm guarantees.

Nearly all of the receivers on USRP daughtercards are designed to be
connected to an antenna, which means a low-noise stage “up front”. This
means that using them in the lab requires some prudence and caution,
such as putting a 30dB attenuator in front of the device before
connecting it to a laboratory signal generator. There are several
orders-of-magnitude difference between the average power levels you’d
expect from an antenna, and what you’d typically be trying in the lab,
so when you’re working in the lab, padding the input with 30dB of
attenuation is just a really prudent practice.

Modern RF LNA transistors are, for the most part, “precious little
princesses”. They have gate regions that are only a few molecules thick,
and “high” currents flowing in these areas will cause holes to form in
the gate layer, which will quickly affect gain and noise figure, and
then lead to complete device failure. You can put clamping diodes in
front (as Ettus now does on its xBX daughtercard series), but such
devices cannot protect in all cases, and they compromise ultimate noise
figure and gain a little bit.

Thus, ahem, endeth the sermon of the day. We turn now to the hymn book
"Glory unto GaAs". :-) :-)
