GPS with DBSRX, Almost There

I’ve hit a wall with using the DBSRX to record GPS L1 C/A code data. The
signal path consists of the following:

Spirent GPS Simulator -> 2 MHz wide SAW @ L1 -> +40 dB Miteq Amp ->
DBSRX -> USRP

Notes/Settings:

  1. Spirent Simulator:
    Static scenario, 39 deg North, -84.866 deg West, 0.0 meter height
    Visible SVs: 9, 17, 21, 8, 23, 1, 3, 31, 29, 25, 5, 30
    C/N0 of all SVs: 50 dB-Hz

  2. SAW Filter
    2 MHz bandwidth
    Center frequency = 1.57542e9 Hz
    0.5-0.8 dB NF

  3. Miteq Amp:
    +40 dB gain
    0.2-0.8 dB NF

  4. DBSRX:
    Target LO frequency: 1.57542e9 - 604000 Hz = 1.574816e9 Hz
    Actual LO frequency: 1.5748125e9 Hz
    Resulting IF frequency: 607500 Hz
    Refclk_divisor = 16
    N = 25197
    R = 64
    GC1 Gain: 30 dB
    GC2 Gain: 0 dB
    Baseband filter 3 dB cutoff: 4 MHz

  5. USRP
    Decimation: 16
    PGA Gain: 0 dB
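For reference, the LO/IF figures in the settings above can be reproduced from the divider values. This is a sketch assuming the usual integer-N relation LO = N * (refclk / R); the relation is an assumption from typical PLL synthesizers, not quoted in the thread:

```python
# Sanity-check of the DBSRX settings listed above.
# Assumed integer-N synthesis: LO = N * (refclk / R).
master_clock = 64e6            # USRP board clock
refclk = master_clock / 16     # Refclk_divisor = 16 -> 4 MHz reference
R = 64                         # reference divider
N = 25197                      # feedback divider
l1 = 1.57542e9                 # GPS L1 carrier frequency

comparison_freq = refclk / R           # 62.5 kHz at the phase detector
actual_lo = N * comparison_freq        # 1.5748125e9 Hz, as listed
resulting_if = l1 - actual_lo          # 607500 Hz, as listed
sample_rate = master_clock / 16        # decimation 16 -> 4 Msps complex
```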

I’ve noticed that the DLL of my software receiver settles to +15 Hz, and
the true IF is +24 kHz from the predicted IF. This would indicate that
the 64 MHz board clock is ~1 kHz from its spec value. This, in itself is
not a problem, but I was wondering if this was within tolerances of the
onboard XO?
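For what it's worth, the arithmetic behind that estimate, assuming the offset scales linearly from the 64 MHz reference:

```python
# Observed +24 kHz IF offset at L1, converted to a clock-error estimate.
l1 = 1.57542e9
if_offset = 24e3                            # Hz, observed at L1

ppm_error = if_offset / l1 * 1e6            # ~15.2 ppm fractional error
clock_error_hz = ppm_error * 1e-6 * 64e6    # ~975 Hz at the 64 MHz clock
```

About 15 ppm, which is what makes the ~1 kHz figure plausible against a 50 ppm XO spec.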

The real problem lies in the fact that the carrier tracking loop (a 3rd
order PLL) of my software receiver cannot achieve phase lock. The phase
jitter looks high, and the LO frequency drifts so much it dominates over
the Doppler derived from satellite motion.

If anyone would like any GPS IF data I would be happy to email it to
your personal email address (indicate how many seconds of data you would
like). Thanks!

On 3/6/07, Gregory W Heckler [email protected] wrote:

I’ve noticed that the DLL of my software receiver settles to +15 Hz, and
the true IF is +24 kHz from the predicted IF. This would indicate that
the 64 MHz board clock is ~1 kHz from its spec value. This, in itself is
not a problem, but I was wondering if this was within tolerances of the
onboard XO?

I believe the XO on the USRP has a 50 ppm spec, so 1 kHz is well within
that.

The real problem lies in the fact that the carrier tracking loop (a 3rd
order PLL) of my software receiver cannot achieve phase lock. The phase
jitter looks high, and the LO frequency drifts so much it dominates over
the Doppler derived from satellite motion.

If anyone would like any GPS IF data I would be happy to email it to
your personal email address (indicate how many seconds of data you would
like). Thanks!

On a side note, have you seen that SiGe device available from Spark
Fun Electronics? I posted earlier today about it. It might be of
interest.

http://www.sparkfun.com/commerce/product_info.php?products_id=8238#

Brian

Brian:

From what I see it does not continuously stream data, which is a
requirement for my needs. Additionally I am looking at recording GPS L2C
and the new Galileo frequencies, so a tuneable front end is a must.

Greg

2007/3/6, Gregory W Heckler [email protected]:

If anyone would like any GPS IF data I would be happy to email it to
your personal email address (indicate how many seconds of data you would
like). Thanks!

I am very interested to hear more about your experience with GNU Radio
and GPS! I am creating a similar application, and wonder what your
goals are. Are you creating a complete receiver in GNU Radio, or are you
using an existing software receiver?

Feel free to contact me off list if you want to discuss more.

Cheers,


Trond D.

On Tue, Mar 06, 2007 at 04:57:55PM -0500, Gregory W Heckler wrote:

Brian:

From what I see it does not continuously stream data, which is a
requirement for my needs. Additionally I am looking at recording GPS L2C
and the new Galileo frequencies, so a tuneable front end is a must.

I’m pretty sure that it’s capable of continuously streaming data.
It’s basically a GPS front end chip connected to an FX2.
It might need a software tweak or two, but if they started with the
USRP code, it’ll stream by default.

Eric

Gregory W Heckler wrote:

The real problem lies in the fact that the carrier tracking loop (a
3rd order PLL) of my software receiver cannot achieve phase lock. The
phase jitter looks high, and the LO frequency drifts so much it
dominates over the Doppler derived from satellite motion.

Yes, it looks like my dbs_rx is wandering 10 Hz or so over timescales
of a few seconds. A second-order loop with bandwidth of about 20 Hz
seems to track all this out, but at the expense of noise—it would be
nice to use a smaller bandwidth for a stationary receiver.
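A minimal sketch of the kind of second-order loop described here, run on a synthetic tone at one sample per millisecond (the loop-gain formulas are textbook values, not Peter's actual code):

```python
import cmath
import math

def pll_track(samples, fs=1000.0, bn=20.0, zeta=0.707):
    # Second-order PLL: atan phase detector, proportional + integral
    # loop filter, NCO. bn is the loop noise bandwidth in Hz.
    wn = 2.0 * bn / (zeta + 1.0 / (4.0 * zeta))   # natural frequency, rad/s
    t = 1.0 / fs
    alpha = 2.0 * zeta * wn * t                   # proportional gain
    beta = (wn * t) ** 2                          # integrator gain
    phase, freq = 0.0, 0.0                        # NCO state (rad, rad/sample)
    errors = []
    for x in samples:
        err = cmath.phase(x * cmath.exp(-1j * phase))  # wrapped phase error
        freq += beta * err
        phase += freq + alpha * err
        errors.append(err)
    return errors

# 4 seconds of a tone 10 Hz off nominal, like the LO wander above.
fs = 1000.0
tone = [cmath.exp(1j * 2 * math.pi * 10.0 * n / fs) for n in range(4000)]
residual = pll_track(tone, fs=fs)
```

With a 20 Hz bandwidth the loop pulls in the 10 Hz offset quickly and drives the steady-state phase error to zero; narrowing bn trades that drift tolerance for lower noise, which is exactly the tension described above.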

Below are plots of 4 seconds of an actual dbs_rx recording (one point
per millisecond). The first plot is unwrapped phase (y-axis in radians)
and the second is the demodulated data after the PLL (also attached as
an Octave/MATLAB file).

So things look reasonably okay for strong L1 signals, but there may
be limits to how far the dbs_rx can be pushed for weak signals, if
cycle slips are at all important. Maybe the strong PRNs can aid the
weak ones, since the LO jitter is common to all.

If anyone would like any GPS IF data I would be happy to email it to
your personal email address (indicate how many seconds of data you
would like). Thanks!

I could take a look if you like—two seconds perhaps?

Cheers,
Peter M.

Martin D. wrote:

Maybe you could inject a stable frequency near the wanted RX frequency,
say a few MHz away from the 1.57542e9 Hz you want to receive.
Then you could use this in the output to remove the jitter and LO drift.

For example:
inject 1.6 GHz (= the 25th harmonic of 64 MHz) at the input (after the SAW filter).
This will result in 24.58 MHz + jitter + LO drift at the input of the USRP (after the dbs_rx downmix).

I wonder if any 25th harmonic is already in the signal, by virtue
of leakage within the USRP itself? That would be great.

Cheers,
Peter M.

You have not read or internalized the specifications for the oscillator
on the USRP, which is intimately involved in this system. It is 50 ppm
accuracy, which is bad enough, but look at the can: it is begging to
have thermal variances. Start up the USRP and your process, investigate
Newton’s Law of Cooling (blow on the oscillator), and watch
everything dance! I stopped working on GPS until I could come up with a
replacement. In my professional applications for the USRP, I am
replacing the oscillator with an external stabilized injection.

Bob


Discuss-gnuradio mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/discuss-gnuradio


AMSAT Director and VP Engineering. Member: ARRL, AMSAT-DL,
TAPR, Packrats, NJQRP, QRP ARCI, QCWA, FRC. ARRL SDR WG Chair
“Taking fun as simply fun and earnestness in earnest shows
how thoroughly thou none of the two discernest.” - Piet Hein

Peter M. wrote:

This will result in 24.58 MHz + jitter + LO drift at the input of the
USRP (after the dbs_rx downmix)

I wonder if any 25th harmonic is already in the signal, by virtue
of leakage within the USRP itself? That would be great.

The 25th harmonic will drift just as much as your signal, so it won’t
help. Martin’s suggestion would only work with a signal with better
stability. Of course, you already have some of those – the stronger
satellites.

Matt

Gregory W Heckler wrote:

Visible SVs: 9, 17, 21, 8, 23, 1, 3, 31, 29, 25, 5, 30

The real problem lies in the fact that the carrier tracking loop (a 3rd
order PLL) of my software receiver cannot achieve phase lock. The phase
jitter looks high, and the LO frequency drifts so much it dominates over
the Doppler derived from satellite motion.

Maybe you could inject a stable frequency near the wanted RX frequency,
say a few MHz away from the 1.57542e9 Hz you want to receive.
Then you could use this in the output to remove the jitter and LO drift.

For example:
inject 1.6 GHz (= the 25th harmonic of 64 MHz) at the input (after the
SAW filter).
This will result in 24.58 MHz + jitter + LO drift at the input of the
USRP (after the dbs_rx downmix).
Use a second channel in the USRP to get this 24.58 MHz to the host at
around 0 Hz + jitter + LO drift.
Mix (multiply) the conjugate of this with the actual signal on channel 0
to remove the jitter and LO drift.

You might need to low-pass filter it first, or even use a second PLL.

If the stability of the 64 MHz clock of the USRP is the problem, then you
need an external stable source.
If only the jitter of the dbs_rx is the problem, then you can use the
24th or 25th harmonic of the 64 MHz USRP clock.

Greetings,
Martin
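A toy sketch of the conjugate-mix idea Martin describes, with made-up numbers (a 50 Hz "signal" tone and a random-walk LO phase noise stand in for the real front end):

```python
import cmath
import math
import random

random.seed(0)
n = 1000
jitter = [0.0]
for _ in range(n - 1):             # random-walk phase noise from the LO
    jitter.append(jitter[-1] + random.gauss(0.0, 0.02))

# Channel 0: wanted signal (a 50 Hz tone at 1 kSPS) plus the LO jitter.
sig = [cmath.exp(1j * (2 * math.pi * 50 * k / 1000.0 + jitter[k]))
       for k in range(n)]
# Channel 1: the injected pilot after downmix, near 0 Hz, same jitter.
pilot = [cmath.exp(1j * jitter[k]) for k in range(n)]

# Multiply by the conjugate of the pilot: the common jitter cancels.
clean = [s * p.conjugate() for s, p in zip(sig, pilot)]
```

Because the jitter term is identical on both channels, the residual phase error after the conjugate mix is zero up to rounding; in practice the pilot channel would need filtering first, as noted above.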

I measured the phase noise of the 64 MHz board clock. Looking at the
result, I doubt the board clock is producing the phase noise I am seeing
in my receiver.

Gregory W Heckler wrote:

[email protected]
http://lists.gnu.org/mailman/listinfo/discuss-gnuradio

Hmmm. The PLL on the MAX2118 downconverter has typical phase-noise
specs of around -80 dBc/Hz at 10 kHz offset. Below 1 kHz offset, the
phase noise increases to about -55 dBc/Hz, which is about 25 dB worse
than the xtal oscillator. According to the datasheet, anyway…


Marcus L. Mail: Dept 1A12, M/S: 04352P16
Security Standards Advisor Phone: (ESN) 393-9145 +1 613 763 9145
Strategic Standards
Nortel Networks [email protected]

Robert McGwier wrote:

There is a multiplier circuit/ PLL in the DBS-RX. Whatever phase
noise is coming from the oscillator is being multiplied considerably
by this upconversion to be used at LO in the DBS-RX. You cannot get
low phase noise oscillators and high performance mixers in that small
a package. Together these considerations imply more power and
territory and thermal control than is available with the DBS-RX.

Bob

Indeed, the target application space of the MAX2118 (satellite TV), upon
which the DBS_RX is based, has relatively sloppy phase-noise
requirements.


Marcus L. Mail: Dept 1A12, M/S: 04352P16
Security Standards Advisor Phone: (ESN) 393-9145 +1 613 763 9145
Strategic Standards
Nortel Networks [email protected]

I’ve noticed that the DLL of my software receiver settles to +15 Hz,
and the true IF is +24 kHz from the predicted IF. This would indicate
that the 64 MHz board clock is ~1 kHz from its spec value. This, in
itself is not a problem, but I was wondering if this was within
tolerances of the onboard XO?

24 kHz off at 1575 MHz would be 15 ppm, which is within the spec of
50 ppm.

The real problem lies in the fact that the carrier tracking loop (a
3rd order PLL) of my software receiver cannot achieve phase lock. The
phase jitter looks high, and the LO frequency drifts so much it
dominates over the Doppler derived from satellite motion.

Yes, the clock does have a lot of drift relative to the Doppler change
rate.

To all concerned parties:

I think I’ve discovered the problem. My “tune” routine chose the R and N
dividers to minimize the difference between the commanded and desired LO
frequencies. For L1 this ended up being 64 and 25197. The refclk was set
at 4 MHz, producing an R divider frequency of 62500 Hz. For a sanity
check I enabled the debug output of the Max2118 chip, in order to probe
the comparison frequency on the CNTOUT pin. Probing the pin produced a
nice square wave at 31250 Hz. The inconsistency bothered me, so I
double-checked that my driver was writing the correct values to the
Max2118 registers; it checked out OK. On a lark I decided to sacrifice
the LO error for a greater comparator frequency. After changing the R
value to 8 and N to 3150 I recorded some new data and crunched it with
my software receiver. To my amazement the phase jitter went away. So,
two questions:

  1. Why the inconsistency with the R divider debug output on the CNTOUT
    pin? An R divider value of 8 produced a square wave at 250 kHz, again 2x
    lower than the values of R and refclk should produce. I am positive the
    value being written to the register complies with the spec sheet (R =
    2*2^(R_register)).

  2. The Max2118 spec sheet quotes an XTAL input from 4 to 27 MHz.
    Obviously the low comparison frequency was affecting the stability of
    the PLL in the Max2118, so why did the Python DB-SRX driver default the
    refclk divisor to 16, placing the refclk at 4 MHz, rather than placing
    the refclk frequency towards the middle (16 MHz) of the spec? Was this
    done for some other technical reason?
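The comparator arithmetic behind the two tunes, for anyone following along (the factor-of-2 CNTOUT reading is taken from the observations above, not from the datasheet):

```python
refclk = 64e6 / 16                 # 4 MHz reference into the Max2118

# Original tune: R = 64, N = 25197
f_comp_old = refclk / 64           # 62.5 kHz comparison frequency
cntout_old = f_comp_old / 2        # 31.25 kHz, the probed square wave

# After the fix: R = 8, N = 3150
f_comp_new = refclk / 8            # 500 kHz comparison frequency
cntout_new = f_comp_new / 2        # 250 kHz, again 2x below f_comp
lo_new = 3150 * f_comp_new         # 1.575e9 Hz, moving the IF to 420 kHz
```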

I will try to quantify the C/N0 loss tomorrow. Additionally, I’d like to
thank everyone very much for the help that has been given to date.

-Greg Heckler

There is a multiplier circuit/ PLL in the DBS-RX. Whatever phase noise
is coming from the oscillator is being multiplied considerably by this
upconversion to be used at LO in the DBS-RX. You cannot get low phase
noise oscillators and high performance mixers in that small a package.
Together these considerations imply more power and territory and thermal
control than is available with the DBS-RX.

Bob

Marcus L. wrote:

Gregory W Heckler wrote:

I measured the phase noise of the 64 MHz board clock. Looking at the
result, I doubt the board clock is producing the phase noise I am
seeing in my receiver.


AMSAT Director and VP Engineering. Member: ARRL, AMSAT-DL,
TAPR, Packrats, NJQRP, QRP ARCI, QCWA, FRC. ARRL SDR WG Chair
“Taking fun as simply fun and earnestness in earnest shows
how thoroughly thou none of the two discernest.” - Piet Hein

Gregory W Heckler wrote:

the Max2118 registers, it checked out as ok. On a lark I decided to
sacrifice the LO error for a greater comparator frequency. After
changing the R value to 8 and N to 3150 I recorded some new data and
crunched it with my software receiver. To my amazement the phase
jitter went away. So, two questions:

  1. Why the inconsistency with the R divider debug output on the CNTOUT
    pin? An R divider value of 8 produced a square wave at 250 kHz, again
    2x lower than the values of R and refclk should produce. I am positive
    the value being written to the register complies with the spec sheet
    (R = 2*2^(R_register)).

It is a long time ago that I developed this code, but I seem to recall
seeing the same discrepancy at the time. The center frequency comes out
right, though.

  2. The Max2118 spec sheet quotes an XTAL input from 4 to 27 MHz.
    Obviously the low comparison frequency was affecting the stability of
    the PLL in the Max2118, so why did the Python DB-SRX driver default the
    refclk divisor to 16, placing the refclk at 4 MHz, rather than placing
    the refclk frequency towards the middle (16 MHz) of the spec? Was this
    done for some other technical reason?

The maximum compare frequency of the DBSRX PLL is 2 MHz, and the minimum
divide ratio is 2. So no matter what we put in for the reference, it
will have to be divided down to 2 MHz or lower. However, if we do as
much of the divide as possible in the FPGA, we can give a phase-matched
signal to 2 DBSRX boards on the same USRP. This is useful in phased
arrays.

For example, if we give 2 DBSRXs the same 16 MHz reference, and they
each have to divide by 8, then there is an 8-way phase ambiguity between
them. However, if we give them a matched 4 MHz reference and they both
divide by 2, then there is only a 2-way phase ambiguity, which is much
easier to resolve.
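In other words, the ambiguity count is just the divide ratio: each board's divider can start on any of K reference edges, so the relative phase is known only modulo 360/K degrees. A trivial illustration:

```python
def ambiguity_spacing_deg(k):
    # K-way phase ambiguity between two boards dividing a shared
    # reference by K: candidate offsets are spaced 360/K degrees apart.
    return 360.0 / k

# divide-by-8 (16 MHz ref) vs divide-by-2 (4 MHz ref), as in the example:
spacing_div8 = ambiguity_spacing_deg(8)   # 45 degrees, 8 candidates
spacing_div2 = ambiguity_spacing_deg(2)   # 180 degrees, only 2 candidates
```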

If you aren’t doing phased arrays, this doesn’t matter.

I will try to quantify the C/N0 loss tomorrow. Additionally, I’d like
to thank everyone very much for the help that has been given to date.

Thanks for posting your data.

Matt

Gregory W Heckler wrote:

the Max2118 registers, it checked out as ok. On a lark I decided to
sacrifice the LO error for a greater comparator frequency. After
changing the R value to 8 and N to 3150 I recorded some new data and
crunched it with my software receiver. To my amazement the phase
jitter went away. So, two questions:

One other observation –

If you properly tune the loop filter in both cases, there should be an
18dB phase noise difference (div by 8 better than div by 64) inside the
loop bandwidth for the 2 cases you list. Since we aren’t changing the
loop filters (they are passive components on the board), it can be even
worse. The components on the board are optimized for a 1 MHz compare
frequency.

These 2 factors can easily account for the performance differences you
are seeing, and going to a div-by-4 would probably improve it more.
There is very little cost to using a very coarse frequency step in the
PLL, since we have very fine tuning capability (~14 millihertz) in the
digital downconverter. Thus, tuning the LO exactly on-frequency is of
no benefit, and actually makes things worse. This is why we use 4 or 8
MHz steps in the RFX-series boards.
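Both numbers check out. The tuning-step figure below assumes a 32-bit phase accumulator in the digital downconverter, which is an assumption, not something stated here:

```python
import math

# Reference phase noise is multiplied by the feedback divide ratio, so
# div-by-64 vs div-by-8 differ by 20*log10(64/8) inside the loop bandwidth.
penalty_db = 20 * math.log10(64 / 8)   # ~18.06 dB

# DDC tuning step for an assumed 32-bit accumulator clocked at 64 MHz.
ddc_step_hz = 64e6 / 2 ** 32           # ~0.0149 Hz, the ~14 mHz figure
```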

Matt

Attached is a plot of the C/N0 estimates derived from the I & Q
correlations from my software receiver. All SVs were set at a C/N0 of 50
dB-Hz in the simulation. As you can see they all converge (after the
pull in period) to a C/N0 in the range of 46-47 dB-Hz. Decreasing the
input C/N0 to 45 dB-Hz resulted in an estimated C/N0 of 44 dB-Hz.
Similarly, a C/N0 of 40 dB-Hz on the simulator produced an estimated
C/N0 of approximately 40 dB-Hz. The test configuration was the
following:

Spirent GPS Simulator -> +40 dB Miteq Amp -> DB-SRX -> USRP

The DB-SRX was set to have 30 dB of RF gain and 30 dB of IF gain. I ran
the same test with an external OCXO as the board clock and achieved the
same results. I will make sure my C++ driver for the DB-SRX guarantees
that the Max2118 comparator frequency is at least 250 kHz, then clean up
the code and submit it.
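For comparison, a common moments-based C/N0 estimator over 1-ms coherent correlations looks like this (a generic sketch; the receiver above may well use a different estimator):

```python
import math
import random

def cn0_moments(iq, t_coh=1e-3):
    # Moments-method C/N0 estimate from coherent correlator outputs:
    # second and fourth moments separate signal power from noise power.
    m2 = sum(abs(z) ** 2 for z in iq) / len(iq)
    m4 = sum(abs(z) ** 4 for z in iq) / len(iq)
    p_signal = math.sqrt(max(2 * m2 ** 2 - m4, 0.0))
    p_noise = max(m2 - p_signal, 1e-12)
    return 10 * math.log10(p_signal / (p_noise * t_coh))

# Synthetic 1-ms correlations: unit signal plus complex Gaussian noise,
# scaled so the true C/N0 is ~47 dB-Hz.
random.seed(1)
iq = [complex(1 + random.gauss(0, 0.1), random.gauss(0, 0.1))
      for _ in range(4000)]
estimate = cn0_moments(iq)
```

On this synthetic input the estimate comes out near 47 dB-Hz, the same ballpark as the converged values reported above.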
