I’m currently using GnuRadio/GRC/USRP to implement a
proof-of-concept transmitter to demonstrate the
feasibility of using open-source SW and low-cost
HW to build a transceiver that is compatible with
NASA’s Tracking and Data Relay Satellite System (TDRSS).
TDRSS uses Direct Sequence Spread Spectrum (DSSS),
and has extremely tight specs on the chip clock
stability. In the mode that I’m demonstrating,
the chip clock runs at 3077799 Hz and the jitter
is supposed to be 0.01 Hz or less.
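Just to put that spec in fractional terms (my own arithmetic,
not a number from the TDRSS documents):

    chip_rate = 3077799.0     # Hz, chip clock in this mode
    jitter_spec = 0.01        # Hz, allowed deviation as I read the spec

    # Fractional stability the spec implies:
    print("fractional stability: %.2e" % (jitter_spec / chip_rate))

That comes out around 3.2e-9, i.e. a few parts per billion.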
When I attach my transmitter to the TDRSS test set
for validation, the test set’s receiver keeps skirting
the edge of synchronizing with the PN code, but
never correlates long enough for the receiver to
move from “acquisition” to “tracking”.
The carrier frequency, PN sequence, chip rate, and
signal level have all been verified to be correct.
The prevailing opinion in the lab is that the chip
clock must be jittering too much. They further claim
that the jitter is inescapable because I have so few
samples per chip (I’m running at 8e6 samples/sec)
and there isn’t an integer relation between the
chip rate and the sample rate.
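FWIW, I don’t see how a non-integer samples-per-chip ratio by
itself forces cumulative timing error. Here is a quick, purely
illustrative NumPy check (not my actual flowgraph) of a
floor()-style chip clock at these exact rates:

    import numpy as np

    samp_rate = 8e6          # my USRP sample rate
    chip_rate = 3077799.0    # TDRSS chip rate in this mode

    print("samples per chip: %.6f" % (samp_rate / chip_rate))   # ~2.599, non-integer

    # Model a phase-accumulator chip clock: sample n carries chip
    # number floor(n * chip_rate / samp_rate).
    n = 4000000                                   # half a second of samples
    chip_of_sample = np.floor(np.arange(n) * chip_rate / samp_rate)

    # Sample index where each chip actually starts vs. where it
    # ideally should start (in units of samples):
    n_chips = int(chip_of_sample[-1]) + 1
    starts = np.searchsorted(chip_of_sample, np.arange(n_chips))
    ideal = np.arange(n_chips) * samp_rate / chip_rate
    err_ns = (starts - ideal) * (1e9 / samp_rate)

    print("worst-case chip-edge error: %.1f ns" % np.max(np.abs(err_ns)))
    trend = np.polyfit(np.arange(n_chips), err_ns, 1)[0]
    print("edge-error trend: %.3g ns/chip" % trend)

Each chip edge gets quantized onto the 125 ns sample grid, but the
error stays bounded and shows essentially no trend, so the long-term
chip rate comes out exact. That seems like a different animal from
oscillator jitter.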
So, after that long-winded introduction, I have
two questions for the list:
- Is 0.01 Hz jitter at 3.077799 MHz achievable with
  GnuRadio? I’m running 3.1.1 on a MacBook Pro with
  a Core2 Duo processor running at 2.3 GHz.

- Is the lab technician’s explanation of the source
  of the jitter correct? Somehow, it doesn’t seem
  mathematically reasonable. 8 Msamples/sec is about
  2.6 samples per chip, which is more than the Nyquist
  minimum for a 3.077799 Mchip/sec sequence. There
  should be more than enough sample data to accurately
  represent the chipping sequence (quick sanity check
  below).
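Here is the sanity check I mentioned, again purely illustrative
(random +/-1 chips standing in for the real PN code, rectangular
chip shaping, no noise): build the 8 Msps chip waveform, then
despread it against a replica built the same way and see whether
the correlation peak survives 2.6 samples per chip.

    import numpy as np

    samp_rate = 8e6
    chip_rate = 3077799.0
    code_len = 1023                   # stand-in code length, not the real TDRSS PN code

    np.random.seed(0)
    chips = 2 * np.random.randint(0, 2, code_len) - 1      # random +/-1 chips

    # Rectangular-chip baseband sampled at 8 Msps: sample n carries
    # chip floor(n * chip_rate / samp_rate), modulo the code length.
    period = int(np.ceil(code_len * samp_rate / chip_rate))    # ~2660 samples per code period
    n_samp = 4 * period
    idx = (np.floor(np.arange(n_samp) * chip_rate / samp_rate) % code_len).astype(int)
    tx = chips[idx].astype(float)

    # Slide a one-period replica over the signal; the despreading
    # peak should sit at zero lag with small sidelobes.
    ref = tx[:period]
    corr = [np.dot(ref, tx[lag:lag + period]) / period for lag in range(40)]
    peak = int(np.argmax(corr))
    print("peak at lag %d, peak = %.2f, mean |sidelobe| = %.2f"
          % (peak, corr[peak], np.mean(np.abs(np.delete(corr, peak)))))

The peak comes out at zero lag at full amplitude with the sidelobes
well down, which is why the “too few samples per chip” explanation
doesn’t sit right with me.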
@(^.^)@ Ed