Carrier Frequency

I cannot find in any of the transmission examples for digital
communication where the carrier frequency comes into play. I have
tried to track it down, but cannot figure out where the actual
“carrier modulation” is taking place. Any help?

On Tue, 2009-03-17 at 11:03 -0500, William H. wrote:

I cannot find in any of the transmission examples for digital
communication where the carrier frequency comes into play. I have
tried to track it down, but cannot figure out where the actual
“carrier modulation” is taking place. Any help?

That’s because there aren’t any.

The USRP is usually treated as a “complex baseband, direct conversion”
system. The modulation is created in its baseband representation, with
no carrier component, and sent to the USRP and converted to analog I and
Q signals by the DAC. The analog daughterboards then upconvert these
using a quadrature mixer to a passband frequency.
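As a rough NumPy sketch of what that quadrature mixer does (the sample
rate, carrier offset, and baseband tone below are made up purely for
illustration; in the real USRP this step happens in analog hardware,
not in Python):

    import numpy as np

    fs = 1e6          # sample rate (assumed for the example)
    fc = 100e3        # "carrier" offset, illustration only
    t  = np.arange(1000) / fs

    # complex baseband samples from the modulator (here: a dummy tone)
    baseband = np.exp(2j * np.pi * 5e3 * t)

    # quadrature mixer: passband = I*cos(2*pi*fc*t) - Q*sin(2*pi*fc*t),
    # i.e. the real part of baseband * exp(+j*2*pi*fc*t)
    passband = (baseband.real * np.cos(2 * np.pi * fc * t)
                - baseband.imag * np.sin(2 * np.pi * fc * t))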

Receiving works exactly the same way, in the opposite order.

This is a slightly simplified description, and there are some additional
considerations on the receive side due to frequency and timing offset
between the transmitter and receiver, but that’s the gist of it.

Johnathan

William H. wrote:

I cannot find in any of the transmission examples for digital
communication where the carrier frequency comes into play. I have tried
to track it down, but cannot figure out where the actual “carrier
modulation” is taking place. Any help?

GNU Radio is one of a class of radios known as “zero-IF” or
“direct conversion”. To understand what this means, consider
a classical receiver where a local oscillator (LO) is mixed with
the carrier to form an intermediate frequency (IF) that is
the difference between the LO and the carrier. Now pretend we
slowly change the frequency of the LO to approach that of the
carrier. As we do so, our IF gets lower and lower. When they
are equal, our IF is centered at 0 Hz, with sidebands extending
above and below zero. This is called the baseband signal.
In order to represent both the positive and negative frequency
components, it is necessary to represent both the amplitude and the
phase, hence the need for a quadrature signal with I and Q
components.
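Here is a toy NumPy illustration of that “LO equal to the carrier”
case (the frequencies and the crude moving-average filter are invented
for the example; a real receiver does this in the analog front end and
the FPGA):

    import numpy as np

    fs = 1e6                  # sample rate (assumed)
    f_carrier = 200e3         # carrier of the received signal
    t = np.arange(4000) / fs

    # received passband signal: a tone 10 kHz above the carrier
    rx = np.cos(2 * np.pi * (f_carrier + 10e3) * t)

    # mix with a complex LO at exactly the carrier frequency
    lo = np.exp(-2j * np.pi * f_carrier * t)
    mixed = rx * lo           # produces sum and difference products

    # crude low-pass filter (moving average) keeps the difference term:
    # a complex baseband tone at +10 kHz, i.e. an IF centered on 0 Hz
    baseband = np.convolve(mixed, np.ones(50) / 50, mode='same')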

All this is basic digital signal processing 101. I’m sure any
DSP text can do a better job of explaining it than this.

The USRP source block delivers digital samples that are
already downconverted to a 0 Hz IF. The USRP sink block expects
digital samples that are baseband, centered on 0 Hz. The USRP
hardware contains the necessary down and up converters to receive
or generate the correct carrier (sometimes with the help of a
daughtercard).

As a result of all this, modulation and demodulation do not
necessarily look like you might expect. For example, a simple
amplitude demodulator just computes sqrt(I^2 + Q^2) for each
sample. Similarly, a phase demodulator computes atan(Q/I).
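In NumPy terms, with some made-up samples (np.arctan2 is used in place
of atan(Q/I) so that all four quadrants come out right):

    import numpy as np

    # iq: complex baseband samples (I + jQ), e.g. from the USRP source
    iq = np.array([1 + 1j, -0.5 + 0.2j, 0.3 - 0.9j])   # dummy data

    amplitude = np.abs(iq)                       # sqrt(I^2 + Q^2), AM envelope
    phase     = np.arctan2(iq.imag, iq.real)     # instantaneous phase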

Let’s take a (slightly) more complex example. In QPSK
(Quadrature Phase-Shift Keying), there are two bits encoded
into every “symbol” that is transmitted. The “symbols”
correspond to one of four phases (each 90
degrees apart). For simplicity, let's choose:
00 = 45 degrees
01 = 135 degrees
11 = 225 degrees
10 = 315 degrees

So, in order to modulate a digital bitstream using QPSK
you break the bitstream up into groups of two bits and
set I and Q accordingly:

00 => I=+1 , Q=+1
01 => I=-1 , Q=+1
11 => I=-1 , Q=-1
10 => I=+1 , Q=-1

This gives you a vector of constant amplitude with the desired phase.
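A minimal Python/NumPy sketch of that mapping (this is not one of the
GNU Radio QPSK blocks, just an illustration; the 1/sqrt(2) scaling
makes the symbols unit magnitude):

    import numpy as np

    # Gray-coded map from bit pairs to I/Q, matching the table above
    QPSK_MAP = {
        (0, 0): complex(+1, +1),
        (0, 1): complex(-1, +1),
        (1, 1): complex(-1, -1),
        (1, 0): complex(+1, -1),
    }

    def qpsk_modulate(bits):
        """Map an even-length bit sequence to complex baseband symbols."""
        assert len(bits) % 2 == 0
        symbols = [QPSK_MAP[(bits[i], bits[i + 1])]
                   for i in range(0, len(bits), 2)]
        return np.array(symbols) / np.sqrt(2)

    print(qpsk_modulate([0, 0, 0, 1, 1, 1, 1, 0]))
    # -> [ 0.707+0.707j -0.707+0.707j -0.707-0.707j  0.707-0.707j ]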

Note that nowhere did the carrier frequency enter into
the modulation (or the preceding demodulations).

This is a highly simplified explanation. Find a good DSP
text to learn more.

@(^.^)@ Ed