Channel Response in OFDM with MIMO USRP

Hello -

I’ve created a MIMO setup connecting two USRPs using an external 10 MHz clock
reference and a 1-PPS input. However, when I transmit packets between them, I
see a phase rotation across the subcarriers. I’ve attached an image
illustrating this. The x-axis is the subcarrier index (80 subcarriers = 1 OFDM
symbol) and the y-axis is the unwrapped channel phase.

I’m trying to understand why this would happen. As far as I know, only a
sampling offset should cause this, but since the two USRPs now share the
external clock, can this still be a problem? Or could there be another reason
why I see this happening? The slope of this channel phase remains seemingly
consistent across all packets. I’m trying to achieve a scenario where I see a
constant phase across all subcarriers.

Using USRP N200s, rev 4.

Thanks.

On 13.01.13 08:04, Unforgiven11 dreams wrote:

As far as I know only sampling offset should be the reason for the
same, but since these 2 USRPs now share the external clock, can this
still be a problem?

10 MHz/PPS syncing seems to work; otherwise you wouldn’t see the slope stay
identical over all frames.

I think you are right: timing offset. Maybe it’s the total delay between those
two synced baseband-in to baseband-out reference planes. So some basic “timing
synchronization” is still needed every time after you have modified the
“channel”.

IIRC, phaseshift(n) = delta * 2*pi * n / N_subcarrier

The delta seems to be around 0.25 in your plot, so maybe offsetting your RX
FFT window by 80 * 0.25 = 20 samples helps reduce the slope.
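That relation is easy to check numerically. A minimal sketch in plain NumPy
with hypothetical values (the delay is applied via the DFT shift theorem, so
it can be fractional):

```python
import numpy as np

N = 80        # subcarriers per OFDM symbol (from the plot)
d = 0.25      # hypothetical delay in samples; fractional values are fine

rng = np.random.default_rng(0)
X = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # unit-magnitude "pilot" symbols

# Delay the time-domain symbol by d samples via the DFT shift theorem
k = np.fft.fftfreq(N) * N                      # subcarrier indices 0..39, -40..-1
x_delayed = np.fft.ifft(X * np.exp(-2j * np.pi * k * d / N))

# Estimated channel = RX / TX per subcarrier; its phase is linear in k
H = np.fft.fft(x_delayed) / X
slope = np.angle(H[1] / H[0])                  # phase increment per subcarrier
print(slope, -2 * np.pi * d / N)               # both ~ -0.0196 rad
```

Depending on whether the 0.25 is read as radians per subcarrier or as a
fraction of 2*pi over the whole symbol, the implied delay differs, so it is
worth double-checking the units before shifting the window.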

-paul

Thanks for the reply.

Based on your suggestion and some reading, could it be due to a ‘fractional’
(non-integer) timing offset? At first glance it appears that the preamble
detection was delayed, since any delay in the time domain results in a phase
shift in the frequency domain (which is reflected in the channel response
calculated from the preamble). So I just ‘pre-poned’ the preamble detection by
1 sample, and interestingly the slope just inverted. On the other hand, if I
‘post-pone’ it by 1 sample, the slope increases. So could it be that
0 < offset < 1?

PS: If I offset the detection by ~20 samples, the resulting signal after
acquisition is messed up. Demodulation, which previously worked correctly,
fails after this large an offset. The FFT happens after the preamble
detection, and the detection appears accurate, since the interval between
consecutive preamble spikes (packets) corresponds to the interval I maintain
at the transmitter.

Further, I thought that having a common clock reference/PPS should sync up the
DAC and ADC of the sender and the receiver? Is that not a correct assumption?
Or could this small a timing offset result from the inaccuracy of the
clock-ref/PPS?

-jack

On 13.01.13 19:33, Unforgiven11 dreams wrote:

Based on your suggestion and some reading, could it be due to the
‘fractional’ timing offset or not ‘integer’ timing offset. On the first
glance it appears that the preamble detection was delayed - since any
delay in the time domain results in the phase shift in the frequency
domain (which is reflected in the channel response calculated from the
preamble). So, I just ‘pre-pone’ the preamble detection by 1 sample, and
interestingly the slope just inverted. On the other hand if I
‘post-pone’ it by 1 sample, the slope increased. So could it be that the
0<offset<1?

That sounds reasonable; the value I quickly calculated seemed much too big to
me too. What I do know is that the relation between delay and linear phase is
very simple. Look it up in a textbook and calculate the time that corresponds
to the slope you showed earlier. It should then be smaller than the sampling
clock interval.
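Concretely, inverting phase(n) = 2*pi*d*n/N gives the offset directly from
the fitted slope. A back-of-the-envelope sketch (the slope and sample rate
below are hypothetical stand-ins; substitute the slope fitted to your channel
estimates and your actual rate):

```python
import numpy as np

N = 80                 # subcarriers per OFDM symbol
fs = 25e6              # assumed sample rate in S/s; substitute your actual rate

# Hypothetical per-subcarrier slope read off the unwrapped-phase plot (rad)
fitted_slope = 0.02

offset_samples = fitted_slope * N / (2 * np.pi)  # invert phase(n) = 2*pi*d*n/N
offset_seconds = offset_samples / fs
print(offset_samples)   # ~0.25 samples, i.e. a sub-sample offset
print(offset_seconds)   # ~10 ns at 25 Msps, well below one sample interval
```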

I didn’t realize that you already do preamble detection; I thought since you
are using the PPS, all your samples are timestamped and you basically know
when the OFDM symbol starts. Are you transmitting frames with one preamble and
multiple OFDM symbols each?

The ±1-sample slope inversion means, to me, that your pre-FFT timing
synchronization is already working and your FFT window is placed correctly.
That would mean the estimates are indeed correct and reflect the
characteristics of the channel - i.e. all the stuff between DAC and ADC ;) -
and those introduce some (albeit small) delay.

-paul


I’ve only been half paying attention to this thread.

But two frac-N synthesizers, even when fed with a phase-coherent reference
clock, will have some random phase offset between them every time they’re
tuned. With newer UHD versions on the N2xx-series machines and the SBX
daughtercard, you can get around this with the special phase-alignment mode,
using timed commands for tuning. But that isn’t a particularly realistic
scenario for real-world communications apps.

In the real world, the TX and RX will not be phase coherent in any way.


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium
http://www.sbrac.org

I see.
Well, I’m running the regular OFDM RX chain, which includes preamble
detection -> sampler -> FFT -> and so on. On the transmitter, each frame
(preamble [1 OFDM symbol] + data [multiple OFDM symbols]) acts as a burst
which includes the required timestamps and the burst tags.

I was just hoping to get a flat channel for some analysis, which is otherwise
a pain since I have to do it without channel equalization. This phase change
results in symbols rotating in the I/Q domain. :)

-jack

On 13.01.13 22:00, Jack M wrote:

Well, I’m running the regular OFDM rx-chain which includes preamble
detection->sampler->fft->so on. On the transmitter, each frame
(preamble[1 OFDM symbol] + data[multiple OFDM symbols]) acts as a burst
which includes the required timestamps and the burst tags.

Nice, I’d be interested in what your chain looks like in GNU Radio. I’m in the
middle of building a good 802.11n RX chain, but it’s only offline so far: a
simple looped sequential recipe in MATLAB that takes synced baseband blocks as
input and tries to decode all frames it finds in them.

I was just somehow hoping to get a flat channel for some analysis which
otherwise is a pain, since I have to do it without channel equalization.
This phase change results in symbols rotating in the I/Q domain. :slight_smile:

Your channel is super flat and well-behaved. But I don’t think there is a way
to avoid a post-FFT multiplication with a complex vector that has the
appropriate linear phase progression over the subcarriers. I guess one could
call it manual fine-timing compensation, or some sort of “static channel phase
equalization” ;)
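For what it’s worth, that static correction is a one-liner per OFDM symbol
once the slope has been fitted. A minimal NumPy sketch (the slope value and
the flat test symbols are hypothetical stand-ins):

```python
import numpy as np

N = 80
fitted_slope = 0.02                      # fitted phase increment per subcarrier (rad)

n = np.arange(N)
eq = np.exp(-1j * fitted_slope * n)      # conjugate linear-phase ramp

# Demonstrate on synthetic FFT outputs that carry exactly that tilt
tx = np.ones(N, dtype=complex)           # stand-in for known flat symbols
rx = tx * np.exp(1j * fitted_slope * n)  # "channel" with linear phase
rx_eq = rx * eq                          # static channel phase equalization
print(np.max(np.abs(np.angle(rx_eq))))   # ~0: constellation no longer rotates
```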

-paul
