Improvement to OFDM receiver synchronization code (ofdm_sync_pn)

Tom, Matt, others hacking on the GNURadio OFDM code:

I believe I’ve found a performance bug in ofdm_sync_pn, the default
OFDM synchronization code, in the trunk Subversion revision as of today.
This bug is present in the unaltered trunk code, but most visible when
I increase (at the transmitter) the time between packets to 250 ms (see
benchmark_ofdm_tx.py.diff).

The Schmidl & Cox algorithm calculates the correlation between a window
of size fft_length/2 received samples and a delayed (by fft_length/2
samples) window of received samples of the same size. Then it
normalizes (divides) this by the power of the received samples. The
current normalization at ofdm_sync_pn.py:61 averages power over a window
of size fft_length/2 samples. This results in a huge spike of length
fft_length/2 and amplitude about 100 in the timing metric at the end of
the packet when the normalization factor falls but the correlation
hasn’t yet (see lower plot of trunk_ofdm_sync_before.png at t=2.36e5).
As shown in the same plot, the timing metric correctly peaks at the
beginning of the packet, with amplitude almost 1.0.
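In rough numpy terms, the metric described above looks like the sketch below. This is not the actual ofdm_sync_pn flowgraph (which builds the metric out of stream blocks, and whose exact window alignment may differ); the function name, the toy signal, and the epsilon guard are mine.

```python
import numpy as np

def schmidl_cox_metric(r, fft_length):
    """Timing metric M(d) = |P(d)|^2 / R(d)^2 over a complex sample stream r."""
    L = fft_length // 2
    M = np.empty(len(r) - fft_length)
    for d in range(len(M)):
        # Correlation between a half-symbol window and its L-sample-delayed copy.
        P = np.sum(np.conj(r[d:d + L]) * r[d + L:d + 2 * L])
        # Trunk-style normalization: power averaged over only fft_length/2 samples.
        R = np.sum(np.abs(r[d + L:d + 2 * L]) ** 2)
        M[d] = np.abs(P) ** 2 / max(R ** 2, 1e-20)  # guard against zero power
    return M

# Toy check: a preamble whose two halves are identical drives M(d) to ~1.0
# exactly at the preamble start, matching the "amplitude almost 1.0" peak.
rng = np.random.default_rng(0)
half = rng.standard_normal(32) + 1j * rng.standard_normal(32)
noise = rng.standard_normal(128) + 1j * rng.standard_normal(128)
r = np.concatenate([noise[:64], half, half, noise[64:]])
M = schmidl_cox_metric(r, 64)
```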

My proposed change is to compute the normalization factor over a window
of size fft_length instead of fft_length/2 (see ofdm_sync_pn.py.diff).
This results in the signals shown in trunk_ofdm_sync_after.png
(attached). Note the only spike in the timing metric now is at the
beginning of the packet.
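To make the effect concrete, here is a toy comparison of the two normalizations; it is a sketch of the idea, not the diff itself, and the signal construction, names, and thresholds are mine. A quiet inter-packet gap after the packet reproduces the spurious end-of-packet spike with the half-window normalization and shows it disappearing with the full-window one:

```python
import numpy as np

def timing_metric(r, fft_length, full_window):
    """Schmidl & Cox style timing metric |P(d)|^2 / R(d)^2."""
    L = fft_length // 2
    M = np.empty(len(r) - fft_length)
    for d in range(len(M)):
        # Correlation of a half-symbol window with the window L samples later.
        P = np.sum(np.conj(r[d:d + L]) * r[d + L:d + 2 * L])
        if full_window:
            # Proposed: power averaged over fft_length samples (both halves).
            R = 0.5 * np.sum(np.abs(r[d:d + 2 * L]) ** 2)
        else:
            # Trunk behavior: power over only fft_length/2 samples.
            R = np.sum(np.abs(r[d + L:d + 2 * L]) ** 2)
        M[d] = np.abs(P) ** 2 / max(R ** 2, 1e-20)
    return M

# A packet (repeated-half preamble + data) surrounded by quiet gaps.
rng = np.random.default_rng(0)
fft_len = 16
half = rng.standard_normal(8) + 1j * rng.standard_normal(8)
data = rng.standard_normal(48) + 1j * rng.standard_normal(48)
gap = 1e-4 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
r = np.concatenate([gap, half, half, data, gap])

M_half = timing_metric(r, fft_len, full_window=False)
M_full = timing_metric(r, fft_len, full_window=True)
# M_half has a huge spurious spike where the power window has fallen into the
# gap but the correlation window still covers signal; M_full peaks only at
# the preamble start (d = 64 in this construction).
```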

The code wasn’t strictly broken before, but I believe the spurious spike
would trigger false peak detections, which could hurt performance in the
face of multiple transmitters and receivers. The issue is harder to see
when the transmitter sends continuously, since the normalization factor
never has a chance to fall.

Comments/feedback? Thanks,
Kyle Jamieson

Hi Kyle,

This has also been proposed by Minn et al. in their paper “On timing
offset estimation for OFDM systems”. They showed that this method also
improves the variance of the estimator.

The modification is quite simple:
R(d) = 0.5 * sum_{m=0}^{fft_length-1} |r(d+m)|^2.

Normally the correlation power should be less than or equal to the
signal power. But if you have power values very close to zero that
can’t be precisely represented at your operands’ bit width, the
computed power may come out smaller than the true value, which yields a
ratio greater than 1.0.
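The bound itself is easy to sanity-check numerically (this check is mine, not from the paper): with the full-window R(d), Cauchy-Schwarz plus the AM-GM inequality give |P(d)| <= R(d), so the metric cannot exceed 1.0 in exact arithmetic:

```python
import numpy as np

# Numeric sanity check: |P|^2 / R^2 <= 1 when R is the full-window power,
# since |P| <= sqrt(sum|a|^2 * sum|b|^2) <= 0.5*(sum|a|^2 + sum|b|^2) = R.
rng = np.random.default_rng(1)
L = 32
worst = 0.0
for _ in range(1000):
    a = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # first half-window
    b = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # second half-window
    P = np.sum(np.conj(a) * b)                                # correlation term
    R = 0.5 * (np.sum(np.abs(a) ** 2) + np.sum(np.abs(b) ** 2))
    worst = max(worst, np.abs(P) ** 2 / R ** 2)
print(worst)  # stays below 1.0
```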

However, we have never experienced such problems with our OFDM system
except for offline debugging scenarios. Over the air, the presence of
noise will probably avoid this problem.

Dominik

Oh, I forgot to mention that both plots attached to my previous mail
were from an over-the-air test at 2.412 GHz using two USRP Rev 4.5
boards with RFX2400 Rev 30 daughterboards and the new antennas.

Thanks for the pointer to the literature!

Kyle
