Tom, Matt, others hacking on the GNU Radio OFDM code:
I believe I’ve found a performance bug in ofdm_sync_pn, the default
OFDM synchronization code, in the trunk Subversion revision as of today.
This bug is present in the unaltered trunk code, but is most visible
when I increase the time between packets at the transmitter to 250 ms
(see benchmark_ofdm_tx.py.diff).
The Schmidl & Cox algorithm computes the correlation between a window
of fft_length/2 received samples and a second window of the same size,
delayed by fft_length/2 samples. It then normalizes (divides) this
correlation by the power of the received samples. The current
normalization at ofdm_sync_pn.py:61 averages power over a window of
only fft_length/2 samples. Because that power window covers only half
of the fft_length-sample span the correlation draws on, at the end of a
packet it leaves the packet’s energy first: the normalization factor
falls while the correlation is still high, and the quotient spikes. The
result is a huge spurious spike of length fft_length/2 and amplitude
about 100 in the timing metric at the end of the packet (see the lower
plot of trunk_ofdm_sync_before.png at t=2.36e5).
As shown in the same plot, the timing metric correctly peaks at the
beginning of the packet, with amplitude almost 1.0.
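For concreteness, here is a rough standalone NumPy sketch of the timing
metric as I understand it (the variable names are mine, not the actual
block structure in ofdm_sync_pn.py; norm_window is the number of power
samples averaged in the denominator, so norm_window = fft_length/2
corresponds in spirit to the current trunk behavior):

import numpy as np

def sc_timing_metric(rx, fft_length, norm_window):
    """Schmidl & Cox timing metric M(d) = |P(d)|^2 / R(d)^2.

    P(d) correlates a window of fft_length/2 samples against the
    window fft_length/2 samples later; R(d) averages the power of
    the most recent norm_window samples, scaled by fft_length/2 so
    that M(d) peaks near 1.0 at a true preamble.
    """
    L = fft_length // 2
    power = np.abs(rx) ** 2
    M = np.empty(len(rx) - 2 * L)
    for d in range(len(M)):
        # Correlation between the two half-symbol windows.
        P = np.sum(np.conj(rx[d:d + L]) * rx[d + L:d + 2 * L])
        # Average power over the most recent norm_window samples.
        R = np.mean(power[d + 2 * L - norm_window:d + 2 * L]) * L
        M[d] = abs(P) ** 2 / max(R, 1e-12) ** 2
    return M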
My proposed change is to compute the normalization factor over a window
of size fft_length instead of fft_length/2 (see ofdm_sync_pn.py.diff).
Since the wider power window spans the full interval the correlation
draws on, the denominator retains the packet’s energy for as long as
the numerator does. This results in the signals shown in
trunk_ofdm_sync_after.png (attached); note that the only spike in the
timing metric is now at the beginning of the packet.
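To see the effect numerically, here is a toy example using the sketch
above: a synthetic packet that is periodic at lag fft_length/2 (so the
metric plateaus at 1.0 for its whole duration, a simplification),
followed by near-silence. This is not the actual benchmark signal, just
an illustration of the mechanism:

fft_length = 64
L = fft_length // 2
rng = np.random.default_rng(0)
half = rng.standard_normal(L) + 1j * rng.standard_normal(L)
packet = np.tile(half, 20)                  # periodic at lag L
silence = 1e-3 * (rng.standard_normal(20 * L)
                  + 1j * rng.standard_normal(20 * L))
rx = np.concatenate([packet, silence])

m_trunk = sc_timing_metric(rx, fft_length, norm_window=L)           # current
m_fixed = sc_timing_metric(rx, fft_length, norm_window=fft_length)  # proposed

print(m_trunk.max())   # >> 1: spurious spike where the packet ends
print(m_fixed.max())   # ~ 1.0: only the legitimate plateau remains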
The code wasn’t broken outright before, but I believe the spurious
end-of-packet spike would cause false peak detections, which might hurt
performance in the face of multiple transmitters and receivers. The
issue is harder to see when the transmitter sends continuously, since
the normalization factor never has a chance to fall.
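Continuing the toy example: any fixed-threshold peak detector (the 0.8
below is arbitrary, not the threshold the trunk detector actually uses)
would fire again past the packet end under the current normalization,
but not under the proposed one:

threshold = 0.8
hits_trunk = np.flatnonzero(m_trunk > threshold)
hits_fixed = np.flatnonzero(m_fixed > threshold)
print(hits_trunk.max() >= len(packet) - L)  # True: last hit is in the end-of-packet spike
print(hits_fixed.max() < len(packet) - L)   # True: no hits in the spike region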
Comments/feedback? Thanks,
Kyle Jamieson