Question about Frequency Selective Fading Model

Dear All,

I have an OFDM TX block connected to an OFDM RX block through a ‘Frequency
Selective Fading Model’ block, implemented in GRC. The parameters used in
the model are (a minimal Python sketch of this setup follows the list):

Num sinusoids (for sum of sinusoids): 8
Max Doppler: 0
LOS Model: NLOS
PDP Delays: (0, 1.6, 2.8)
PDP Magnitudes: (0.4, 0.4, 0.2)
Num Taps: 100
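
For concreteness, here is roughly the equivalent Python instantiation. The
argument order is my reading of the gr-channels selective_fading_model API,
so treat this as a sketch rather than gospel:

    from gnuradio import channels

    # Sketch of the channel block as configured above (gr-channels).
    # Assumed argument order: num sinusoids, normalized max Doppler fD*Ts,
    # LOS flag, Rician K factor, seed, PDP delays, PDP magnitudes, ntaps.
    chan = channels.selective_fading_model(
        8,                # num sinusoids (sum-of-sinusoids method)
        0.0,              # max Doppler (fD*Ts): statistically static channel
        False,            # LOS model: False = NLOS (Rayleigh)
        4.0,              # Rician K factor (ignored for NLOS)
        0,                # seed
        (0.0, 1.6, 2.8),  # PDP delays in samples (note the fractional values)
        (0.4, 0.4, 0.2),  # PDP magnitudes
        100,              # num taps of the fading/interpolating filter
    )
    # chan sits between the OFDM TX and OFDM RX hier blocks.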

When a packet is received by the channel payload equalizer block, I simply
print out the “Initial Taps”, which are the initial channel estimates, and
plot the channel as a function of subcarrier index. I observe that at
subcarrier 0 (the midpoint, since subcarrier indices are centered around
0), there is a large phase shift. In my experiment, the fft_len is 64, used
as follows (a Python form of this allocation follows the list):

Subcarrier (-32 through -27): Unused
Subcarrier (-26 through -1) : Carry data/pilots
Subcarrier 0 : Unused
Subcarrier (+1 through +26) : Carry data/pilots
Subcarrier (+27 through +31): Unused
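
Expressed as Python for the OFDM TX/RX blocks, the allocation is roughly
the following. The pilot positions shown are illustrative, not necessarily
the exact set I use:

    # Illustrative carrier map for fft_len = 64 (indices -32..31):
    # DC and the band edges are unused; the rest carry data or pilots.
    fft_len = 64
    pilot_carriers = ((-21, -7, 7, 21),)  # example 802.11a-style pilots
    occupied_carriers = (tuple(
        k for k in range(-26, 27)
        if k != 0 and k not in pilot_carriers[0]
    ),)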

What is the cause of this large discontinuity between subcarriers -1 and
+1? (Obviously, ignore subcarrier 0 itself, since it is unused.) I’ve
pasted the ‘Initial Taps’ below. The columns are: subcarrier index, real
part, imaginary part, and channel magnitude, respectively.

 sc    real    imag    |H|
-32     0       0      0
-31     0       0      0
-30     0       0      0
-29     0       0      0
-28     0       0      0
-27     0       0      0
-26    -0.241  -0.245  0.343
-25    -0.164  -0.203  0.261
-24    -0.08   -0.183  0.199
-23     0.006  -0.185  0.185
-22     0.087  -0.21   0.228
-21     0.157  -0.255  0.3
-20     0.212  -0.318  0.382
-19     0.247  -0.391  0.463
-18     0.262  -0.47   0.538
-17     0.254  -0.549  0.605
-16     0.225  -0.622  0.662
-15     0.178  -0.683  0.706
-14     0.117  -0.728  0.737
-13     0.046  -0.754  0.755
-12    -0.028  -0.758  0.758
-11    -0.099  -0.741  0.748
-10    -0.163  -0.706  0.724
 -9    -0.211  -0.653  0.686
 -8    -0.244  -0.589  0.638
 -7    -0.255  -0.519  0.578
 -6    -0.246  -0.448  0.511
 -5    -0.213  -0.381  0.437
 -4    -0.165  -0.327  0.366
 -3    -0.098  -0.289  0.306
 -2    -0.016  -0.265  0.266
 -1     0.079  -0.264  0.275
  0     0       0      0
  1     0.232  -0.506  0.557
  2     0.191  -0.587  0.617
  3     0.143  -0.654  0.67
  4     0.079  -0.7    0.704
  5     0.01   -0.727  0.727
  6    -0.065  -0.729  0.732
  7    -0.135  -0.712  0.724
  8    -0.199  -0.675  0.703
  9    -0.25   -0.62   0.669
 10    -0.282  -0.552  0.62
 11    -0.295  -0.478  0.561
 12    -0.285  -0.401  0.492
 13    -0.254  -0.329  0.416
 14    -0.204  -0.268  0.337
 15    -0.138  -0.221  0.26
 16    -0.06   -0.195  0.204
 17     0.022  -0.189  0.191
 18     0.105  -0.207  0.232
 19     0.181  -0.246  0.306
 20     0.245  -0.306  0.392
 21     0.29   -0.38   0.478
 22     0.316  -0.465  0.562
 23     0.318  -0.554  0.639
 24     0.296  -0.641  0.706
 25     0.253  -0.72   0.763
 26     0.189  -0.786  0.808
 27     0       0      0
 28     0       0      0
 29     0       0      0
 30     0       0      0
 31     0       0      0
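
For reference, this is essentially how I look at the step across DC,
together with the response I would expect from the configured PDP alone,
H[k] = sum_l a_l * exp(-j*2*pi*k*tau_l/N), which is a smooth function of k
with no jump at k = 0:

    import numpy as np
    import matplotlib.pyplot as plt

    # Initial taps at the subcarriers adjacent to DC (from the table above).
    h_m1 = 0.079 - 0.264j   # subcarrier -1
    h_p1 = 0.232 - 0.506j   # subcarrier +1
    ratio = h_p1 / h_m1
    print("step across DC: |ratio| = %.2f, angle = %.1f deg"
          % (abs(ratio), np.degrees(np.angle(ratio))))

    # Expected response of a static channel with the configured PDP.
    sc = np.arange(-32, 32)
    delays = np.array([0.0, 1.6, 2.8])   # samples
    mags = np.array([0.4, 0.4, 0.2])
    H = (mags * np.exp(-2j * np.pi * np.outer(sc, delays) / 64)).sum(axis=1)

    plt.plot(sc, np.unwrap(np.angle(H)))
    plt.xlabel("subcarrier index")
    plt.ylabel("expected phase (rad)")
    plt.show()

The expected curve passes smoothly through k = 0, which is why the measured
jump between subcarriers -1 and +1 surprises me.
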
Why is there such a large phase shift taking place around subcarrier 0? Any
pointers would be greatly appreciated.

This effect does not show up when a standard Frequency Xlating FIR Filter
is used instead of the frequency selective fading model block, nor does it
show up when the experiment is run over USRP hardware instead of the
channel model block. A sketch of the FIR control setup is below.
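
Concretely, the control just swaps the channel model for a static FIR;
something like the following, where the tap values are placeholders:

    from gnuradio import filter as gr_filter

    # Control experiment: a plain frequency-xlating FIR filter with no
    # frequency shift, i.e. a static, deterministic channel.
    samp_rate = 1e6                             # placeholder sample rate
    chan_taps = [0.4 + 0j, 0.4 + 0j, 0.2 + 0j]  # placeholder static taps
    fir = gr_filter.freq_xlating_fir_filter_ccc(
        1,          # decimation
        chan_taps,  # FIR taps
        0.0,        # center frequency offset (Hz)
        samp_rate,  # sample rate
    )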

Thanks in advance!

Best regards,
Aditya
