I am trying to integrate the output of a spectrum analyzer in real time.
I sent the output of the osmocom source to a log power fft block, which
is then sent to a meta file sink. In order for the program to calculate
the correct timestamp for each item, the meta file sink needs to know
the relative rate change from the sample rate of the starting block to
the item rate at the sink. After some attempts, I found that the value
2*pi/(1024*1024) gives a very accurate timestamp for each item. (Each
factor of 1024 comes from the fft size of the fft block and from the
vector length of the meta file sink.) I understand why the meta file
sink decimates the item
rate by 1024, since it counts 1024 points to be one item, a vector. What
I do not understand is why the fft block decimates the item rate by
2*pi/1024. I want to make sure that this indeed is the correct
decimation rate of the fft block.
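To show how I am using the relative rate, here is a minimal sketch of the timestamp arithmetic. The sample rate of 2e6 is just an illustrative assumption, and item_timestamps is a hypothetical helper, not a GNU Radio API:

```python
import math

def item_timestamps(start_time, samp_rate, relative_rate, n_items):
    """Timestamp of each sink item, given the relative rate change
    from the first block (running at samp_rate) to the sink."""
    # One sink item corresponds to 1/relative_rate input samples,
    # i.e. a time span of 1/(samp_rate * relative_rate) seconds.
    dt = 1.0 / (samp_rate * relative_rate)
    return [start_time + i * dt for i in range(n_items)]

fft_size = 1024
# The empirically found relative rate from the flowgraph above.
rel_rate = 2 * math.pi / (fft_size * fft_size)
samp_rate = 2e6  # hypothetical osmocom source sample rate
ts = item_timestamps(0.0, samp_rate, rel_rate, 3)
```

With these numbers each vector at the sink is stamped about 0.083 s after the previous one; changing samp_rate or rel_rate scales the spacing accordingly.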
I looked for answers in the dspguide.com website and found out how the
fft program works. The FFT program computes the complex DFT, which
takes two N-point signals and transforms them into two N-point
signals: the first N/2 points in each output signal correspond to the
positive frequencies of the time-domain signal, and the last N/2
points are the negative frequencies, which are usually ignored. But I am
still not clear on why 2*pi/1024 would be the decimation rate of the
fft block. I know that when the time-domain signal is analyzed, the
DFT equations calculate how much of each basis sinusoid is contained
in the signal. The basis functions are cos(2*pi*k*n/N) and
sin(2*pi*k*n/N), where k is the wave number, n is the index from 0 to
N-1, and N is the fft size. But this doesn't help me figure out why
the fft block decimates the item rate by 2*pi/1024, though I thought
it might give me a clue.
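To make the positive/negative frequency split concrete, here is a small NumPy sketch. The wave number k = 5 is just an illustrative choice; the point is that a single basis cosine shows up at bins k and N-k of the FFT output:

```python
import numpy as np

N = 1024                 # fft size, as in the flowgraph
n = np.arange(N)
k = 5                    # illustrative wave number
x = np.cos(2 * np.pi * k * n / N)  # one basis cosine as the input

X = np.fft.fft(x)
# A real cosine at wave number k puts energy at bin k (positive
# frequency) and at bin N-k (its negative-frequency mirror).
peaks = np.argsort(np.abs(X))[-2:]
```

Running this, the two largest bins are k and N-k, each with magnitude N/2, which is the symmetry described above.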
I want to make sure that my relative rate value is correct so that I
can begin doing real-time spectrum analysis. I am truly thankful for
any help.