First of all, sorry for the last post - I was too quick with the enter key.
I am currently working on my final thesis and I really need some help in
understanding the correct calculations that are necessary to get usable
values out of the FFT data in usrp_spectrum_sense.py.
What I need is a measure that is somewhat proportional to the signal
power of the spectrum in dBm - it does not matter if it is really in dBm,
it is only important that the values are proportional to the true power.
Currently I am calculating the signal power as in the following code.
In the flow graph I use the following parts:
s2v = gr.stream_to_vector(gr.sizeof_gr_complex, self.fft_size)
mywindow = window.blackmanharris(self.fft_size)
fft = gr.fft_vcc(self.fft_size, True, mywindow)
c2mag = gr.complex_to_mag_squared(self.fft_size)
stats = gr.bin_statistics_f(self.fft_size, self.msgq,
                            self._tune_callback, tune_delay, dwell_delay)
self.connect(self.u, s2v, fft, c2mag, stats)
In the main loop I am performing the following calculations:
for bin in m.data:
    signalPower += bin
signalPower = 10*math.log10(signalPower) - 10*math.log10(tb.fft_size) \
              - 20*math.log10(tb.power) - tb.gain
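To make the calculation above easier to follow, here is a self-contained sketch of the same steps. The concrete numbers (fft_size, gain, window power, and the placeholder bin values standing in for m.data) are made up for illustration only; m.data is assumed to hold the |FFT|^2 bin values produced by complex_to_mag_squared.

```python
import math

# Hypothetical stand-ins for the values taken from the flow graph / options:
fft_size = 1024                 # tb.fft_size
gain_db = 20.0                  # tb.gain: RF gain set in the options, in dB
window_power = 367.0            # tb.power: placeholder window power value
bins = [1e-6] * fft_size        # placeholder stand-in for m.data (|FFT|^2 values)

# Sum the squared-magnitude FFT bins to get total power for this sweep.
signal_power = 0.0
for b in bins:
    signal_power += b

# Convert to dB and subtract the FFT size, window power, and RF gain terms,
# exactly as in the snippet above.
signal_power_db = (10 * math.log10(signal_power)
                   - 10 * math.log10(fft_size)
                   - 20 * math.log10(window_power)
                   - gain_db)
```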
Here tb.gain is the gain in dB that is set in the options, and tb.power
is the power of the FFT window.
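For reference, here is a sketch of what I mean by "power of the FFT window": the sum of the squared window taps. The blackmanharris helper below is my own standalone reimplementation of the 4-term Blackman-Harris shape (assuming NumPy), not the actual window.blackmanharris from GNU Radio, so treat the coefficients and indexing as an approximation.

```python
import numpy as np

def blackmanharris(n):
    # Standard 4-term Blackman-Harris window (approximate reimplementation).
    a = [0.35875, 0.48829, 0.14128, 0.01168]
    k = np.arange(n)
    return (a[0]
            - a[1] * np.cos(2 * np.pi * k / (n - 1))
            + a[2] * np.cos(4 * np.pi * k / (n - 1))
            - a[3] * np.cos(6 * np.pi * k / (n - 1)))

w = blackmanharris(1024)
window_power = np.sum(w ** 2)   # incoherent (power) gain of the window
coherent_gain = np.sum(w)       # coherent (amplitude) gain, for comparison
```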
Is it correct to just subtract 20 times the log of the window power to
remove the window processing gain from the data?
Is subtracting the gain correct to get somewhat proportional values?
Thank you very much in advance!