First of all, sorry for the last post; I was too quick with the Enter button.
I am currently working on my final thesis and I really need some help in understanding the correct calculations that are necessary to get usable data out of the FFT values in usrp_spectrum_sense.py.
What I need is a measure that is proportional to the signal power of the spectrum in dBm. It does not matter whether the values are really in dBm; it is only important that they are proportional to it.
Currently I am calculating the signal power along the lines of the sketch below, where tb.gain is the gain in dB that is set in the options and tb.power is the power of the FFT window.
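A minimal sketch of that per-bin calculation (the names bin_to_db and tb.fft_size are only illustrative, and m.data is assumed to hold the magnitude-squared FFT bins from the messages):

import math

def bin_to_db(bin_val, tb):
    # bin_val: one magnitude-squared FFT bin out of m.data
    bin_val = max(bin_val, 1e-20)          # avoid log10(0)
    db = 10 * math.log10(bin_val)          # bin power in dB
    db -= 10 * math.log10(tb.fft_size)     # normalize to the FFT length
    db -= 20 * math.log10(tb.power)        # remove the window processing gain (?)
    db -= tb.gain                          # remove the configured USRP gain in dB
    return db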
Is it correct to just subtract 20 times the log of the window power to remove the window processing gain from the data? Is subtracting the gain correct to get somewhat proportional values?
Thank you very much in advance!
TMob
Even though I am not clear about what you are doing (I am new to this), I know from FFT theory that the computed FFT should be normalized by 1/sqrt(N) to preserve the energy (where N is the FFT size you take). That means 10*log(N) should be subtracted from the computed FFT power to get the same power as the one computed in the time domain.
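A quick numerical check of that normalization (a small NumPy sketch, not from the original code):

import numpy as np

N = 1024
x = np.random.randn(N) + 1j * np.random.randn(N)
X = np.fft.fft(x)

time_power = np.sum(np.abs(x) ** 2)
freq_power = np.sum(np.abs(X) ** 2) / N   # scaling the bins by 1/sqrt(N)

# Parseval: both print the same value, so -10*log10(N) aligns the
# FFT-domain power with the time-domain power.
print(10 * np.log10(time_power), 10 * np.log10(freq_power))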
Thanks for your reply, Bruhtesfa. By subtracting the log of the FFT size, I think I am already normalizing the sum to the length of the FFT; at least that is why I am subtracting it.
In the meantime, I ran into another issue while playing around with a signal generator and my spectrum sensing code. When I turn on the signal generator on a specific channel with a bandwidth of 20 MHz, I would expect the values at the center frequency to increase, and also those of the neighbouring 802.11 channels. So far so good, but when I do this, I also see an increase in the values of all other channels that should not be interfered with by the signal.
I played around with the parameters of the USRP a little bit and found that this behaviour depends on the gain of the USRP. When I reduced the gain to, let's say, 10 dB, the signal from the signal generator would only appear in the data of the center channel and its neighbours, as it should. Of course, setting a fixed but reduced gain would solve my problem for this specific configuration, but I need something that works in varying environments.
Now, I looked around in the mailing list and in the source code of usrp_spectrum_sense.py and didn't find any sort of automatic gain control that would adjust the gain of the USRP to the received signal strength. I am wondering if there is a way to implement something like this to get more valid results than the ones I am getting at the moment. Does anybody have an idea?
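What I have in mind is roughly the following (only a rough, untested sketch; it assumes the my_top_block from usrp_spectrum_sense.py with its set_gain() method, that tb.subdev.gain_range() returns (min, max, step) as it is used in the script, and that m.data holds the magnitude-squared bins):

CLIP_LEVEL   = 1e6    # "close to overload" bin level, would need calibration
GAIN_STEP_DB = 5

def adjust_gain(tb, gain, m):
    g_min, g_max = tb.subdev.gain_range()[0:2]
    strongest = max(m.data)
    if strongest > CLIP_LEVEL and gain - GAIN_STEP_DB >= g_min:
        gain -= GAIN_STEP_DB        # front end overdriven: back off
    elif strongest < CLIP_LEVEL / 100.0 and gain + GAIN_STEP_DB <= g_max:
        gain += GAIN_STEP_DB        # lots of headroom: raise the gain
    tb.set_gain(gain)
    return gain                     # remember the currently active gain

The dB values would then of course only stay comparable across gain changes if the currently active gain is subtracted, as in the calculation above.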
Any help is very much appreciated!
Thanks,
Tom