Forum: GNU Radio usrp_spectrum_sense vs. usrp_fft

TMob (Guest)
on 2009-01-28 17:35
(Received via mailing list)
Hi,

I am currently trying to use the USRP to sense the 802.11 channels for
activity. So far, I am using usrp_spectrum_sense to do this. Each time I
get the callback from gr.bin_statistics_f, I calculate the signal power in
the returned data vector using the following formula:

        for bin in m.data:
            signalPower += (20*math.log10(math.sqrt(bin)/tb.fft_size)
                            - 20*math.log10(tb.fft_size)
                            - 10*math.log(tb.power/tb.fft_size/tb.fft_size))
        signalPower /= tb.fft_size

According to previous posts, this should give me the signal power at the
given center frequency in dBm. Unfortunately, it turns out that the values
I get from this code vary a great deal, e.g. with the FFT size and the
gain. When I leave the gain and FFT size at their defaults I get values
from -28 to +5 (dBm), which definitely does not correspond to dBm. Is
there a mistake in the formula? Is it really dBm that I am getting?
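
For reference, here is the same calculation written out as a small
self-contained function, just a tidied sketch of what I am doing
(avg_power_db is only an illustrative name; it assumes m.data holds the
mag-squared FFT bins from gr.bin_statistics_f and that tb.power is the sum
of the squared window taps, as set up in usrp_spectrum_sense.py):

    import math

    def avg_power_db(m_data, fft_size, window_power):
        # m_data       : mag-squared FFT bins (m.data)
        # fft_size     : FFT length (tb.fft_size)
        # window_power : sum of the squared window taps (tb.power)
        signal_power = 0.0
        for bin in m_data:
            # first term: bin magnitude relative to the FFT length, in dB;
            # note the last term uses math.log (natural log) while the
            # others use math.log10 -- copied exactly from the loop above
            signal_power += (20*math.log10(math.sqrt(bin)/fft_size)
                             - 20*math.log10(fft_size)
                             - 10*math.log(window_power/fft_size/fft_size))
        return signal_power / fft_size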

Because the usrp_fft.py example shows more realistic values (around -50 to
-60 dBm) than usrp_spectrum_sense.py, I was wondering if somebody could
explain how usrp_fft gets to these values. All I can see in the source
code there is that a USRP source is defined and connected to the scope.
But where is the conversion into dBm done? Can this be applied to
usrp_spectrum_sense somehow?
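
Just to make clear what I would expect: my guess is that the display does
something like the following with the complex FFT output (this is not
taken from the usrp_fft.py source, just what a magnitude-to-dB conversion
usually looks like; fft_bins_to_db is a made-up name):

    import math

    def fft_bins_to_db(fft_bins, fft_size):
        # fft_bins : complex FFT output vector
        # fft_size : FFT length, used to normalise the bin magnitudes;
        # the small constant avoids log10(0) for empty bins
        return [20*math.log10(abs(x)/fft_size + 1e-20) for x in fft_bins]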

Thanks,

TMob
Mohd adib Sarijari (adib_sairi)
on 2010-06-04 00:01
(Received via mailing list)
TMob wrote:
> 20*math.log10(math.sqrt(bin)/tb.fft_size)-20*math.log10(tb.fft_size)-10*math.log(tb.power/tb.fft_size/tb.fft_size)
>
> Because the usrp_fft.py example shows more realistic values (around -50 to
> -60 dBm) than usrp_spectrum_sense.py, I was wondering if somebody could
> explain how usrp_fft gets to these values. All I can see in the source
> code there is that a USRP source is defined and connected to the scope.
> But where is the conversion into dBm done? Can this be applied to
> usrp_spectrum_sense somehow?
>
> Thanks,
>
> TMob

Did anyone find the answer to this question? Is it because usrp_fft.py has
the windowing block? Regarding the windowing block, does anybody know why
Blackman-Harris is chosen?
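
As far as I understand it, the windowing just multiplies the incoming
samples by the window taps before the FFT, roughly like this sketch (the
4-term Blackman-Harris formula written out by hand, not the actual GNU
Radio window class; blackman_harris and windowed_fft are just names I made
up):

    import numpy

    def blackman_harris(n):
        # 4-term Blackman-Harris window coefficients
        a0, a1, a2, a3 = 0.35875, 0.48829, 0.14128, 0.01168
        k = numpy.arange(n)
        return (a0
                - a1*numpy.cos(2*numpy.pi*k/(n-1))
                + a2*numpy.cos(4*numpy.pi*k/(n-1))
                - a3*numpy.cos(6*numpy.pi*k/(n-1)))

    def windowed_fft(samples, fft_size):
        # multiply the time-domain samples by the window taps, then FFT
        win = blackman_harris(fft_size)
        return numpy.fft.fft(samples[:fft_size] * win)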

Adib


-----
Mohd Adib Sarijari
Universiti Teknologi Malaysia
www.fke.utm.my
www.utm.my
Mohd adib Sarijari (adib_sairi)
on 2010-06-04 07:43
(Received via mailing list)
-----
Mohd Adib Sarijari
Universiti Teknologi Malaysia
www.fke.utm.my
www.utm.my