I am currently trying to use the USRP to sense the 802.11 channels for
activity. So far, I am using usrp_spectrum_sense to do this. Each time I
get the callback from gr.bin_statistics_f, I calculate the signal power
of the returned data vector using the following formula:
for bin in m.data: signalPower += bin
signalPower /= tb.fft_size
According to previous posts, this should give me the signal power at the
given center frequency in dBm.
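To make the calculation concrete, here is a self-contained sketch of what I am doing (the function and variable names are mine, not from usrp_spectrum_sense.py; I have added a 10*log10 conversion at the end, and whether the result can be read as absolute dBm is exactly what I am unsure about):

```python
import math

def avg_power_db(bins, fft_size):
    """Average the per-bin values from gr.bin_statistics_f and convert to dB.

    `bins` is assumed to hold squared-magnitude FFT values; without
    calibration against a known reference, the result is a relative dB
    figure, not necessarily absolute dBm.
    """
    signal_power = sum(bins) / fft_size        # linear average over the bins
    return 10.0 * math.log10(signal_power)     # convert to dB

# Example with made-up bin values:
# sum = 8.0, averaged over 4 bins -> 2.0, 10*log10(2.0) ~ 3.01 dB
print(avg_power_db([4.0, 1.0, 2.0, 1.0], 4))
```
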
Unfortunately, it turns out that the values I get using this code vary
considerably, e.g. with the FFT size and the gain. When I leave the gain
and FFT size at their defaults, I get values from -28 through +5 (dBm),
which does not look like dBm. Is there a mistake in the formula? Is this
really dBm that I get?
Because the usrp_fft.py example shows more realistic values (around -50 to
-60 dBm) than usrp_spectrum_sense.py, I was wondering if somebody could
explain how usrp_fft arrives at these values. All I can see in the source
there is that a USRP source is defined and connected to the scope. But
where is the conversion into dBm done? Can this be applied to
usrp_spectrum_sense.py as well?
Sent from the GnuRadio mailing list archive at Nabble.com.