Test_usrp_inband_tx crash

Hi,

While using the inband code, I tried to test <test_usrp_inband_tx> in the
apps-inband folder, and I noticed that the test crashes after 20 packets.
I am currently trying to debug this problem. I traced the crash to
<fusb_linux.cc> in the legacy folder.

  1. I noticed that rf-freq in <test_usrp_inband_tx> is set to 10 MHz. Is
     this the RF frequency used on the hardware? We are using a 400 MHz
     daughterboard. What happens if the frequencies don’t match?

  2. Do you observe the same crash? How can we fix this?

BTW, I understand that the inband code is in development, but I am still
interested in experimenting with it.

Hardware: USRP with 400 MHz daughterboards, Intel P4
Software: GNU Radio from
<root/gnuradio/branches/developers/gnychis/inband>,
Ubuntu Gutsy Gibbon

Thanks
Sanmi

Hi Sanmi,

  1. We are not supporting the code until release, seriously :) There is
     no daughterboard support yet.

  2. You haven’t really given us enough information about the crash;
     however, we have found and patched a bug in fusb_linux.cc as of version
– George

hi all,
just wanted to say hello & offer a quick introduction…
i’m a sound / media artist, currently enrolled in the digital + media
mfa program
at the rhode island school of design; for my thesis work, i’m
developing a system
with which i will sonify / visualize data (primarily density levels)
from cellular networks (i.e. gsm)
to create realtime performances and audiovisual installations using
the usrp + gnu radio.

i’m running gnu radio (3.0.4) on fc7 w/ the planet ccrma kernel,
and have been successful receiving data using the usrp with AM / FM /
WFM demodulation,
and sending it out via alsa to another machine running supercollider
3 for analysis / resynthesis.
ultimately, i am planning on sending the data to supercollider via jack,
then onto max/msp/jitter on the other machine via open sound control
for 3d visualization using its implementation of openGL.

the audio stream generated from usrp_rx_nogui.py using AM demodulation
is fairly reliable in terms of spectral analysis: the cell traffic is
generally pretty discernible,
and while the noise floor is decently high, it could be compensated
for somehow.
but it seems like it might make more sense to be using GMSK
demodulation…
(FM / WFM have lower noise floors, but seem to require too tight of a
radius to be useful)
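(for what it’s worth, one hedged way the noise floor “could be compensated for” is to track a per-block power envelope and subtract its median as an estimated floor. this is just a plain-python sketch of the idea, not anything from gnu radio — `power_envelope`, `subtract_floor`, and the block size are all made-up names for illustration:)

```python
import math

# hypothetical sketch: estimate and subtract a noise floor from the
# demodulated audio, assuming 'samples' is a list of floats in [-1, 1].
# block_size and all names here are illustrative, not a gnu radio api.

def power_envelope(samples, block_size=256):
    """mean-square power per block, in dB."""
    out = []
    for i in range(0, len(samples) - block_size + 1, block_size):
        block = samples[i:i + block_size]
        p = sum(x * x for x in block) / block_size
        out.append(10 * math.log10(p + 1e-12))  # epsilon avoids log10(0)
    return out

def subtract_floor(env_db):
    """treat the median block power as the noise floor and subtract it,
    clamping at zero so quiet blocks read as 'no activity'."""
    floor = sorted(env_db)[len(env_db) // 2]
    return [max(0.0, p - floor) for p in env_db]
```

(the median is a crude floor estimate — it only works if the channel is idle more than half the time; a percentile or a slow running minimum would be more robust.)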

however, i’m hitting a brick wall when it comes to trying to rectify
the differences
between the demodulation methods in usrp_rx_nogui.py and
benchmark_rx.py / rx_voice.py…
i also noticed that, in gmsk.py, the comments for the GMSK
demodulator block
say that the output is a stream of bits packed in the LSB of each
byte it streams out.

but of course, a stream of bits in the LSBs of a bunch of bytes does
not a great audio signal make…
and while i’d originally thought i’d be streaming out data files,
then using that data to drive a synthesis engine written in
supercollider,
it seems to make more sense to keep it all in an audio stream, as it
is conveniently packaged for my needs,
and the latency is much lower than would be the case otherwise.
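(if the gmsk path does end up being the one, one guess at making its 1-bit-per-byte output usable for synthesis is to smooth the bit density over a window into a low-rate control signal — random noise should hover near 0.5, real traffic structure should pull it away. this is a made-up sketch, `bit_density` and the window size are not from gnu radio:)

```python
# hypothetical sketch: turn a gmsk-demod-style byte stream (one bit in
# the LSB of each byte) into a smoothed "activity" control signal.
# window size is illustrative; the real stream would come from gnu radio.

def bit_density(byte_stream, window=100):
    """fraction of 1 bits (LSB of each byte) per non-overlapping window."""
    bits = [b & 1 for b in byte_stream]
    return [sum(bits[i:i + window]) / window
            for i in range(0, len(bits) - window + 1, window)]
```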

and since i’m not trying to transmit or receive specific data, just
trying to analyze the level of activity
in specific ranges of the spectrum in a given location, maybe it
doesn’t make sense to bother with GMSK.
at any rate, any insight or advice you might have regarding optimal
strategies for demodulation
in regards to this project would be greatly appreciated.

thanks in advance,
–mark


m. cera | c3ra.com