Hi everyone,
I want to test something with a USRP running continuously in a normal
environment. Can someone tell me, from real-world experience, how long a
USRP is capable of running continuously?
Thanks.
On 09/09/2010 11:42 AM, Anil S. wrote:
Three years ago, I used to run my USRP1-based radio astronomy software
for months at a time with no issue.

In the last year or so, I’ve been unable to maintain those kinds of
uptimes with either USRP1 or USRP2, which I ascribe to fundamental
changes in GNU Radio, but I haven’t been able to put my finger on
exactly which flow-graphs will “wedge” after a few days and which ones
will keep going and going. I have one flow-graph that uses the USRP2 at
low bandwidths (400 ksps or so) that seems to be able to run forever.
Another, similar flow-graph will tend to “wedge” after a few days.

I also have a flow-graph that uses an ALSA source and sink, and it can
only run for a few days before “wedging”.

I think the USRP hardware and drivers are perfectly capable of running
forever, but the current GNU Radio seems to have issues with
long-running applications that I haven’t been able to pin-point.
We have seen the same behaviour when trying to run overnight on the
USRP1: the flowgraph was just doing FM mod (LFRX in, LFTX out) and it
had a graphical sink. Have the flowgraphs you used been made in GRC
(i.e., could this be a wx issue)?

Kieran
I’ve run my beacon satellite receiver and radar data acquisition
programs for several days without problems. I have never needed to
operate them longer, so I don’t know what the limit is. The beacon
satellite receiver is written completely in C++. The data acquisition
program also has some Python in it.

My programs don’t have a GUI and typically they don’t have very many
GNU Radio blocks. Most of the signal processing is done in external
libraries or programs.

Sometimes I do see a USRP1 driver error when I unplug a USB hard drive
while the USRP is running. This isn’t really that problematic. I just
avoid doing this when I don’t want the system to fail.

juha
On 09/09/2010 06:19 PM, Kieran B. wrote:
We have seen the same behaviour when trying to run overnight on the
USRP1: the flowgraph was just doing FM mod (LFRX in, LFTX out) and it
had a graphical sink. Have the flowgraphs you used been made in GRC
(i.e., could this be a wx issue)?

Kieran
I have a mixture of flow-graphs that use GUI bits and ones that don’t;
it doesn’t seem to make any difference. In fact, my longest-running one
uses an FFT and a Stripchart/scope sink.
On Fri, Sep 10, 2010 at 10:19:05AM +1200, Kieran B. wrote:
We have seen the same behaviour when trying to run overnight on the
USRP1: the flowgraph was just doing FM mod (LFRX in, LFTX out) and it
had a graphical sink. Have the flowgraphs you used been made in GRC
(i.e., could this be a wx issue)?
I’ve recently seen crashes (assert failures) directly related to wx,
gtk and opengl. From looking at the stack traces on those, it didn’t
appear to be our problem. These were not running under GRC.
Eric
On 09/09/2010 10:32 PM, Eric B. wrote:
thread apply all bt
to generate the stack traces and send them to me.
Eric
Will do. Last time I did that, the rather large mass of information on
each of the many, many threads was quite daunting for me to analyse
myself.
–
Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium
Thanks for the information, everyone.
On Thu, Sep 09, 2010 at 01:32:01PM -0400, Marcus D. Leech wrote:
keep going and going. I have one flow-graph that uses the USRP2
at low bandwidths (400 ksps or so) that seems to be able to run
forever. Another, similar flow-graph will tend to “wedge” after a
few days.

I also have a flow-graph that uses an ALSA source and sink, and it
can only run for a few days before “wedging”.
Marcus,

It would be useful if you could provide a gdb stack trace of all
threads when you see the “wedged” condition.

If it’s a Python program, run gdb against /usr/bin/python and use the
gdb attach command to attach to the wedged process. Then issue

thread apply all bt

to generate the stack traces and send them to me.

Eric
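As a complement to Eric’s gdb recipe (a sketch of my own, not from this thread; the function name `dump_all_stacks` and the choice of SIGUSR1 are illustrative), the Python side of a flow-graph can be made to report on itself: register a signal handler that prints every live thread’s Python stack, so `kill -USR1 <pid>` from another terminal dumps the state of a long-running process without stopping it. Note the caveat: this only works if the interpreter can still execute bytecode; a process truly wedged inside a C extension needs the external gdb attach.

```python
import signal
import sys
import threading
import traceback


def dump_all_stacks(signum=None, frame=None, out=None):
    """Print a Python-level backtrace of every live thread."""
    out = out or sys.stderr
    # Map thread idents to human-readable names for the report.
    names = {t.ident: t.name for t in threading.enumerate()}
    for ident, frm in sys._current_frames().items():
        print("Thread 0x%x (%s):" % (ident, names.get(ident, "?")), file=out)
        traceback.print_stack(frm, file=out)


# Arrange for `kill -USR1 <pid>` to trigger a dump from outside the process.
signal.signal(signal.SIGUSR1, dump_all_stacks)
```

With this installed near the top of a flow-graph’s Python main, you can probe the process periodically while it runs for days, and compare the stacks from a healthy run against the ones captured after it wedges.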