Hi all,
I use a USRP2 to generate an RF signal around 60 MHz with a script that is a slight modification of usrp2_siggen.py. I first create the signal on the host and write it into a file sink; when the calculation is done, I use the same file as a signal source and send it to the USRP2. At this point I do not understand why this process uses between 80% and 100% of one CPU.
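To make the setup concrete, here is a stripped-down sketch of what the script does (not the actual code; the interface name, MAC address, file name and tone offset below are only placeholders):

#!/usr/bin/env python
# Rough sketch: stage 1 precomputes the waveform and writes it to disk,
# stage 2 replays the file through the USRP2.
from gnuradio import gr, usrp2

INTERFACE = "eth0"        # placeholder: onboard Gigabit Ethernet
MAC_ADDR  = ""            # placeholder: empty string = first USRP2 found
FILENAME  = "signal.dat"  # placeholder intermediate file
INTERP    = 4             # interpolation as in usrp2_siggen.py

class precompute(gr.top_block):
    """Generate the waveform on the host and dump it to a file."""
    def __init__(self):
        gr.top_block.__init__(self)
        # Baseband tone; it ends up at center_freq + 100 kHz after the DUC.
        src  = gr.sig_source_c(100e6/INTERP, gr.GR_SIN_WAVE, 100e3, 0.5)
        head = gr.head(gr.sizeof_gr_complex, int(10e6))  # finite chunk
        sink = gr.file_sink(gr.sizeof_gr_complex, FILENAME)
        self.connect(src, head, sink)

class replay(gr.top_block):
    """Stream the precomputed file to the USRP2."""
    def __init__(self):
        gr.top_block.__init__(self)
        src = gr.file_source(gr.sizeof_gr_complex, FILENAME, True)  # repeat
        u   = usrp2.sink_32fc(INTERFACE, MAC_ADDR)
        u.set_interp(INTERP)
        u.set_center_freq(60e6)
        self.connect(src, u)

if __name__ == '__main__':
    precompute().run()   # finishes once head has passed its items
    replay().run()       # transmits until interrupted

The load I am talking about appears during the second stage, when nothing should be left to compute.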
Whenever the CPU load gets too close to 100%, I observe that the otherwise nice peak at, say, 60 MHz becomes very broad and noisy, which renders it useless for my application. The load reliably gets close to 100% when the CPU is switched, but also on its own. The change in signal quality is discrete: it is either very nice or very bad. The computer is fairly new (HP Compaq de7800p, Intel Core Duo at 3.16 GHz, 2 GB RAM) running Ubuntu 8.10. I upgraded today to the latest GNU Radio version from the trunk and recompiled the firmware. The USRP2 is hooked up to the onboard Gigabit Ethernet.
When I do not enable realtime scheduling in the script, the CPU load seems to increase even more and the signal is never nice. The sampling rate is
100e6/4 = 25 MS/s, where 100e6 is the master clock rate and 4 is the interpolation set in siggen.py. Using an interpolation of 8 seems to alleviate the problem, but I need the bandwidth. I installed the realtime kernel, which did not change a thing.
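For completeness, this is roughly how the realtime scheduling and interpolation are set (again with placeholder interface/MAC; the rates in the comments assume the 100 MHz master clock):

from gnuradio import gr, usrp2

u = usrp2.sink_32fc("eth0", "")  # placeholders, as in the sketch above

r = gr.enable_realtime_scheduling()
if r != gr.RT_OK:
    print("Warning: failed to enable realtime scheduling")

u.set_interp(4)    # 100e6/4 = 25 MS/s: one core near 100%, peak broad and noisy
# u.set_interp(8)  # 100e6/8 = 12.5 MS/s: load drops and the peak is clean,
#                  # but that is not enough bandwidth for me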
I would very much appreciate any hints on how to understand this or on what I am missing. I do not understand why the CPU load is so high when the calculation is already done and only the file has to be read.

Cheers,
JJM