To expand on this further: the number of MFLOPS required is proportional to the sample rate × the inherent flowgraph complexity. If, on average, there's a deficit of available MFLOPS versus required MFLOPS, you'll get overruns. Buffering within the flowgraph and the driver stack allows you to ride out short-term shortfalls, but if on average you don't have enough MFLOPS, you'll fall behind and get overruns. Think of it as a kind of "physics of computing" thing.
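To make that concrete, here's a back-of-the-envelope budget check in Python. The per-sample operation count and the FLOPs-per-cycle figure are made-up placeholders, just to show the arithmetic:

    # Illustrative numbers only, not measurements.
    samp_rate = 16.67e6            # complex samples per second
    ops_per_sample = 400           # assumed flowgraph complexity, FLOPs/sample
    required_mflops = samp_rate * ops_per_sample / 1e6   # ~6670 MFLOPS

    cores = 6
    clock_hz = 3.2e9
    flops_per_cycle = 4            # optimistic SIMD assumption
    available_mflops = cores * clock_hz * flops_per_cycle / 1e6  # 76800 MFLOPS

    print("required %.0f MFLOPS, available %.0f MFLOPS" %
          (required_mflops, available_mflops))
    # If required > available on average, buffering only postpones the
    # overruns; it cannot prevent them.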
The more “stuff” you do
in a flowgraph, the higher the inherent-flowgraph-complexity. A digital
receiver DSP chain is quite complex, and requires a lot of operations
to be performed on every sample. The architecture of Gnu Radio
exacerbates this somewhat, because the data-flow model tends to produce
redundant data motion that would not occur in a signal-processing chain
that was “hand coded” and “hand optimized”. It’s the price you pay for
the flexibility of the “plug a bunch of modules together” architecture.
That being said, I have flow-graphs that run at 16.67Msps, doing
“stuff” and they keep up fairly well–but on a 6-core 3.2GHz machine
with 4G of fast memory.
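For reference, the kind of "plug a bunch of modules together" graph I mean looks roughly like this in Python (using the 3.7-style namespaces; the block choices and parameters are illustrative, not my actual 16.67 Msps flowgraph, and a signal source stands in for the USRP so it runs anywhere):

    from gnuradio import gr, analog, blocks, filter
    from gnuradio.filter import firdes

    class rx_chain(gr.top_block):
        def __init__(self, samp_rate=16.67e6):
            gr.top_block.__init__(self)
            # Stand-in for a USRP source, so the sketch runs anywhere.
            src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE,
                                      100e3, 1.0)
            # Channel filter plus decimation: every tap costs FLOPs on
            # every input sample, at the full input rate.
            taps = firdes.low_pass(1.0, samp_rate, 200e3, 50e3)
            chan = filter.fir_filter_ccf(10, taps)
            mag = blocks.complex_to_mag_squared()
            sink = blocks.null_sink(gr.sizeof_float)
            # Every connection below is a buffer hop, i.e. data motion a
            # hand-coded chain would not pay for.
            self.connect(src, chan, mag, sink)

    if __name__ == '__main__':
        tb = rx_chain()
        tb.start()
        import time
        time.sleep(2)     # let it run briefly
        tb.stop()
        tb.wait()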
On 25 Mar 2013 10:36, Tom R. wrote:
John,
The amount of computation a software radio application must perform is directly proportional to the sampling rate used: the higher the sampling rate, the more computational power is required. As you increase the computational load on your application, each block takes longer to process data, and at some point your application takes too much time, which means the data flowing from the USRP to the host is arriving faster than it can be processed, so it gets dropped.
A digital receiver is a complex set of algorithms used to properly synchronize to the incoming signal in time, frequency, and phase (not to mention correcting for any multipath distortion). This is much more complex than just drawing the signal in a GUI.
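Just to give a sense of how many per-sample stages that implies, a skeleton of such a chain might look like the following; every parameter here is a placeholder, and a null source stands in for real samples:

    import math
    from gnuradio import gr, digital, blocks
    from gnuradio.filter import firdes

    sps = 4                        # samples per symbol (placeholder)
    nfilts = 32
    rrc_taps = firdes.root_raised_cosine(
        nfilts, nfilts * sps, 1.0, 0.35, 11 * sps * nfilts)

    # Timing recovery: polyphase filterbank, math on every sample.
    timing = digital.pfb_clock_sync_ccf(sps, 2 * math.pi / 100.0,
                                        rrc_taps, nfilts)
    # Multipath correction: adaptive CMA equalizer.
    equalize = digital.cma_equalizer_cc(15, 1, 0.01, 1)
    # Carrier frequency/phase lock: Costas loop (order 4, e.g. QPSK).
    carrier = digital.costas_loop_cc(2 * math.pi / 100.0, 4)

    tb = gr.top_block()
    src = blocks.null_source(gr.sizeof_gr_complex)   # stand-in samples
    head = blocks.head(gr.sizeof_gr_complex, 1000000)
    snk = blocks.null_sink(gr.sizeof_gr_complex)
    tb.connect(src, head, timing, equalize, carrier, snk)
    tb.run()

Each of those blocks does real math on every sample that passes through, which is why the chain as a whole needs so much CPU at high rates.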
So, in other words, you shouldn't expect to be able to handle those kinds of bandwidths with the digital receiver. Now, there's a lot more optimization that can be done to make those algorithms more computationally efficient in order to continue increasing the