Understanding Reported Overruns

I have an application running on a USRP1 @ 6.4M SPS. With the processing I am doing, I have an i7 2600k that can just BARELY keep up.

I have been having problems where my app “goes dark”, either with all-zero input or garbage. I have not been able to confirm which, but I suspect it is being zeroed, because I detect no errant frame syncs, which I would expect if I were receiving full randomness over a long enough period of time.
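
To illustrate that reasoning with a toy example (the 16-bit sync word here is made up and has nothing to do with my actual waveform): random bits should trip an exact-match sync detector by pure chance roughly once every 2^N bit offsets for an N-bit sync word, while an all-zero stream never does (unless the sync word is itself all zeros).

```python
import numpy as np

# Toy sketch only, not my demodulator: count exact matches of a made-up sync word.
SYNC = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1], dtype=np.int8)

def count_syncs(bits):
    """Count bit offsets where the stream matches the sync word exactly."""
    windows = np.lib.stride_tricks.sliding_window_view(bits, len(SYNC))
    return int(np.all(windows == SYNC, axis=1).sum())

n_bits = 1_000_000                       # a fraction of a second of bits at these rates
rng = np.random.default_rng(0)
random_bits = rng.integers(0, 2, n_bits, dtype=np.int8)
zero_bits = np.zeros(n_bits, dtype=np.int8)

print("errant syncs in random bits:", count_syncs(random_bits))  # ~ n_bits / 2**16, around 15
print("errant syncs in zero bits:  ", count_syncs(zero_bits))    # 0
```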

I finally tracked down what may be the root cause: the system is overheating and Ubuntu is throttling the CPU cores back. I have been able to correlate the throttling events in syslog with the observed gaps in my application fairly consistently, and will work to address that problem.
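
For anyone wanting to check the same thing, here is a minimal sketch of pulling the throttling timestamps out of syslog to line up against application gaps. The syslog path, the kernel's throttling message text, and the missing-year handling are assumptions and may differ on other systems:

```python
import re
from datetime import datetime

# Assumed kernel wording; on my machine the thermal messages mention "cpu clock throttled".
THROTTLE_RE = re.compile(r"cpu clock throttled", re.IGNORECASE)

def throttle_times(path="/var/log/syslog", year=datetime.now().year):
    """Return datetimes of thermal-throttling messages found in a syslog file."""
    times = []
    with open(path, errors="replace") as f:
        for line in f:
            if THROTTLE_RE.search(line):
                # Classic syslog lines begin "Mon DD HH:MM:SS host ..." with no year field.
                stamp = " ".join(line.split()[:3])
                times.append(datetime.strptime(stamp, "%b %d %H:%M:%S").replace(year=year))
    return times

if __name__ == "__main__":
    for t in throttle_times():
        print("throttle event:", t)   # compare these against the gap times the app logs
```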

Does anyone have a guess as to what may actually be happening? I would expect the output to be punctuated with overruns in this case, if I can’t keep up with the rate from the USRP, but this does not seem to be happening. Before my app settles into its normal workload, it does extra processing whenever it starts or reconfigures itself, which always results in overruns for a time, so I know that as a general rule the overrun output is not being suppressed.
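
For reference, this is the kind of per-call overrun indication I mean, sketched as a plain UHD receive loop. The device and stream arguments are placeholders, this is not my actual application, and a USRP1 app going through an older interface may report overruns differently; with UHD the driver also typically prints “O” on the console when it happens:

```python
import numpy as np
import uhd  # assumes a UHD build with the Python API

# Minimal receive loop showing where an overrun surfaces on each recv() call.
usrp = uhd.usrp.MultiUSRP()
usrp.set_rx_rate(6.4e6)

streamer = usrp.get_rx_stream(uhd.usrp.StreamArgs("fc32", "sc16"))
md = uhd.types.RXMetadata()
buf = np.empty((1, streamer.get_max_num_samps()), dtype=np.complex64)

cmd = uhd.types.StreamCMD(uhd.types.StreamMode.start_cont)
cmd.stream_now = True
streamer.issue_stream_cmd(cmd)

overruns = 0
for _ in range(10000):
    streamer.recv(buf, md)
    if md.error_code == uhd.types.RXMetadataErrorCode.overflow:
        overruns += 1          # the per-call overrun indication from the driver
    elif md.error_code != uhd.types.RXMetadataErrorCode.none:
        print("rx error:", md.error_code)

streamer.issue_stream_cmd(uhd.types.StreamCMD(uhd.types.StreamMode.stop_cont))
print("overruns seen:", overruns)
```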