Minimizing or eliminating overruns

Hello everyone,

Has anyone worked with, or is familiar with, the file fusb_darwin.cc? I am
currently running a USRP1 to receive two streams of 500,000 complex
samples/second, one from each of the two daughterboards (under the
dual_source_c configuration), and dumping the streams straight to a
file_sink. Thus, I am reading a total of 8 Mbytes/sec from the USB and
writing it to the file_sink. However, this is giving me a lot of overruns.
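
For reference, the flowgraph I am running is roughly equivalent to the
sketch below (tuning, gain and subdevice selection omitted). A decimation
of 128 gives 64e6/128 = 500 ksps per channel; I am assuming here that with
nchan=2 the two channels come out interleaved on one stream and are split
with gr.deinterleave, as in the stock two-channel examples; please correct
me if dual_source_c wires this up differently.

#!/usr/bin/env python
# Simplified two-channel receive sketch: USRP1 at 500 ksps per channel,
# each channel written straight to its own file_sink.  Ctrl-C to stop.
from gnuradio import gr, usrp

class dual_rx(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        # decim 128 -> 64e6 / 128 = 500 ksps per channel, 2 channels
        self.u = usrp.source_c(0, decim_rate=128, nchan=2)
        deint = gr.deinterleave(gr.sizeof_gr_complex)
        sink0 = gr.file_sink(gr.sizeof_gr_complex, "rx_chan0.dat")
        sink1 = gr.file_sink(gr.sizeof_gr_complex, "rx_chan1.dat")
        self.connect(self.u, deint)
        self.connect((deint, 0), sink0)
        self.connect((deint, 1), sink1)

if __name__ == '__main__':
    dual_rx().run()
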
The overrun indicator "uO" is printed in fusb_darwin.cc as follows:

if (l_buffer->enqueue (l->buffer (), l_size) == -1)
  {
    fputs ("uO\n", stderr);
    fflush (stderr);
  }

I believe this means that the USB transport is receiving data from the
USRP but is dropping it because l_buffer is not accepting it. The value of
l_size is always 32768.

I am currently using a 2.26 GHz Intel Core 2 Duo MacBook with 2 GB of 1067
MHz DDR3 RAM. I ran usrp_benchmark_usb.py, and the maximum throughput it
reports is 32 Mbytes/sec. Therefore, streaming from the USRP at 8 Mbytes/sec
should not in itself be a problem. However, as soon as I connect a
file_sink, I get a lot of overruns.
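
To rule out raw disk bandwidth as the bottleneck (as opposed to scheduling
or USB latency), I am planning to time plain sequential writes at the same
rate outside of GNU Radio, along these lines (file name and duration are
arbitrary):

#!/usr/bin/env python
# Rough sequential-write test (plain Python, no GNU Radio): can the disk
# sustain ~8 Mbytes/sec of writes for ~30 seconds?
import time

chunk = "\x00" * 32768                 # same size as one fusb_darwin transfer
target = 8 * 1024 * 1024 * 30          # ~30 seconds' worth at 8 Mbytes/sec
f = open("/tmp/write_test.dat", "wb")
written = 0
start = time.time()
while written < target:
    f.write(chunk)
    written += len(chunk)
f.flush()
elapsed = time.time() - start
mbytes = written / (1024.0 * 1024.0)
print "wrote %.0f Mbytes in %.1f s -> %.1f Mbytes/sec" % (
    mbytes, elapsed, mbytes / elapsed)
f.close()

If that comfortably exceeds 8 Mbytes/sec, the problem is presumably in how
the samples get from the USB buffers to the disk rather than in the disk
itself.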

Questions I would like to discuss:
a) Has anyone had experience writing at a rate of more than 8 Mbytes/sec
using the file_sink? If so, what would be the best way to optimize this?
b) The fusb_darwin.cc file indicates that every time there is an overrun I
lose 32768 bytes of data. However, I don't know how many data samples this
corresponds to after subtracting any overhead. Can anyone please help me
with this calculation? My own rough attempt is below.
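
For what it is worth, my own rough attempt at (b), assuming the USRP1 sends
plain 16-bit I and 16-bit Q per complex sample with no per-packet header
(please correct me if that format assumption is wrong):

32768 bytes / 4 bytes per complex sample = 8192 complex samples
8192 samples / 2 interleaved channels    = 4096 samples per channel
                                           (about 8.2 ms at 500 ksps)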

Thank you very much,
Neil.