Losing data somewhere between USRP and hard drive

Hello,

Q1: If I use a file_sink to record data, and my app never reports “Ou”
on the console, have all the bits been written to disk?

Q2: Does anyone have a suggestion about how I can avoid “Ou” messages
when using file_sink?

Q1 commentary: Sometimes the data I get from file_sink looks funky. It
could be a problem in my application, but I suspect the data is
corrupted and contains gaps even though no “Ou” was reported.

Q2 commentary: A ramdisk doesn’t work because I’m writing gigabytes of
data. I am thinking I could write a multithreaded file_sink that
operates more cleverly than the current implementation. I am sampling
4e6 complex short samples per second. I have also tried sending the
data over TCP/IP using a file_descriptor_sink attached to a Python
socket with setblocking(0). Even when the receiving server just throws
the data away (i.e., no disk access or processing), data is sometimes
lost.
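
One thing I want to rule out on the TCP path: with setblocking(0), a
Python socket’s send() can accept only part of the buffer, or raise
EAGAIN/EWOULDBLOCK, and any bytes the caller doesn’t retry are silently
lost even if the link itself has headroom. A rough sketch of the retry
loop I have in mind (untested; assumes a plain TCP stream socket):

    import errno, select, socket

    def send_all(sock, data):
        # Keep pushing until every byte is out; when the kernel send
        # buffer is full, wait for writability instead of dropping.
        total = 0
        while total < len(data):
            try:
                total += sock.send(data[total:])
            except socket.error as e:
                if e.args[0] not in (errno.EAGAIN, errno.EWOULDBLOCK):
                    raise
                select.select([], [sock], [])  # block until writable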

Thanks for your help and suggestions,

Chris

On Sun, Aug 12, 2007 at 06:10:21PM -0700, Chris S. wrote:

> Hello,
>
> Q1: If I use a file_sink to record data, and my app never reports “Ou”
> on the console, have all the bits been written to disk?
>
> Q2: Does anyone have a suggestion about how I can avoid “Ou” messages
> when using file_sink?

  • If using the ext3 file system, try remounting it as ext2 (see the
    example command after this list).
  • Buy a faster disk.
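
For the ext2 route: an existing ext3 partition can be mounted as ext2
directly (the on-disk formats are compatible), e.g. something like
“mount -t ext2 /dev/sdXN /mnt/capture”, where the device and mount
point are placeholders, and assuming the capture partition is not your
root filesystem.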

> Q1 commentary: Sometimes the data I get from file_sink looks funky. It
> could be a problem in my application, but I suspect the data is
> corrupted and contains gaps even though no “Ou” was reported.

No clue. usrp_rx_cfile.py is known to work ;-)

> Q2 commentary: A ramdisk doesn’t work because I’m writing gigabytes of
> data. I am thinking I could write a multithreaded file_sink that
> operates more cleverly than the current implementation. I am sampling
> 4e6 complex short samples per second.

What OS are you using?
What filesystem are you using?

Have you tried benchmarking your filesystem i/o throughput?
How about testing for any “pauses” where nothing else can get done,
e.g., while the ext3 journal is being committed.
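
Something along these lines would measure both sustained write
throughput and worst-case stalls (a rough sketch, untested; the file
name and sizes are arbitrary):

    import os, time

    CHUNK = 1024 * 1024   # 1 MB per write
    TOTAL = 1024          # 1 GB overall

    buf = b'\0' * CHUNK
    worst = 0.0
    start = time.time()
    with open('/tmp/throughput_test.dat', 'wb') as f:
        for _ in range(TOTAL):
            t0 = time.time()
            f.write(buf)
            worst = max(worst, time.time() - t0)
        f.flush()
        os.fsync(f.fileno())  # hit the platter, not just the page cache
    elapsed = time.time() - start
    print('throughput: %.1f MB/s, worst single write: %.3f s'
          % (TOTAL / elapsed, worst))

A long “worst single write” alongside decent average throughput would
point at journal commits or other periodic stalls rather than raw disk
speed.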

> I have also tried sending the data over TCP/IP using a
> file_descriptor_sink attached to a Python socket with
> setblocking(0). Even when the receiving server just throws the data
> away (i.e., no disk access or processing), data is sometimes lost.

4e6 complex shorts/sec * 4 bytes/sample (two 16-bit components) = 16MB/s
16MB/s * 8 bits/byte = 128Mbit/s
Are you using gigabit ethernet?

Eric

Hi,

Thanks for your response. I would still like an answer to this
question:

Q1: If I use usrp_rx_cfile.py to record data, and my app never reports
“Ou” on the console, have all the bits been written to disk?

> usrp_rx_cfile.py is known to work ;-)

No doubt about it, but if the hard drive cannot keep up, I’m sure data
will be lost. My goal is to set up a system that can record data to
disk even with a slower hard drive, perhaps by creating a threaded
file_sink with a large memory buffer, roughly as sketched below.
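
The shape of what I have in mind (an untested sketch; the queue depth
and chunk handling are placeholders, and a real version would live
inside a GNU Radio sink block):

    import queue, threading

    class BufferedFileWriter(object):
        """Accept chunks on the fast path into a large in-memory queue;
        a background thread drains them to disk at its own pace."""

        def __init__(self, path, max_chunks=4096):
            self._q = queue.Queue(maxsize=max_chunks)
            self._f = open(path, 'wb')
            self._t = threading.Thread(target=self._drain)
            self._t.daemon = True
            self._t.start()

        def write(self, chunk):
            # Never blocks on the disk; raises queue.Full only if the
            # memory buffer itself is exhausted (i.e., a genuine "Ou").
            self._q.put_nowait(chunk)

        def _drain(self):
            while True:
                chunk = self._q.get()
                if chunk is None:   # sentinel: shut down
                    break
                self._f.write(chunk)

        def close(self):
            self._q.put(None)
            self._t.join()
            self._f.close()

Since CPython releases the GIL during the actual write() syscall, the
drain thread really can hide disk stalls, provided the queue is deep
enough to ride out the longest pause.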

> What OS are you using?
> What filesystem are you using?

Ubuntu 32-bit, with ext3. I will try ext2.

> Have you tried benchmarking your filesystem i/o throughput?
> How about testing for any “pauses” where nothing else can get done,
> e.g., while the ext3 journal is being committed.

I’m running on a laptop, so I suspect I am going to have problems with
hard drive speed, and my solution will need to live in the “application
layer.” But I will play with hdparm and family.
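
The quick checks I plan to start with: “hdparm -t /dev/sda” for raw
sequential read speed, and something like “dd if=/dev/zero
of=/tmp/ddtest bs=1M count=1024 conv=fsync” to time a 1 GB sequential
write (device and path are whatever applies on this laptop).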

> 16MB/s * 8 bits/byte = 128Mbit/s
> Are you using gigabit ethernet?

Yes. The network has plenty of room to spare according to “task
manager.” I’m using a crossover cable. The mystery here is why GNU
Radio gives “Ou”s when sending data over gigabit to a server that
simply throws the data away. These “Ou”s happen far less often than
the hard-drive “Ou”s.
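
One thing I still plan to try, on a hunch: enlarging the kernel socket
buffers on both ends, since a brief stall on the receiver plus a small
default send buffer could explain sporadic drops even on gigabit.
Something like (address and port are placeholders):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request a larger send buffer before connecting; the kernel may
    # cap it lower than requested (verify with getsockopt).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8 * 1024 * 1024)
    sock.connect(('192.0.2.1', 12345))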

Thanks again for your help,

Chris