Hi,
I’m using the USRP/DBSRX to record data for GPS. GPS tracking demands a
continuous stream of data – dropped bits make tracking impossible.
4 Msps of complex 16-bit data comes to 16 MB/s – within USB 2.0 bandwidth
and the bandwidth of my 4-disk RAID0.
I record the data using my own C++ version of cfile. I never get an
error or overrun condition from usrp_standard::read, so it appears that
everything is working fine. However, when I process the data it is
clear that data is dropped – approximately a few thousand samples every
minute or two. I typically record data for one hour at a time.
I read data from the USRP in batches of 8192 16-bit short samples.
Anyone have any tips on tracking down why I am dropping data?
Thanks,
Chris
On Tue, Jun 03, 2008 at 01:41:03PM -0700, Chris S. wrote:
clear that data is dropped – approximately a few thousand samples every
minute or two. I typically record data for one hour at a time.
Bug in your code? Are you double buffering or something? Are you
properly handling locking between the producer and consumer?
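For reference, here is a minimal sketch of the kind of producer/consumer hand-off Eric is asking about. This is not Chris's actual code; the single-slot design, buffer size, and `short` sample type are assumptions based on his description.

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// Single-slot double buffer: the producer fills one std::vector while the
// consumer drains the other, and the two are exchanged under a mutex via
// swap().  If either side skips the lock, whole batches can vanish without
// any USRP-level overrun ever being reported.
class DoubleBuffer {
public:
    explicit DoubleBuffer(std::size_t n) : slot_(n) {}

    // Producer: block until the slot is free, then swap our full buffer in.
    // `work` comes back as the buffer the consumer finished with, ready to
    // be refilled.
    void publish(std::vector<short>& work) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !full_; });
        slot_.swap(work);
        full_ = true;
        cv_.notify_one();
    }

    // Consumer: block until a full buffer is available, then swap it out.
    void take(std::vector<short>& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return full_; });
        slot_.swap(out);
        full_ = false;
        cv_.notify_one();
    }

private:
    std::vector<short> slot_;
    bool full_ = false;
    std::mutex m_;
    std::condition_variable cv_;
};
```

Because both sides wait on the same condition variable and flag, batches are handed over strictly in order; any shortcut around the lock is exactly the kind of bug that drops data silently.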
I read data from the USRP in batches of 8192 16-bit short samples.
Anyone have any tips on tracking down why I am dropping data?
Thanks,
Chris
If you’re running Linux and writing to an ext3 filesystem, try
remounting it as an ext2 filesystem. I’ve seen problems in the past
when the ext3 filesystem posts its journal; however, they showed up as
overruns – the filesystem wasn’t keeping up.
Eric
Chris-
I read data from the USRP in batches of 8192 16-bit short samples.
Anyone have any tips on tracking down why I am dropping data?
Is there a way for you to temporarily take file-write out of the
equation? I.e. can
your code look at the bitstream and know if it remains continuous /
intact?
The “every minute or two” thing makes me suspicious that some HDD-related
thing is going on. 16 MByte/sec is around 1 GByte/minute.
-Jeff
Jeff B. wrote:
Is there a way for you to temporarily take file-write out of the equation? I.e. can
your code look at the bitstream and know if it remains continuous / intact?
The “every minute or two” thing makes me suspicious that some HDD-related thing is
going on. 16 MByte/sec is around 1 GByte/minute.
Jeff,
Thanks for your recommendation. I can indeed pipe the output of my
“data gathering app” to the input of my GPS processor and see if the
problem goes away. However, I suspect the problem is not HDD related
for these reasons:
- I’m using a 4-disk RAID0 (external eSATA) drive that supposedly can
handle the throughput
- When I was disk bound in the past, I would receive USRP overrun
errors. I do not receive these errors when I am presently losing a few
thousand samples out of every 480e6
Chris
Eric B. wrote:
If you’re running Linux and writing to an ext3 filesystem, try
remounting it as an ext2 filesystem. I’ve seen problems in the past
when the ext3 filesystem posts its journal; however, they showed up as
overruns – the filesystem wasn’t keeping up.
Eric,
I used to have this problem when I was using ext3 on a slow hard drive.
The problem manifested itself as a buffer overrun. Today I use ext2,
have a 4 disk RAID0 that can handle 40MB/s sustained, and I get no
overruns.
Bug in your code? Are you double buffering or something? Are you
properly handling locking between the producer and consumer?
My code is pretty simple, so naturally I don’t think it is buggy. I am
double buffering. I will start recording with cfile and see if the
problem goes away. I could also try a lower sampling rate.
Thanks,
Chris
Chris S. wrote:
Thanks for your recommendation. I can indeed pipe the output of my
“data gathering app” to the input of my GPS processor and see if the
problem goes away.
BTW one problem with this approach is that I can only confirm that there
is lost data by post processing. The symptoms are obvious: I lose
tracking lock on a GPS satellite, but the root cause (jittery clock or
cycle-slip or missing data) can only be determined via autopsy.
Chris
Chris-
problem goes away.
Were you able to verify that?
However, I suspect the problem is not HDD related
for these reasons:
- I’m using a 4-disk RAID0 (external eSATA) drive that supposedly can
handle the throughput
- When I was disk bound in the past, I would receive USRP overrun
errors. I do not receive these errors when I am presently losing a few
thousand samples out of every 480e6
I’ve seen cases before where the drive does handle the throughput as
advertised, but
on an average basis. Under sustained, continuous write circumstances,
when the drive
reaches a new sector, multiple of sectors, or some other internal space
boundary,
extra time is taken for allocation… or something along those lines.
That’s why I
mentioned the 1 GByte figure. It’s been some time since I encountered
this so it’s
just a shot in the dark (happened when working on high speed DSP based
data
acquisition applications).
-Jeff
Jeff B. wrote:
I’ve seen cases before where the drive does handle the throughput as advertised, but
on an average basis. Under sustained, continuous write circumstances, when the drive
reaches a new sector, multiple of sectors, or some other internal space boundary,
extra time is taken for allocation…
Jeff,
Thanks again. Is it ever possible for all of these things to happen:
- write data to disk using cfile
- hard drive cannot keep up for whatever reason (new sector, etc) and
data is lost
- A USRP “overrun” condition does not occur
Thanks,
Chris
I have sampled continuously for many hours without problems. I had a
setup with the USRP synchronized to an external clock. I have then measured
a constant frequency sinusoid derived out of the same clock and
verified that the ratio of consecutive complex samples was always
constant (up to a certain error term). I have also measured carefully
timed pulses that are derived out of the same clock, and they appear
exactly in the correct place even after many hours of sampling.
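Juha's constant-ratio check can be sketched roughly as follows. This is a hypothetical standalone version, not his actual code; the tolerance argument and the choice of the first ratio as the reference step are assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// For a clean complex exponential s[n] = exp(j*2*pi*f0*n/fs), the ratio
// s[n+1]/s[n] is the constant phasor exp(j*2*pi*f0/fs).  A block of dropped
// samples shows up as a one-step jump in the phase of that ratio.
// Returns the indices n where arg(s[n+1]/s[n]) deviates from the reference
// step by more than tol radians.
std::vector<std::size_t> find_phase_jumps(
    const std::vector<std::complex<double>>& s, double tol)
{
    std::vector<std::size_t> jumps;
    if (s.size() < 3) return jumps;
    const double two_pi = 2.0 * std::acos(-1.0);
    // Use the first ratio as the reference; a real tool might use the
    // median step over the whole file instead.
    const double ref = std::arg(s[1] / s[0]);
    for (std::size_t n = 1; n + 1 < s.size(); ++n) {
        const double step = std::arg(s[n + 1] / s[n]);
        const double dev = std::remainder(step - ref, two_pi);
        if (std::fabs(dev) > tol) jumps.push_back(n);
    }
    return jumps;
}
```

Running this over a recording of a siggen tone (tuned away from DC) flags every index where samples went missing, without needing a GPS receiver as the detector.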
However, I never managed to get the normal filesink working without
dropping samples (even though others haven’t had any problems). I
suspect that the filesystem was the reason. I have only tried sampling
with ext3 and mainly XFS. Since I didn’t know any better, I wrote my
own double buffered filesink block, which is available here:
http://mep.fi/juha/gnuradio.html
There is also an example sampler.py application that can be used for
sampling data. The program saves the samples as big endian short
integers. It also conveniently chops up the data into small
processable constant length files.
juha
On Wed, Jun 04, 2008 at 09:33:33PM -0700, Chris S. wrote:
Chris
I can’t think of a case when this would happen.
What evidence do you have that data is being dropped?
Can you hook up a siggen and feed a single sinusoid to the USRP rx
daughterboard? Tune the USRP so that the sinusoid is not at DC in the
complex baseband, and log that data using your program and/or
usrp_rx_cfile. Then use octave or some other tool to look for
discontinuities in the received signal.
This would allow you to confirm or refute the existence of the
discontinuity using a much less complicated detector than your GPS
receiver.
Eric
Chris-
- hard drive cannot keep up for whatever reason (new sector, etc) and
data is lost
- A USRP “overrun” condition does not occur
That I don’t know. I just mention my experience because – if it should
turn out to
have any bearing – it would apply regardless of the file-write method
being used.
But it does sound from some of the other posts that you should be able
to get it
working.
-Jeff
USRP’s cfile utility cannot write my data without overruns, so I use
my own app which I have attached to this email in case anyone is
interested.
Just some comments on the code:
int NumBytes = rx->read(
(char*)Buffer,
n*sizeof(short),
&Overrun);
Your code doesn’t do anything if the read returns less than
n*sizeof(short) bytes. Is that possible within the gnuradio code?
pStream->write((char*)Buffer, n*sizeof(short));
Same here: the write might not write all n*sizeof(short) bytes.
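A hedged sketch of the fix Robert is suggesting, for the read side. The `Reader` template parameter is a stand-in for the object exposing `usrp_standard::read` (the real call also takes an overrun flag); the name `read_fully` is hypothetical.

```cpp
#include <cassert>
#include <cstddef>

// Keep calling read() until `want` bytes have arrived, rather than assuming
// a single call fills the whole buffer.  Returns the byte count on success,
// or the reader's error/EOF return (<= 0) as-is.
template <typename Reader>
long read_fully(Reader& rx, char* buf, std::size_t want)
{
    std::size_t got = 0;
    while (got < want) {
        long n = rx.read(buf + got, want - got);
        if (n <= 0)
            return n;  // error or end of stream; the caller must handle it
        got += static_cast<std::size_t>(n);
    }
    return static_cast<long>(got);
}
```

On the write side the same looping pattern applies to a raw file descriptor; `std::ostream::write`, by contrast, either writes everything or sets the stream's fail state, so there the check is simply whether the stream is still good afterwards.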
Robert
Robert Fitzsimons wrote:
pStream->write((char*)Buffer, n*sizeof(short));
Same here: the write might not write all n*sizeof(short) bytes.
Robert,
You’re exactly right, I assume that all the bytes are read and written!
Thank you so much; I’ll fix it right away.
Chris
Eric B. wrote:
What evidence do you have that data is being dropped?
Eric,
I know data is missing from my recorded file because the C/A code of
every GPS satellite in my collection jumps by an unexpected amount. The
C/A offset for GPS satellites should advance 1023 “chips” every ms.
Holes in GPS data are unmistakable – and I had been dealing with them
for months before I got a new hard drive setup.
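As a back-of-envelope check (the rates are assumptions: the standard 1.023 Mchip/s C/A chipping rate and Chris's 4 Msps recording rate), an observed code-phase jump can be translated into an estimate of how many samples went missing:

```cpp
#include <cassert>
#include <cmath>

// Convert an observed C/A code-phase jump (in chips) into an estimated
// number of dropped samples: chips -> seconds at 1.023 Mchips/s, then
// seconds -> samples at the recorder's sample rate fs.
double dropped_samples(double chip_jump, double fs)
{
    const double chip_rate = 1.023e6;  // GPS C/A chips per second
    return chip_jump / chip_rate * fs;
}
```

A jump of one full code period (1023 chips, i.e. 1 ms) at 4 Msps corresponds to about 4000 samples, which is the same order as the "few thousand samples" Chris reports losing.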
However, just because the data is missing from the file doesn’t mean
there is a problem with the USRP. For example, my data writing routine
can insert some junk into the file (e.g. if I write an error message to
stdout, where the USRP bits go, instead of to stderr). Or I could have
fouled up the buffering or threading.
USRP’s cfile utility cannot write my data without overruns, so I use my
own app which I have attached to this email in case anyone is
interested. I will try Juha’s recorder to see if it performs better.
Can you hook up a siggen and feed a single sinusoid to the USRP rx
daughterboard? Tune the USRP so that the sinusoid is not at DC in the
complex baseband, and log that data using your program and/or
usrp_rx_cfile. Then use octave or some other tool to look for
discontinuities in the received signal.
I will perform this test; however, I expect I will see the same results
since GPS provides me with quality sinusoids.
This would allow you to confirm or refute the existence of the
discontinuity using a much less complicated detector than your GPS
receiver.
Ahh, I see…
Thanks everyone,
Chris
On Jun 4, 2008, at 10:59 PM, Chris S. wrote:
USRP’s cfile utility cannot write my data without overruns, so I use
my own app which I have attached to this email in case anyone is
interested. I will try Juha’s recorder to see if it performs better.
FWIW (maybe you’ve fixed the problem already), I would think that if
this is the case then you’re using the cfile script wrong. Are you
using the ‘-s’ switch (to halve the required disk bandwidth)?
There is insufficient dynamic range in any GPS signal on any ordinary
mortal’s GPS antenna to use more than 8 bits.
Bob
Dan H. wrote:
USRP’s cfile utility cannot write my data without overruns, so I use
my own app which I have attached to this email in case anyone is
interested. I will try Juha’s recorder to see if it performs better.
FWIW (maybe you’ve fixed the problem already), I would think that if
this is the case then you’re using the cfile script wrong. Are you using
the ‘-s’ switch (to halve the required disk bandwidth)?
Hi Dan,
Yes, I record the data as 16-bit shorts vs. 32-bit floats. At 4 Msps
complex data, this produces 16 megabytes per second, which is within the
capabilities of my 4-disk RAID0 array. Here are the results of the
bonnie++ benchmark, which reports my drive can write at 58 megabytes per
second:
Version 1.03             ------Sequential Output------
                         -Per Chr-        --Block--
Machine           Size   K/sec   %CP      K/sec   %CP
cstankevitz-lapt  4G     34335   44       58313   5
Chris
Bob McGwier wrote:
There is insufficient dynamic range in any GPS signal on any ordinary
mortal’s GPS antenna to use more than 8 bits.
Not so if the signal is being jammed…