Forum: GNU Radio Hard Disk Bottleneck

Vincenzo Pellegrini (Guest)
on 2007-02-09 10:11
(Received via mailing list)
Hi everybody,

just a quick question: has anybody managed so far to feed the USRP the
8 complex Msps needed to transmit an 8 MHz wide band?

My problem is that such a bandwidth amounts to 32 MiB/s of throughput
and, even though the USB2 bus and the CPU of my machine can handle it,
it looks like my IDE hard disk cannot sustain that data rate.

Can anyone please confirm whether he's been successful in something
similar, and, if so, with what kind of HD?

thanks

vincenzo
Ryan Seal (Guest)
on 2007-02-09 16:09
(Received via mailing list)
Vincenzo Pellegrini wrote:
> similar, and, if so, with what kind of HD?

You can store data at those rates with SATA-II drives. You can also set
up a RAID-0 configuration with IDE drives and should get enough
bandwidth. We actually run a more demanding system here - 8 SATA-II
drives on an Areca PCI-E card in a RAID-5 configuration - and achieve
continuous rates close to 200 MB/s, so I know it can be done.

If you format your IDE drive and run a benchmarking program over it,
you will typically see that write speed drops as the disk fills,
because the outer tracks at the start of the disk sustain a higher rate
than the inner tracks. You can look at those numbers and then create a
smaller partition, at the start of the disk, that maintains the speed
you need.
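
For a quick number, something like this (the path is just a
placeholder, and conv=fdatasync makes GNU dd flush the file before it
prints a rate):

  dd if=/dev/zero of=/data/dd_test bs=1M count=4096 conv=fdatasync

Run it on the nearly empty disk and again when the disk is mostly full
and you will see how much the rate drops toward the inner tracks.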

Also, keeping the data on its own dedicated drive is important, and we
find the XFS filesystem outperforms every other filesystem we have
tried for continuous writing.

Ryan
Eric Blossom (Guest)
on 2007-02-09 18:25
(Received via mailing list)
On Fri, Feb 09, 2007 at 10:11:48AM +0100, Vincenzo Pellegrini wrote:
> similar, and, if so, with what kind of HD?
>
> thanks
> vincenzo

Buy a new disk and controller ;)

Lots of commodity 7200 RPM SATA drives can sustain 40-60 MB/s, no
problem.  Take a look at the Seagate Barracuda 7200.10.

Zipzoomfly.com has the 300GB version for $100.
http://www.zipzoomfly.com/jsp/ProductDetail.jsp?Pr...

Eric
Brian Padalino (Guest)
on 2007-02-09 18:27
(Received via mailing list)
Has there been much discussion on building a modulator into the FPGA?
That would obviously reduce the bandwidth required to transmit some
very wideband signals.

Brian
Matt Ettus (Guest)
on 2007-02-09 18:53
(Received via mailing list)
>> similar, and, if so, with what kind of HD?
>>

The real point here is that you can generate signals on the fly much
faster than you can read them from a disk.  We have no trouble
generating the 32 megabytes per second of FM, PSK, or whatever else, in
real time.  Do you really need to store these files to a disk?
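
To make that concrete, here is a minimal sketch - not GNU Radio or USRP
library code, and the tone frequency, deviation and buffer size are
arbitrary - of what "generating on the fly" means: a toy FM modulator
that turns a 1 kHz test tone into interleaved 16-bit I/Q at 8 complex
Msps, i.e. the 32 MB/s in question.

/*
 * Sketch only, not GNU Radio / USRP library code: generate FM'd
 * complex baseband on the fly instead of reading it from disk.
 * 8 Msps complex * 4 bytes per 16-bit I/Q pair = the 32 MB/s at issue.
 * Build with:  gcc -O2 fm_sketch.c -o fm_sketch -lm
 */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    const double fs   = 8e6;    /* complex sample rate (samples/s)  */
    const double dev  = 75e3;   /* peak frequency deviation (Hz)    */
    const double tone = 1e3;    /* 1 kHz test tone as the "message" */
    double phase = 0.0, t = 0.0;
    int16_t iq[2 * 4096];       /* one buffer of interleaved I/Q    */

    for (long b = 0; b < (long)(fs / 4096); b++) {     /* ~1 s of signal */
        for (int n = 0; n < 4096; n++) {
            double msg = sin(2.0 * M_PI * tone * t);   /* message sample */
            phase += 2.0 * M_PI * dev * msg / fs;      /* FM = integrate */
            iq[2 * n]     = (int16_t)(32000.0 * cos(phase));  /* I */
            iq[2 * n + 1] = (int16_t)(32000.0 * sin(phase));  /* Q */
            t += 1.0 / fs;
        }
        fwrite(iq, sizeof(int16_t), 2 * 4096, stdout); /* to the TX path */
    }
    return 0;
}

A loop like that has no trouble keeping up, which is the point: when
you synthesize the samples, the disk drops out of the chain entirely.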

Matt
Matt Ettus (Guest)
on 2007-02-09 18:54
(Received via mailing list)
Brian Padalino wrote:
> Has there been much discussion on building a modulator into the FPGA?
> That would obviously reduce the bandwidth required to transmit some
> very wideband signals.

Why build it into the FPGA when you can do it in software?  He's running
into a bottleneck in the disk->computer link, not the computer->USRP
link.  A modulator in software solves the first, a modulator in hardware
solves the second.

Matt
Martin Dvh (Guest)
on 2007-02-11 14:20
(Received via mailing list)
Vincenzo Pellegrini wrote:
> similar, and, if so, with what kind of HD?
>
> thanks
>
> vincenzo
If I have to read or send signals I can't process in real time, I
always do one of the following.

Small files: use a ramdisk.
Big files:   use a dedicated partition at the very start of the drive,
formatted with a fast (non-journalling) filesystem, with nothing else
on it.

I have a 2.5 GB fat32 partition at the start of my drive (/dev/hda1).

For RX this can just keep up with 32 MiB/s.

For TX I have never been able to get more than 16 MiB/s, even when
using a ramdisk or a null_source.

I don't know why there is a difference between TX and RX. Maybe there
is a subtle buffering, timing or other difference in the communication
with the USRP.

Has anybody else been able to do more than 16 MiB/s on the TX side?

Greetings,
Martin
Jim Perkins (Guest)
on 2007-02-11 15:29
(Received via mailing list)
The fastest thing you can do is make a dedicated partition and read and
write it directly.  This is very simple (open(), read(), write(),
seek(), etc).  I typically dedicate the first 1024 blocks to storing
info about the files.  I use one block per file and the first block
stores info about the number of files stored.  Something like this:

block0000[0]=10;       // 10 files stored in total
block0001[0]=1024;     // file 1 starts at block 1024
block0001[1]=1001023;  // file 1 ends at block 1001023 (1,000,000 blocks
                       // of 512 bytes each, i.e. 512 megabytes)

.....

block0010[0]=xxx;      // file 10 starts at block xxx
block0010[1]=xxx;      // file 10 ends at block xxx

This gives you a very fast simple sequential file system.
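
A sketch of reading that index back might look like the following; the
partition path /dev/hda2 is just a placeholder, and I'm assuming 64-bit
entries in each index block, which the layout above doesn't actually
pin down.

/*
 * Sketch only: walk the index blocks of the scheme described above.
 * Assumptions not fixed by the description: the partition is /dev/hda2
 * and each index entry is a 64-bit block number.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define BLOCK 512

int main(void)
{
    int fd = open("/dev/hda2", O_RDONLY);        /* the raw partition */
    if (fd < 0) { perror("open"); return 1; }

    uint64_t hdr[BLOCK / sizeof(uint64_t)];

    /* block 0: hdr[0] holds the number of files stored */
    if (read(fd, hdr, BLOCK) != BLOCK) { perror("read"); return 1; }
    uint64_t nfiles = hdr[0];

    for (uint64_t i = 1; i <= nfiles && i < 1024; i++) {
        /* block i: hdr[0] = first block of file i, hdr[1] = last block */
        lseek(fd, (off_t)(i * BLOCK), SEEK_SET);
        if (read(fd, hdr, BLOCK) != BLOCK) break;
        printf("file %llu: blocks %llu..%llu\n",
               (unsigned long long)i,
               (unsigned long long)hdr[0],
               (unsigned long long)hdr[1]);
        /* to read the data itself: lseek(fd, hdr[0] * BLOCK, SEEK_SET)
           and read() sequentially through block hdr[1]                */
    }
    close(fd);
    return 0;
}

Writing would be the mirror image: append the sample data at the next
free block and update the two index entries when you close the capture.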

If you only need one file, you could dd it to the partition.  In your
code, use open() to open the partition and read() the data.  If you
want to test whether a dedicated raw partition is going to be fast
enough, you can use dd with /dev/zero or /dev/null to measure
sequential write and read speed.  You can vary the block size in dd to
figure out the optimal read or write size.
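
For example (where /dev/hda2 again stands in for the dedicated
partition - note the write test overwrites whatever is on it):

  # sequential write test: 4 GB of zeros straight onto the raw partition
  # (oflag=direct bypasses the page cache so the number is honest)
  dd if=/dev/zero of=/dev/hda2 bs=1M count=4096 oflag=direct
  # sequential read test: stream the partition back and throw it away
  dd if=/dev/hda2 of=/dev/null bs=1M count=4096

Repeat with bs=256K, bs=512K, bs=4M and so on; the smallest block size
at which the reported rate levels off is a good size for your own
read()/write() calls.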

-Jim