The coming deluge of CPU cycles

So, I read yesterday that Intel is going to start introducing quad-core
CPUs sometime late this year, rather than 2007 as originally announced.

Two questions occur to me:

o How can we best take advantage of the multiple CPU cores in GNU Radio?
   Being able to process larger bandwidths and “do stuff” with that
   bandwidth would seem to be a good goal. Like implementing a
   CDMA/GSM/whatever basestation RF processor all inside GNU Radio, for
   example. [Me, I just want to be able to process radio astronomy data
   at higher bandwidths :-)].

o Is it time to think about moving away from USB for USRP?  Perhaps to
   PCI-X 2.0, or PCI-Express?


Marcus L. Mail: Dept 1A12, M/S: 04352P16
Security Standards Advisor Phone: (ESN) 393-9145 +1 613 763 9145
Strategic Standards
Nortel Networks [email protected]

On Wed, Jul 26, 2006 at 12:53:06PM -0400, Marcus L. wrote:

  example.  [Me, I just want to be able to process radio astronomy data 
  at higher bandwidths :-)].

I see two paths that can get us there:

(1) dynamic partitioning of the flow graph across processors in
SMP/multi-core machines.

(2) m-blocks dynamically scheduled across processors on SMP/multi-core
machines.

Once the number of cores gets sufficiently large (8?), I think we start
moving to a thread / block model.
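
As a purely illustrative sketch of that thread / block model (this is not
the GNU Radio scheduler; the block functions and queue sizes are made up),
something like one Python thread per block, connected by bounded queues.
CPython’s GIL means this only shows the structure; real multi-core gains
would need native block code or separate processes.

    import queue
    import threading

    def run_block(work, inq, outq):
        """Pull items from inq, apply work(), push results to outq."""
        while True:
            item = inq.get()
            if item is None:              # sentinel: propagate shutdown downstream
                if outq is not None:
                    outq.put(None)
                break
            result = work(item)
            if outq is not None:
                outq.put(result)

    # Hypothetical two-block pipeline fed by the main thread: filter -> sink
    q1, q2 = queue.Queue(maxsize=64), queue.Queue(maxsize=64)

    threads = [
        threading.Thread(target=run_block, args=(lambda x: x * 2, q1, q2)),  # "filter"
        threading.Thread(target=run_block, args=(print, q2, None)),          # "sink"
    ]
    for t in threads:
        t.start()

    for sample in range(8):               # stand-in for a real sample stream
        q1.put(sample)
    q1.put(None)                          # shut the pipeline down
    for t in threads:
        t.join()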

o Is it time to think about moving away from USB for USRP? Perhaps to
PCI-X 2.0, or PCI-Express?

Or perhaps Express Card. That would retain the laptop’s portability
advantage.

Eric

Eric B. wrote:

a thread / block model.

There are also cases where multi-threading within a block might be
beneficial. Large FFT filters, for example. There’s an “inflection point”
where the cost of setting up a parallel instance of a filter is well paid
for by computing it in parallel. But I’m no expert on such things.
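
A rough sketch of that inflection point, assuming NumPy is available (the
toy filter here is just a circular convolution in the frequency domain, not
a proper overlap-save FIR; segment sizes and worker count are arbitrary).
The parallel path only pays off once each segment is large enough to cover
the process-pool setup and pickling cost.

    from concurrent.futures import ProcessPoolExecutor
    import time
    import numpy as np

    def fft_filter(segment, taps_f):
        """Toy frequency-domain filter: circular convolution via FFT."""
        return np.fft.ifft(np.fft.fft(segment) * taps_f)

    if __name__ == "__main__":
        n_seg, seg_len = 8, 1 << 20                      # arbitrary sizes
        data = [np.random.randn(seg_len) + 0j for _ in range(n_seg)]
        taps_f = np.fft.fft(np.random.randn(seg_len))    # made-up filter response

        t0 = time.perf_counter()
        serial = [fft_filter(seg, taps_f) for seg in data]
        t1 = time.perf_counter()

        with ProcessPoolExecutor(max_workers=4) as pool:
            parallel = list(pool.map(fft_filter, data, [taps_f] * n_seg))
        t2 = time.perf_counter()

        print(f"serial:   {t1 - t0:.3f} s")
        print(f"parallel: {t2 - t1:.3f} s  (includes pool setup + pickling)")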

Just looked that up. That’s basically PCI-Express and USB 2.0 “extruded”
into the designed-for-laptops form factor. Definitely worth considering.


Marcus L. Mail: Dept 1A12, M/S: 04352P16
Security Standards Advisor Phone: (ESN) 393-9145 +1 613 763 9145
Strategic Standards
Nortel Networks [email protected]

On Thursday 27 July 2006 02:23, Marcus L. wrote:

So, I read yesterday that Intel is going to start introducing quad-core
CPUs sometime late this year, rather than 2007 as originally announced.

Let’s hope they fix their bus architecture first, otherwise they’ll all be
starved for memory bandwidth :-)

You can get dual or quad CPU boards and put dual core CPUs in them
already… I would suggest that would be more useful since (for AMD64
anyway) each socket has some local RAM, which would mean less contention.

On 7/26/06, [email protected] [email protected] wrote:

I’d vote for Gigabit Ethernet as an interface. It offers the following:

Since you mention Gigabit Ethernet, I have to ask… are there any latency
issues with it?

Nikhil

I’d vote for Gigabit Ethernet as an interface. It offers the following:

  1. Place the USRP very close to the antenna.

  2. Distribute the signal to multiple computers. (Multi-cast IP)

  3. Very low cost infrastructure of Ethernet switches and cables.

  4. Ethernet is easier to use across common platforms like
    Windoze/Linux/Embedded systems.

Would an extra bit of hardware such as a PCI card with PLX’s PCI9030
breaking out to the USRP with something like an 80 wire IDE cable be
suitable for high bandwidth, low latency and lowish cost?

(http://www.plxtech.com/products/io_accelerators/PCI9030/default.htm)

You might be able to do gigabit ethernet if you just pushed out the data
to and from the USRP in plain old ethernet frames directly to a gigabit
ethernet port on the PC. No IP headers and no switches in between.
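
As a minimal sketch of that idea (Linux only, needs root; the interface
name, MAC addresses and EtherType below are made up for illustration), a
raw AF_PACKET socket lets you exchange plain Ethernet frames with no IP
headers at all:

    import socket
    import struct

    ETH_P_CUSTOM = 0x88B5      # IEEE "local experimental" EtherType; arbitrary choice
    IFACE = "eth0"             # hypothetical interface wired to the device

    # AF_PACKET raw sockets are Linux-only and require root privileges.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_CUSTOM))
    s.bind((IFACE, ETH_P_CUSTOM))

    dst = bytes.fromhex("020000000001")    # made-up, locally administered MACs
    src = bytes.fromhex("020000000002")

    payload = b"\x00" * 1024               # stand-in for a block of I/Q samples
    frame = dst + src + struct.pack("!H", ETH_P_CUSTOM) + payload
    s.send(frame)

    reply = s.recv(2048)                   # whole frame back, header included
    print(len(reply), "bytes received")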

I know someone who used gigabit ethernet driver chips hooked to an FPGA
in order to push lots of digitised SVGA video data down a long length
of CAT5e for a KVM application.

On Thu, Jul 27, 2006 at 04:09:24PM +1000, Jason Hecker wrote:

Would an extra bit of hardware such as a PCI card with PLX’s PCI9030
breaking out to the USRP with something like an 80 wire IDE cable be
suitable for high bandwidth, low latency and lowish cost?

(http://www.plxtech.com/products/io_accelerators/PCI9030/default.htm)

That’s one route, though 32-bit 33-MHz PCI is pretty much the bottom
of the barrel these days. Hence the interest in PCI-Express and/or
Express Card.

You might be able to do gigabit ethernet if you just pushed out the data
to and from the USRP in plain old ethernet frames directly to a gigabit
ethernet port on the PC. No IP headers and no switches in between.

Yes, there’s no particular problem doing Gig E. You can run it 100m
with CAT5e and 10+ km over fiber. Both of these are good for putting
the USRP on the tower, next to the LNA and PA.
http://en.wikipedia.org/wiki/Gigabit_Ethernet

I know someone who used gigabit ethernet driver chips hooked to an FPGA
in order to push lots of digitised SVGA video data down a long length
of CAT5e for a KVM application.

Eric

On Wed, Jul 26, 2006 at 10:11:14PM -0400, Nikhil wrote:

On 7/26/06, [email protected] [email protected] wrote:

I’d vote for Gigabit Ethernet as an interface. It offers the following:

Since you mention Gigabit Ethernet, I have to ask… are there any latency
issues with it?

Latency with Gig E should be less than we’re currently seeing with USB,
given the higher data rate. Without a doubt, a bus-interfaced USRP would
have lower latency; however, there are lots of trade-offs.
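
For a ballpark on one component of that latency, the per-frame
serialization time (the USB figure is a rough effective rate for the
USRP’s USB 2.0 path, so treat both as rough numbers; buffering in the
drivers and FPGA will usually dominate):

    frame_bytes = 1500                     # roughly a full Ethernet frame
    for name, bits_per_s in [("USB 2.0 (rough effective rate)", 256e6),
                             ("Gigabit Ethernet", 1e9)]:
        t_us = frame_bytes * 8 / bits_per_s * 1e6
        print(f"{name}: {t_us:.1f} us to serialize {frame_bytes} bytes")
    # Gigabit Ethernet works out to ~12 us per 1500-byte frame on the wire.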

Eric

On Thursday 27 July 2006 15:39, Jason Hecker wrote:

Would an extra bit of hardware such as a PCI card with PLX’s PCI9030
breaking out to the USRP with something like an 80 wire IDE cable be
suitable for high bandwidth, low latency and lowish cost?

(http://www.plxtech.com/products/io_accelerators/PCI9030/default.htm)

A 9030 is target-only; you’d need a 9054, 9056, 9060 or 9080, otherwise
the performance would not be very good.

Another option is to use a PCI soft core but then you run into licensing
issues.

I have looked at these for work, and one problem (for us anyway) is that
the S/G engine doesn’t treat the data as precious, so it’s not very useful
for reading from a FIFO. It’s quite frustrating, because that means you
have a chip with an S/G engine built in but you have to make your own
anyway :-(

I’d love to be shown to be wrong though ;-)

I know someone who used gigabit ethernet driver chips hooked to an FPGA
in order to push lots of digitised SVGA video data down a long length
of CAT5e for a KVM application.

Was that really ethernet framing? Or just using the CAT5 cable as 4
differential pairs?
(Not that there’s anything wrong with that - I imagine you’d still get
good cable lengths with the right drivers.)

Was that really ethernet framing? Or just using the CAT5 cable as 4
differential pairs?

No, from memory he just used the driver chips and implemented his own
bit toggling and framing magic in the FPGA. He just used the drivers
and transformers to get the data on and off the cable.

If you did use ethernet framing you could just use any old gigabit
ethernet card to talk to USRP.

The last time this came up, I think the problem was finding a gigabit
chipset that didn’t require a reflow oven and/or a 6-layer PCB… it’s been
a few months, it may be worth looking at again.

R C

(http://www.plxtech.com/products/io_accelerators/PCI9030/default.htm)

That’s one route, though 32-bit 33-MHz PCI is pretty much the bottom
of the barrel these days. Hence the interest in PCI-Express and/or
Express Card.

True. At a glance I couldn’t see any alternative from PLX for faster
PCI busses. PLX’s chips are fairly inexpensive though. As an
alternative could you use an FPGA as the PCI-Express to USRP MkII
interface?

the USRP on the tower, next to the LNA and PA.

You might be able to implement power-over-ethernet to the USRP with this
method as well. No need to implement 802.3af (I think that’s the spec).
Even though there are no spare pairs to use for a DC feed on a GigE CAT5
cable, there are ethernet transformers which can isolate and insert a
common-mode DC current onto a pair of CAT5 wires.

On Wed, 26 Jul 2006, Marcus L. wrote:
[…]

o Is it time to think about moving away from USB for USRP?  Perhaps to
   PCI-X 2.0, or PCI-Express?

http://comsec.com/wiki?USRPnotUSB


Stephane

PS: I would cast my vote for GigE.

On Thu, Jul 27, 2006 at 04:02:00PM +0930, Daniel O’Connor wrote:

On Thursday 27 July 2006 15:39, Jason Hecker wrote:

Would an extra bit of hardware such as a PCI card with PLX’s PCI9030
breaking out to the USRP with something like an 80 wire IDE cable be
suitable for high bandwidth, low latency and lowish cost?

(http://www.plxtech.com/products/io_accelerators/PCI9030/default.htm)

A 9030 is target only, you’d need a 9054, 9056, 9060 or 9080 otherwise the
performance would not be very great.

Good point. I’ve written drivers using the 9080 before; it was pretty
easy to use. The scatter/gather stuff worked fine for me, at least
from the point of view of the host side.

I know someone who used gigabit ethernet driver chips hooked to an FPGA
in order to push lots of digitised SVGA video data down a long length
of CAT5e for a KVM application.

Was that really ethernet framing? Or just using the CAT5 cable as 4
differential pairs?
(Not that there’s anything wrong with that - I imagine you’d still get good
cable lengths with the right drivers)

Eric

On Thursday 27 July 2006 16:09, Jason Hecker wrote:

That’s one route, though 32-bit 33-MHz PCI is pretty much the bottom
of the barrel these days. Hence the interest in PCI-Express and/or
Express Card.

True. At a glance I couldn’t see any alternative from PLX for faster
PCI busses. PLX’s chips are fairly inexpensive though. As an
alternative could you use an FPGA as the PCI-Express to USRP MkII
interface?

PLX don’t have any “IO Accelerators” for PCI Express.

They do make a PCI Express to PCI converter though ;-)

I dunno how much PCI-e soft cores cost, but it looks like one of the
“easier” routes to doing PCI-e :-( You need an interface chip (BGA…)
unless you’re using a Virtex-4 though (dunno what that translates to in
Altera-land).

On Thu, Jul 27, 2006 at 04:39:43PM +1000, Jason Hecker wrote:

(http://www.plxtech.com/products/io_accelerators/PCI9030/default.htm)

That’s one route, though 32-bit 33-MHz PCI is pretty much the bottom
of the barrel these days. Hence the interest in PCI-Express and/or
Express Card.

True. At a glance I couldn’t see any alternative from PLX for faster
PCI busses. PLX’s chips are fairly inexpensive though. As an
alternative could you use an FPGA as the PCI-Express to USRP MkII
interface?

You could, but I think you burn up a pretty good chunk of a big FPGA
doing it. There are PCI-Express to PCI-X (and/or PCI) bridge chips.
I believe this is one way (short of a custom ASIC) to make this all
work. Then at least your card gets all the bandwidth available from
PCI-X. 64-bit 66-MHz or 100-MHz gets pretty reasonable ;-)
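
For reference, the peak numbers behind that remark, ignoring protocol
overhead (the PCI Express figure is a single x1 lane after 8b/10b coding):

    buses = {
        "PCI 32-bit / 33 MHz":               32 * 33e6,
        "PCI-X 64-bit / 66 MHz":             64 * 66e6,
        "PCI-X 64-bit / 100 MHz":            64 * 100e6,
        "PCI Express x1 (2.5 GT/s, 8b/10b)": 2.5e9 * 8 / 10,
    }
    for name, bits_per_s in buses.items():
        print(f"{name}: {bits_per_s / 8 / 1e6:.0f} MB/s peak")
    # Roughly 132, 528, 800 and 250 MB/s respectively.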

the USRP on the tower, next to the LNA and PA.

You might be able to implement power-over-ethernet to the USRP with this
method as well. No need to implement 802.3af (I think that’s the
spec.). Even though there are no spare pairs to use for a DC feed on a
GigE CAT5 cable there are ethernet transformers which can isolate and
insert a common mode DC current onto a pair of CAT5 wires.

I think it would be hard to get enough power for the PA over the CAT5.

Eric

The cost for the Xilinx PCI Express LogiCORE was $25,000, the last time
I looked. It may have dropped to $20,000. It can be used on a Virtex 2P
(for x1 and x4 operation) or a Virtex 4 (for x1, x4, and x8).

The alternative is to use a PCIe PHY chip and then supply a PCIe Link
Layer / Transaction Layer softcore. Xilinx offers this solution for x1,
using a Philips PHY. I am not sure of the cost, but it’s probably in the
neighborhood of $5k.

While PCI Express is desirable for bandwidth, it is cost-prohibitive and
somewhat difficult to implement for both PCB layout and FPGA code
generation. It is preferable to PCI-X, however, since it is compatible
with a wide variety of new systems.

Since software radio is often used in embedded environments, it seems to
me that the interface chosen must be among the most common interfaces
available today. Thus, a 10/100/1000 Ethernet interface would enable the
software radio to be plugged into a variety of systems, including
laptops, embedded systems, and legacy systems.

Remember that if you want the bandwidth of PCI Express, you can always
use multiple SDRs, a Gigabit Ethernet switch, and multiple NICs in the
host PC. This would still be significantly cheaper than the cost
associated with a PCI Express license. Of course, this same argument can
be applied to USB.

–Alex

On Friday 28 July 2006 01:39, Eric B. wrote:

method as well. No need to implement 802.3af (I think that’s the
spec.). Even though there are no spare pairs to use for a DC feed on a
GigE CAT5 cable there are ethernet transformers which can isolate and
insert a common mode DC current onto a pair of CAT5 wires.

I think it would be hard to get enough power for the PA over the CAT5.

From what I can see, you can draw up to 13 W…
That said, running a power cable to your antenna is not terribly onerous
IMO.

On Friday 28 July 2006 01:33, Eric B. wrote:

A 9030 is target only, you’d need a 9054, 9056, 9060 or 9080 otherwise
the performance would not be very great.

Good point. I’ve written drivers using the 9080 before it was pretty
easy to use. The scatter/gather stuff worked fine for me, at least
from the point of view of the host side.

Yeah, I meant to say that you COULD use it, but it would need more host
intervention and/or a reasonable amount of on-card buffering.

You can’t (as far as I can see) tell the S/G engine that it’s reading
from a FIFO and that it should treat the data as precious. Also, there is
no input to the PLX chip to allow you to gate reads (i.e. a data-available
pin).

So if you wanted to use it, you’d have to set up some local memory, copy
data into that (from the FPGA), and then signal the host when a “page” is
done so it can program the PLX chip. That means you get an interrupt
every page, which seems inefficient to me.
