In-band signaling project overview

Hi all,

I’ve gotten an e-mail asking what exactly the in-band signaling
project is, and since I’m asking for people to help contribute… I think
that’s a fair question for me to answer :)

Overview:

We’re making significant changes in GNU Radio and the USRP FPGA to
create a control channel between the two to support packet processing.

Previously, the USRP interpreted all data sent to it over the USB bus as
samples. GNU Radio blocks also handled only fixed-length data with no
meta-data. This is a great architecture for streaming, but not for
packet-based radio. You can’t send samples to the USRP and say
“transmit at time X,” “before transmitting this packet, switch your
center channel to Y,” or “perform carrier sense before transmitting
this.”

You also received no information about the samples you were receiving, such
as the RSSI or even just a timestamp for when they were received.
Likewise, requiring fixed-length data between GNU Radio blocks is fine for
streaming, but one of the key aspects of packet processing is variable-length
data with meta-data: a priority, a timestamp for when it should
be transmitted, or simply anything you can come up with.

Together with Eric, BBN proposed and implemented a completely new GNU Radio
block, the m-block, which supports variable-length data and meta-data.
You can read about it here:
http://acert.ir.bbn.com/downloads/adroit/gnuradio-architectural-enhancements-3.pdf

It was really the first great step towards packet processing in the
architecture. But it left a gap between GNU Radio and the USRP:
there was still no per-packet control over the USRP. That’s where our
in-band signaling work comes in. All samples over the USB are now
encapsulated within a new packet structure:
http://gnuradio.org/trac/browser/gnuradio/trunk/usrp/doc/inband-signaling-usb

This allows variable-length data between GR and the USRP, and it
provides additional information about the samples. We’ve modified the
FPGA to interpret these packets, parse them, and of course… do
what they say :) The timestamp field, for example, gives you much
tighter timing: you can send samples down with a timestamp and they are
transmitted at that time.
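
Just to make the idea concrete, here is a rough, illustrative sketch of what such a packet carries. The field names and widths below are my own shorthand, not the authoritative layout (that is defined in the inband-signaling-usb document linked above):

```cpp
// Illustrative only; see the inband-signaling-usb document for the real layout.
// A conceptual view of one USB packet carrying samples plus per-packet control info.
#include <cstdint>

struct inband_packet_sketch {
    // header (hypothetical field names)
    uint16_t flags;         // e.g. start/end of burst, carrier-sense request
    uint16_t channel;       // logical channel: sample data vs. control/command
    uint32_t timestamp;     // FPGA clock tick at which to transmit (TX side),
                            // or at which the first sample arrived (RX side)
    uint16_t payload_len;   // number of valid payload bytes that follow
    uint16_t reserved;      // padding to keep this sketch word-aligned
    // payload
    int16_t  samples[248];  // interleaved I/Q samples; size here is arbitrary,
                            // chosen only to suggest a fixed-size USB transfer
};
```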

We also have carrier sense working (it’s not really reflected in that
packet structure document yet): you use in-band command packets to write an
RSSI threshold to a register, and for any packets that come down with the
carrier-sense flag set, the FPGA waits until the RSSI drops below the
threshold before transmitting. We also built in a deadline feature,
where you can specify that the FPGA wait at most X clock ticks before it
throws the samples out and moves on.
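
To make that behavior concrete, here is a small conceptual sketch of the per-packet decision described above. It is illustrative C++ pseudologic only; the real implementation lives in the FPGA’s Verilog, and the flag, register, and function names here are made up:

```cpp
#include <cstdint>

// Conceptual model of the per-packet transmit decision: wait for the channel
// to clear (RSSI below a threshold written earlier via a command packet),
// but give up after an optional deadline and drop the samples.
enum class TxResult { SENT, DROPPED };

TxResult try_transmit(bool carrier_sense_flag,
                      uint32_t rssi_threshold,   // written via an in-band command packet
                      uint32_t deadline_ticks,   // 0 means "wait forever"
                      uint32_t (*rssi_now)(),    // current RSSI estimate in the FPGA
                      void (*send_samples)())
{
    if (carrier_sense_flag) {
        uint32_t waited = 0;
        // Wait for the channel to clear: RSSI must drop below the threshold.
        while (rssi_now() >= rssi_threshold) {
            if (deadline_ticks != 0 && ++waited >= deadline_ticks)
                return TxResult::DROPPED;   // deadline hit: throw samples out, move on
        }
    }
    send_samples();                         // channel clear (or no carrier sense requested)
    return TxResult::SENT;
}
```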

It’s really opening up a whole new level of processing between GR and
the USRP that truly facilitates wireless MAC protocol development,
packet processing, per-packet control over the radio, and more precise
scheduling for TDMA.

If you have any other questions, just let me know! We hope to
officially release it soon after profiling, so again, we would
appreciate any help :)

  • George

George-

Ok, I get it. Sounds very promising.

A couple of quick questions:

  1. What about commands (meta-data) to help deal with latency? For example, ‘if you receive such-and-such request, send this ACK’?

  2. Does your new FPGA code require the next-gen USRP, with Spartan 3 FPGA? My understanding is that capacity is very limited in the Cyclone, which is an old FPGA (around yr 2002 time-frame).

-Jeff

On 10/12/07, Jeff B. [email protected] wrote:


2) Does your new FPGA code require the next-gen USRP, with Spartan 3 FPGA? My understanding is that capacity is very
limited in the Cyclone, which is an old FPGA (around yr 2002 time-frame).

If you have a single, specific application in mind, you should be able
to significantly reduce the space used in the USRP FPGA.

The original USRP FPGA was designed to be able to handle the most
minimal decimation (4 by the CIC and 2 by the halfband FIR filter, I
believe), which has the FIR computing a new output every 8 clock cycles
and an effective bandwidth of 8 MHz.

It was also designed for the CIC to handle large decimation rates,
which increases the “bit growth” of that filter. If you specifically
have a bandwidth and decimation rate you’re interested in (or even a
small range of decimation rates), then you should be able to change the
CIC filter to use significantly less space. Moreover, if you’re
decimating by more than 4 at the CIC, you can use more clock cycles and
fewer instantiated soft multipliers for the FIR filter in the FPGA.
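
For context on the “bit growth”: the internal register width of an N-stage CIC decimator with decimation rate R and differential delay M grows with the decimation rate (Hogenauer’s result), which is why supporting the full range of rates costs space. A quick worked example follows; the stage count and input width are illustrative, not necessarily the USRP’s exact parameters:

```latex
% Register growth in an N-stage CIC decimator with decimation R and
% differential delay M (Hogenauer):
\[
  B_{\text{out}} = B_{\text{in}} + \left\lceil N \log_2(RM) \right\rceil
\]
% Example with N = 4 stages, M = 1, 16-bit input samples:
%   R = 4:    B_out = 16 + 4*log2(4)   = 16 +  8 = 24 bits
%   R = 128:  B_out = 16 + 4*log2(128) = 16 + 28 = 44 bits
```

Constraining the decimation to a narrow range lets those registers, and hence the FPGA area, shrink accordingly.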

Just curious, what are you specifically looking to do? Do you need
the ranges for the CIC decimation, or just a few select values? Can
you handle having less CIC decimation and more FIR decimation? What
waveforms are you looking at that cause the latency requirements to be so tight?

Brian

Brian-

It was also designed for the CIC to handle large decimation rates,
which increases the “bit growth” of that filter. If you specifically
have a bandwidth and decimation rate you’re interested in (or even a
small range of decimation rates), then you should be able to change the
CIC filter to use significantly less space. Moreover, if you’re
decimating by more than 4 at the CIC, you can use more clock cycles and
fewer instantiated soft multipliers for the FIR filter in the FPGA.

I think the biggest concerns with the Cyclone I are the lack of multipliers
and the small amount of internal memory (26 kbyte for the EP1C12).

Just curious, what are you specifically looking to do? Do you need
the ranges for the CIC decimation, or just a few select values? Can
you handle having less CIC decimation and more FIR decimation? What
waveforms are you looking at that cause the latency requirements to be so tight?

802.11b (more specifically VoIP-over-WiFi), GSM (EDGE), and eventually
3G (HSDPA) waveforms. I’ve seen discussions and specific deadline figures
for 802.11b, but not yet for the others – do you know any web pages that
show comparisons? Thanks.

-Jeff

On 10/12/07, Jeff B. [email protected] wrote:

I think the biggest concerns with the Cyclone I are the lack of multipliers and the small amount of internal memory (26 kbyte for the EP1C12).

Understandable, but also remember that a CORDIC can perform a
multiplication if you want it pipelined - and if you have a 64MHz
clock with data coming out at 1Msps, you can perform a 64-tap FIR
filter with only 1 multiplier instantiated.
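
To illustrate the resource sharing: at a 64 MHz clock and a 1 Msps output rate there are 64 clock cycles available per output sample, so one hardware multiplier can be time-shared across all 64 taps. A conceptual C++ model of that serialized multiply-accumulate (illustrative only; the real thing would be Verilog):

```cpp
#include <array>
#include <cstdint>

// Conceptual model of a 64-tap FIR computed with a single multiplier,
// reused once per clock cycle over 64 cycles per output sample.
constexpr int NTAPS = 64;

int64_t fir_output(const std::array<int16_t, NTAPS>& delay_line,
                   const std::array<int16_t, NTAPS>& coeffs)
{
    int64_t acc = 0;   // wide accumulator, mirroring the bit growth hardware would need
    for (int tap = 0; tap < NTAPS; ++tap) {
        // In hardware, this loop body is one clock tick on the single shared
        // multiplier: 64 ticks at 64 MHz yields one output at 1 Msps.
        acc += static_cast<int32_t>(delay_line[tap]) * coeffs[tap];
    }
    return acc;
}
```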

I can see where it is a concern, but it should be able to be
circumvented relatively easily. I think a simple rake receiver might
be possible at something like 1Msps and a 64MHz clock in that FPGA.

802.11b (more specifically VoIP-over-WiFi), GSM (EDGE), and eventually 3G (HSDPA)
waveforms. I’ve seen discussions and specific deadline figures for 802.11b, but not
yet for the others – do you know any web pages that show comparisons? Thanks.

I have not seen discussions with regards to those times, but I think
GSM and HSDPA will be difficult strictly from an encryption
standpoint. I thought the algorithms used for those protocols were
under wraps and portions of the key are in your SIM card?

As a curious note, why do you think there is such a big push to implement
standards that are already out there and already implemented efficiently in
dedicated silicon?

Do you have any intention of creating your own MAC protocol?

Brian

Brian-

circumvented relatively easily. I think a simple rake receiver might
be possible at something like 1Msps and a 64MHz clock in that FPGA.

802.11b (more specifically VoIP-over-WiFi), GSM (EDGE), and eventually 3G (HSDPA)
waveforms. I’ve seen discussions and specific deadline figures for 802.11b, but not
yet for the others – do you know any web pages that show comparisons? Thanks.

I have not seen discussions with regards to those times, but I think
GSM and HSDPA will be difficult strictly from an encryption
standpoint. I thought the algorithms used for those protocols were
under wraps and portions of the key are in your SIM card?

If you own both ends (or, maybe better said, “if it’s your data in the
first place”), this is not an issue.

As a curious note, why do you think there is such a big push to implement
standards that are already out there and already implemented efficiently in
dedicated silicon?

I work mostly on the infrastructure side of things. I’m not sure why
people are pushing to build things like their own software-based GSM
receiver – that’s a good question, actually. In my area, as one example,
typical basestations cost 70 to 100k. An SDR server with a PCI/PCIe card
containing an FPGA + some DSPs could reduce this cost by a factor of 10.

Do you have any intention of creating your own MAC protocol?

No.

-Jeff
