Async Messages for Tx Timestamps

Hi,

I have a packet-based application where it would be useful to know
exactly when each packet has been successfully radiated by the USRP
(N200). It seems that UHD and gr-uhd already provide similar
functionality for receiving asynchronous messages when a burst has been
successfully transmitted (EVENT_CODE_BURST_ACK). However, in my
application, I’m transmitting continuously and don’t use end_of_burst
tags. Would it be possible for the USRP to post messages to the event
queue when it has transmitted a sample with a generic tag attached? If
not, will I experience an interruption of my signal or any latency if I
insert end_of_burst tags into my continuous stream of samples? Your
advice is greatly appreciated.

Thanks!
Jordan

I’m a bit confused by what you mean when you say you’re transmitting
continuously in a packet-based application. Are the packets back-to-back
in your sample stream? Or are you stuffing a bunch of zero samples in
between packets? That isn’t the cleanest way to do it. It would be
better to use start-of-burst (“tx_sob”) and end-of-burst (“tx_eob”) tags
if you’re using gnuradio.

Gnuradio end-of-burst tags will signal the UHD firmware to bring down the
TX chain. The tag gets converted into a flag in a metadata struct. This
struct is sent with a 1-sample buffer of samples when send(…) is
called in gr_uhd_usrp_sink. (See line 376 and thereabouts here:
http://gnuradio.org/cgit/gnuradio.git/tree/gr-uhd/lib/gr_uhd_usrp_sink.cc).
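
If it's useful, here's roughly what that looks like straight at the UHD
level, without gnuradio in the loop. This is just a sketch (the device
address, buffer size, and timeout are made up for the example): set
end_of_burst in the metadata, send, and then wait for the
EVENT_CODE_BURST_ACK async message.

    #include <uhd/usrp/multi_usrp.hpp>
    #include <complex>
    #include <iostream>
    #include <vector>

    int main()
    {
        uhd::usrp::multi_usrp::sptr usrp =
            uhd::usrp::multi_usrp::make(std::string("addr=192.168.10.2"));
        uhd::tx_streamer::sptr tx =
            usrp->get_tx_stream(uhd::stream_args_t("fc32"));

        //one packet's worth of samples; end_of_burst here is exactly
        //what a "tx_eob" tag turns into inside gr_uhd_usrp_sink
        std::vector<std::complex<float> > buff(1000);
        uhd::tx_metadata_t md;
        md.start_of_burst = true;
        md.end_of_burst = true;
        tx->send(&buff.front(), buff.size(), md);

        //the device posts an async message once the burst has gone out
        uhd::async_metadata_t async_md;
        while (usrp->get_device()->recv_async_msg(async_md, 0.1)){
            if (async_md.event_code ==
                uhd::async_metadata_t::EVENT_CODE_BURST_ACK){
                std::cout << "burst ACKed by the device" << std::endl;
                break;
            }
        }
        return 0;
    }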



Hope this helps!

–sean


A little background here: Jordan has a use case that requires continuous
generation of some sort of baseband carrier, which needs to be present
with or without actual data to send.

The challenge is to continually send data, but not to the extent that it
backs up the pipeline completely full of empty carrier. The transmit DUC
in the USRP pulls samples out at a fixed rate, so waiting for the
pipeline to drain 100% before useful data gets transmitted can be quite
a long wait.

  1. The solution is to implement some sort of point-to-point flow control
    between the source block and the TX DUC in the USRP. Something like
    this actually already exists and is used internally in UHD to
    implement transmit flow control. However, it reports the consumed
    sequence numbers of packets, which isn't useful to the source block
    because it can't translate between items produced and packet counts
    over the Ethernet.

  2. Now, we could make it work. This one-liner could enable the async
    messages for the usrp2 TX flow control back to the user API (a git
    diff against host/lib/usrp, posted on Pastebin). And a second
    one-liner could enforce fixed-length packets, so you could easily
    relate samples produced to packet sequence numbers consumed.

  3. Aside from that, I was also thinking of another way to do a sort of
    point-to-point flow control, but using the RX timestamps as the async
    reporting: transmit samples at a known time, and use a receive stream
    and its timestamps to judge when a certain window of samples has been
    consumed by the DUC chain.

  4. Or, depending upon the tolerance to variability, one could forgo the
    RX timestamps and periodically poll the get_time_now call on the sink
    block to establish the same concept of a time window (see the sketch
    after this list).
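
To make 4. concrete, here's a rough sketch of the polling idea (untested,
and the class name, sleep interval, and window size are all made up):
timestamp the first packet so the stream starts at a known device time,
count samples from there, and poll get_time_now to keep only a bounded
window of samples queued ahead of the DUC.

    #include <uhd/usrp/multi_usrp.hpp>
    #include <boost/thread/thread.hpp>
    #include <complex>
    #include <vector>

    //keep no more than this much signal queued ahead of the device time
    static const double MAX_INFLIGHT_SECS = 0.010;

    class time_window_pacer
    {
    public:
        time_window_pacer(uhd::usrp::multi_usrp::sptr usrp, double samp_rate):
            _usrp(usrp), _samp_rate(samp_rate), _started(false){}

        void send(uhd::tx_streamer::sptr tx,
                  const std::vector<std::complex<float> > &buff)
        {
            uhd::tx_metadata_t md;
            if (not _started){
                //timestamp only the first packet; the stream then runs
                //contiguously, so counting samples tells us where the
                //DUC "should" be at any moment
                _next_time = _usrp->get_time_now() + uhd::time_spec_t(0.1);
                md.has_time_spec = true;
                md.time_spec = _next_time;
                _started = true;
            }

            //poll the device time until the backlog window is small enough
            while ((_next_time - _usrp->get_time_now()).get_real_secs()
                   > MAX_INFLIGHT_SECS){
                boost::this_thread::sleep(boost::posix_time::milliseconds(1));
            }

            tx->send(&buff.front(), buff.size(), md);
            _next_time += uhd::time_spec_t(0, long(buff.size()), _samp_rate);
        }

    private:
        uhd::usrp::multi_usrp::sptr _usrp;
        const double _samp_rate;
        bool _started;
        uhd::time_spec_t _next_time;
    };

The 1 ms sleep and 10 ms window are arbitrary; the variability of the
get_time_now round trip is what sets how tight you can make them.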

Thoughts?

-josh

Hi Josh and Sean,

Thanks for your replies. Sorry for the confusion. As Josh mentioned,
my application is packet based, but a carrier signal is sent when
packets aren’t being transmitted.

Josh, I like your second solution. At what point are the control
messages generated? Are they generated as the samples are consumed by
the DUC?

I’m currently using your tx_pacer idea to ensure that the block buffers
(gr_vmcircbuf), network buffer, and SRAM never fill completely. This
gives me a fairly good idea about when packets are being generated and
has greatly reduced transmit latency. However, I’d like to have more
precise info about when the USRP is actually radiating the samples.

I will try your suggestion and let you know how it works out. Thanks
again for your help!

Jordan

Josh, I like your second solution. At what point are the control
messages generated? Are they generated as the samples are consumed
by the DUC?

That's correct, the patches I sent would allow you to get those messages
in a programmatic way. The message is basically the last packet sequence
number consumed by the DUC.

Amendment to the last patch, so that the sequence count gets into the
user payload field in the metadata (a git diff against
host/lib/usrp/usrp2/io_impl.cpp, posted on Pastebin).
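
Reading those messages on the host side would look something like the
sketch below. Assumptions: that the consumed sequence count lands in
user_payload[0], and the fixed samples-per-packet value, which is just
illustrative here; this is not tested against the patch.

    #include <uhd/usrp/multi_usrp.hpp>
    #include <boost/cstdint.hpp>
    #include <iostream>

    //assumed fixed samples-per-packet, enforced by the other one-liner
    static const size_t SAMPS_PER_PACKET = 363;

    void poll_consumed(uhd::usrp::multi_usrp::sptr usrp)
    {
        uhd::async_metadata_t md;
        while (usrp->get_device()->recv_async_msg(md, 0.1)){
            if (md.event_code &
                uhd::async_metadata_t::EVENT_CODE_USER_PAYLOAD){
                const boost::uint32_t seq = md.user_payload[0];
                //with fixed-length packets, sequence numbers map
                //directly back to samples consumed by the DUC
                const boost::uint64_t samps =
                    boost::uint64_t(seq) * SAMPS_PER_PACKET;
                std::cout << "DUC consumed through packet " << seq
                          << " (~" << samps << " samples)" << std::endl;
            }
        }
    }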

I will try your suggestion and let you know how it works out. Thanks
again for your help!

Thanks,
I would be delighted to hear about it.

-josh

Hey Josh,

Just wanted to let you know that everything seems to be working nicely.
I especially like the fact that we now have a mechanism for closed-loop
flow control. It’s also nice to have a better idea of when packets are
actually being radiated. Thanks again for all your help!

Jordan