Recovering timing after overflow

Hi,

I have been able to use stream tags to determine the accurate timing
of the first sample of the stream. However, I run into problems after
an overflow. It seems feasible to recover timing by looking for new
tags (the uhd_usrp source block applies a new tag after an overflow is
detected). However, the PMT interface is still too alien to me, and
I’m not exactly sure how to query for the sample index corresponding
to the new tag. Are there any examples anywhere? I know how to query
for the tags within some interval of samples, but I cannot get the
exact sample index corresponding to the first sample of the packet
arriving after an overflow.

juha

On 10/26/2011 03:41 AM, Juha V. wrote:

cannot get the exact sample index corresponding to the first sample of
the packet arriving after an overflow.

The tag’s offset field provides the absolute sample count of the tag.
Knowing the sample rate and the delta between the tag’s offset and your
sample’s offset, you can determine the absolute time for each subsequent
sample.

Your code was probably assuming that the time was referenced at offset
zero, but really the absolute time can be referenced at any offset.

http://gnuradio.org/cgit/gnuradio.git/tree/gnuradio-core/src/lib/runtime/gr_tags.h
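
To make the arithmetic concrete, a minimal sketch of that calculation
might look like the following, assuming the newer object-oriented tag
API (gr::tag_t with .offset and .value fields) and that the rx_time
value is a PMT tuple of (uint64 full seconds, double fractional
seconds) as gr-uhd packs it:

#include <gnuradio/tags.h>
#include <pmt/pmt.h>
#include <cstdint>

// Absolute time of an arbitrary sample, given an rx_time tag.
// sample_offset is the absolute item index of the sample of interest.
double sample_time(const gr::tag_t& rx_time_tag,
                   uint64_t sample_offset,
                   double samp_rate)
{
    // Hardware time of the tagged sample: (full seconds, fractional seconds)
    const uint64_t full_secs = pmt::to_uint64(pmt::tuple_ref(rx_time_tag.value, 0));
    const double frac_secs = pmt::to_double(pmt::tuple_ref(rx_time_tag.value, 1));

    // Samples elapsed since the tag, converted to seconds
    const double delta = double(sample_offset - rx_time_tag.offset) / samp_rate;

    return double(full_secs) + frac_secs + delta;
}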

-josh

If I understand correctly, the sample count is:

const uint64_t count = gr_tags::get_nitems(rx_time_tag);

This determines the index of the sample coming into work, which has a
new time because of overflow.
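
For reference, a minimal sketch of a block that locates the rx_time
tags in each call to work() and reports the absolute sample index they
apply to might look like this. It assumes the newer object-oriented tag
API (gr::tag_t with an .offset field; on the older API the same value
comes from gr_tags::get_nitems, as above), and the class name is
hypothetical:

#include <gnuradio/sync_block.h>
#include <gnuradio/io_signature.h>
#include <gnuradio/gr_complex.h>
#include <pmt/pmt.h>
#include <cstdint>
#include <cstdio>
#include <vector>

// Sink-style block that consumes a complex stream and prints the
// absolute sample index of every rx_time tag it sees.
class rx_time_probe : public gr::sync_block
{
public:
    rx_time_probe()
        : gr::sync_block("rx_time_probe",
                         gr::io_signature::make(1, 1, sizeof(gr_complex)),
                         gr::io_signature::make(0, 0, 0))
    {
    }

    int work(int noutput_items,
             gr_vector_const_void_star& input_items,
             gr_vector_void_star& output_items) override
    {
        // Absolute index of the first sample handed to this call of work()
        const uint64_t start = nitems_read(0);

        // Fetch only the rx_time tags that fall inside this buffer
        std::vector<gr::tag_t> tags;
        get_tags_in_range(tags, 0, start, start + noutput_items,
                          pmt::string_to_symbol("rx_time"));

        for (const gr::tag_t& tag : tags) {
            // tag.offset is the absolute index of the first sample of the
            // packet that arrived after the overflow (the re-timed sample);
            // tag.offset - start is its position within this input buffer.
            std::printf("rx_time tag at absolute sample %llu (buffer index %llu)\n",
                        (unsigned long long)tag.offset,
                        (unsigned long long)(tag.offset - start));
        }
        return noutput_items; // consume everything
    }
};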

juha

Hurray!

It worked. I like the more object-oriented approach better, but I’ll
wait for it to become stable first.

I finally have something that I have long been waiting for. I am now
able to keep track of time within a GNU Radio block even in the
presence of overflows. This comes in handy because I am using one
computer to record and analyze several simultaneous beacon satellite
receivers while at the same time running a chirp sounder at 25 MHz
bandwidth, which simultaneously looks at three transmitters. I am
effectively maxing out my low-end Intel quad-core CPU, and if I do
anything heavy on the machine, I get overflows. I just accidentally
brought the machine to a crawl by doing something heavy in R that
pushed it into swap, and got 59300 overflows on the ionosonde receiver
that was running in the background. But everything is still in sync!

In the process of figuring out how to determine the absolute time in
the most convenient way, I found that it is handy to have a “virtual
t0”. This is initially rx_time, but if I get overflows, I shift the
virtual t0 forward by the duration of the dropped samples. This way I
always get the correct time, at the granularity of one work block, by
just calculating vt0 + nitems_read(0)/sr. I might even make a mental
note to write a patch for this functionality. Another approach that I
came up with was to create a block that pads zeros in place of the
lost samples, but that was too tedious for me to do.
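
A sketch of that “virtual t0” bookkeeping might look like the
following, under the same assumptions as the earlier sketches (newer
tag API, rx_time value as a (uint64 full seconds, double fractional
seconds) tuple). Re-anchoring vt0 from every rx_time tag handles both
the initial timestamp and the shift caused by dropped samples in one
step, because the dropped samples never show up in the nitems counters:

#include <gnuradio/tags.h>
#include <pmt/pmt.h>
#include <cstdint>

// "Virtual t0": the time that absolute sample index 0 would have,
// given the samples dropped so far.
struct virtual_t0 {
    double vt0 = 0.0;
    bool valid = false;

    // Call this for every rx_time tag found in work().
    void on_rx_time(const gr::tag_t& tag, double samp_rate)
    {
        const double tag_time =
            double(pmt::to_uint64(pmt::tuple_ref(tag.value, 0))) +
            pmt::to_double(pmt::tuple_ref(tag.value, 1));
        // Re-anchor so the tagged sample lands exactly on its reported
        // time; after an overflow this absorbs the dropped samples.
        vt0 = tag_time - double(tag.offset) / samp_rate;
        valid = true;
    }

    // Time of the first sample of the current call to work(),
    // i.e. vt0 + nitems_read(0)/sr as described above.
    double block_start_time(uint64_t nitems_read0, double samp_rate) const
    {
        return vt0 + double(nitems_read0) / samp_rate;
    }
};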

juha

On 10/26/2011 08:23 AM, Juha V. wrote:

If I understand correctly, the sample count is:

const uint64_t count = gr_tags::get_nitems(rx_time_tag);

Correct, that will work. Just so you know, the tags API changed in
master to be more object-oriented, so if you are on master, it’s
my_tag.offset. But yes, this is the same call.

This determines the index of the sample coming into work, which has a
new time because of overflow.

The “uint64_t count” is the absolute index of the tag/timestamp. The
absolute index of a sample can be calculated as

this->nitems_read(0) + relative_offset

where relative_offset is the offset of the sample in your input buffer
in the current call to work.
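
As a tiny illustration of that arithmetic (with hypothetical helper
names), in both directions:

#include <cstdint>

// Absolute index of the k-th sample of the current input buffer
inline uint64_t abs_sample_index(uint64_t nitems_read0, int relative_offset)
{
    return nitems_read0 + relative_offset;
}

// The other way round: a tag's position within the current input buffer
inline int64_t rel_tag_index(uint64_t tag_offset, uint64_t nitems_read0)
{
    return int64_t(tag_offset) - int64_t(nitems_read0);
}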

-josh