How do I capture the time of USRP N210 samples with the host computer's system time?

Hi Folks,

We are testing 2 USRP N210 units with data gathering (using a GRC source and file sink). The command from the host to each N210 is sent at approximately the same time (referenced to NTP system time). However, the samples gathered from the 2 units appear to differ by as much as 0.4-0.5 seconds! The intention is to gather data at the 2 units at approximately the same time as the pre-programmed system-time commands. The gathering is done with the top_block.py code generated from GRC. What happens inside the USRP appears to be beyond my control here. So the natural thing to do is to time-stamp the samples with the host computer's system time. I hope that we don't have to use an external reference for the USRP. Isn't there an internal USRP clock that is continuously counting? If we can get the USRP clock counts to tie together the sample counts and the host computer's system time, we would be in good shape. Is this a good idea?

Your thoughts and help are appreciated.

LD Zhang

You can synchronize two USRPs with an external reference or a USRP MIMO cable (https://www.ettus.com/product/details/MIMO-CBL), and specify the same time to start streaming on both.

-John

Hello,

Thanks for the note below.

I am reading an older email exchange between Josh B. and a guest. Josh mentioned that there is a downstream block (in GRC, I guess) that can use the timestamp tag to decide which samples are to be stored. There also appears to be a metadata utility to use. Maybe functions such as stream tags or get_time_last_pps can get the job done. So far I have used none of these utilities. My goal here is to avoid using the PPS if possible. Since the data gathering is done infrequently, I tend to think that the PPS can be avoided. I am trying not to use the PPS, at least for this phase of the development. Yes, for the next phase with enhanced capabilities, the need for an external reference will return. But it is better not to use the PPS right now, unless I am wrong here and need re-education on the USRP. The MIMO cable is not an option, since the 2 units are not co-located.

So is it possible to use the downstream block and metadata stream tags in GRC (currently using the UHD source block and file sink block)?

Thanks very much,

LD

From: John M. [mailto:[email protected]]
Sent: Thursday, January 03, 2013 11:24 PM
To: LD Zhang
Cc: [email protected]
Subject: Re: [Discuss-gnuradio] How do I capture the time of USRP N210 samples with host computer system time?


On Thu, Jan 3, 2013 at 11:18 PM, LD Zhang [email protected] wrote:

Hi Folks,

We are testing 2 USRP N210 units with data gathering (using GRC source
and
file sink). The command from the host to the N210 is sent at the
approximately simultaneous time (referenced to system NTP time).
However,
the samples gathered from the 2 units appear to differ by as much as
0.4-0.5
seconds! The intention is to gather data at the 2 units at approximately
the
same time as the pre-programmed system time commands. The gathering is
done
with the top_block.py code generated from GRC. What happens inside the
USRP
appears to be beyond my control here. So the natural thing to do here is
to
time-stamp the samples with host computer system time. I hope that we
don’t
have to use an external reference for the USRP. Isn’t there an internal
USRP
clock that is continuously counting. If we can get the USRP clock counts
to
tie together the samples counts and the host computer system time, we
would
be in good shape. Is this a good idea?

Your thoughts and help are appreciated.

LD Zhang

Great! Thanks, Josh, for the answers to my last email. Please see my comments and questions below.

============================================================================

If you want to use the PC clock, I recommend calling set_time_now with the current PC time before scheduling streaming. This will make the USRP tick counter roughly match the PC clock.

python:
usrp_source.set_time_now(uhd.time_spec_t(time.time()))

c++:
usrp_source->set_time_now(uhd::time_spec_t(secs, micros, long(1e6)));

This way, your only time ambiguity is the variance in ethernet control packets and the difference in time between the different PCs. That should satisfy "referenced to the PC's time" anyway…
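Josh's snippet hands a fractional host timestamp straight to uhd.time_spec_t, which stores it internally as a (full integer seconds, fractional seconds) pair. A minimal pure-Python sketch of that split (split_timestamp is a made-up helper for illustration, not part of UHD):

```python
import math
import time

# uhd.time_spec_t stores a time as full integer seconds plus fractional
# seconds. This hypothetical helper mimics that split for a host
# timestamp, handy for sanity-checking what set_time_now() receives.
def split_timestamp(t):
    full_secs = int(math.floor(t))
    frac_secs = t - full_secs
    return full_secs, frac_secs

full, frac = split_timestamp(time.time())
assert 0.0 <= frac < 1.0  # the fractional part is always in [0, 1)
```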

I will try it in Python to set the USRP time. My generated top_block.py is doing: self.uhd_usrp_source_0.set_…
I suppose I will just do:
self.uhd_usrp_source_0.set_time_now(uhd.time_spec_t(time.time()))

There is, however, a question of how often one needs to set the USRP time. It would depend on how fast the USRP clock drifts with respect to the host computer's system time. I remember a figure of 2 ppm accuracy for the TCXO frequency reference. How does this translate to a rate of clock drift (in, say, microseconds of drift per hour)?
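For a rough answer to the drift question, a back-of-the-envelope bound (not a measured figure) is enough:

```python
# Worst-case drift of a +/-2 ppm oscillator relative to an ideal clock:
# a parts-per-million error accumulates linearly with elapsed time.
def drift_per_interval(ppm, interval_s):
    """Worst-case accumulated drift in seconds over interval_s."""
    return ppm * 1e-6 * interval_s

# 2 ppm over one hour is about 7.2 ms of drift -- small over a short
# acquisition, but re-setting the time before each acquisition (as Josh
# suggests below) costs nothing.
print(drift_per_interval(2, 3600))
```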


The next step would be to schedule when a stream begins for each device, so they send RX samples to the host at the "same time" (according to the USRP, anyway).

I think the USRP source block supports setting the time, and streaming at a specific time, but there isn't a way to do this at the GRC level. So you might need some hand-coded Python added to the generated flowgraph, if you are using GRC.

OK, I want to do this. But what is the command in Python? Something like:
self.uhd_usrp_source_0.rx_sample_time??? Or .schedule_time???

I do need to pass the sample-gather start time as a variable to the top_block.py script. How do I do that (I am a Matlabber who is still clumsy with Python)?

Thanks and Regards,

LD


Discuss-gnuradio mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/discuss-gnuradio

On 01/04/2013 01:55 AM, LD Zhang wrote:


So is it possible to use the downstream block and metadata stream tags in GRC (currently using the UHD source block and file sink block)?

The file sink does not save metadata, but I suspect you don't need to save the metadata if you have presumably asked the devices to stream at the same time.

-josh

On 01/04/2013 01:21 PM, LD Zhang wrote:

I will try it in Python to set the USRP time. My generated top_block.py is doing: self.uhd_usrp_source_0.set_…
I suppose I will just do:
self.uhd_usrp_source_0.set_time_now(uhd.time_spec_t(time.time()))

There is however a question of how often one needs to set the USRP time. It
would depend on how fast the USRP clock drifts with respect to host computer
system time. I remember a number of 2 ppm accuracy of the TCXO frequency
reference. How does this translate to rate of clock drift (in say
micro-seconds drift per hour)?

Doesn't really matter. The PC and USRP are on two different clocks with very different drifts. You should probably just set the time again before every acquisition. The ambiguity is probably going to be on the order of several ms anyway.

might need some hand coded python added to the generated flowgraph, if you
are using GRC.

OK, I want to do this. But what is the command to do in python? Something
like:
self.uhd_usrp_source_0.rx_sample_time??? Or .schedule_time???

There is a set_start_time() call. Setting this will affect what time the streaming begins when the flow graph starts.

Also, as an alternative, there is access to the issue_stream_cmd(). You
can leave the flow graph running and issue commands to turn streaming on
and off, schedule bursts, etc…

I do need to pass the sample gather start time as a variable to the
top_block.py script. How do I do that (I am a matlaber who is still clumsy
with python)?

I guess you could just exec the Python process to do a single acquisition, write to file, close, and return.

If you are messing w/ GRC, the parameter block will generate command
line options and top block constructor parameters for you.

-josh
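Josh's pointer about command-line options can be sketched with optparse, matching the OptionParser boilerplate GRC already emits. The --start-delay option name and its use below are illustrative assumptions, not something GRC generates for you:

```python
import time
from optparse import OptionParser

# Sketch: pass an acquisition start delay into the generated top block
# from the command line. The option name and default are made up for
# this example.
parser = OptionParser(usage="%prog: [options]")
parser.add_option("--start-delay", type="float", default=0.5,
                  help="seconds in the future to schedule streaming")
(options, args) = parser.parse_args(["--start-delay", "2.0"])

# Inside the top block you would then do something like:
#   self.uhd_usrp_source_0.set_time_now(uhd.time_spec_t(time.time()))
#   self.uhd_usrp_source_0.set_start_time(
#       uhd.time_spec_t(time.time() + options.start_delay))
print(options.start_delay)  # prints 2.0
```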

Hi Josh,

Thanks for replying. I have some thoughts on using the “set_start_time”.
Please see my comments and questions below:

=========================================================

There is a set_start_time() call. Setting this will affect what time the streaming begins when the flow graph starts.

Once the set_time_now command is performed, I consider the USRP to have the same (or approximately the same) time as the host computer, correct?

The set_start_time() call looks nice, but it requires a time passed to it. I currently don't know how to dynamically call the set_start_time() command with a variable of system time in hour:min:sec.microsecond format. I found a discussion a while ago on some problems with set_start_time. Someone was complaining that the time-out window in the cpp code is too short, so a future scheduled time did not work.

Thinking about what I need to achieve, the set_start_time command is certainly worth making work. But maybe I have an easier task here. Since the time is already sync'd, and presuming that I can get the time tag of the first sample, it is OK if the start time is off a bit (as long as it doesn't jump around by more than 0.1 seconds each time I perform the same function). All I need is to know the time of the start sample referenced to the host system time. So I say all I need right now is to get the time tag of the beginning sample, and that would suffice, just to make it easier.
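The arithmetic behind "the time tag of the beginning sample" is simple once a UHD rx_time tag is in hand: the tag gives the device time of one known sample offset, and every other sample's time follows from the sample rate. A hypothetical pure-Python helper (sample_time is not a GNU Radio API, just the formula):

```python
# Given an rx_time stream tag (device time of the sample at tag_offset),
# the time of any later sample n follows from the sample rate. This is
# plain arithmetic, independent of GNU Radio itself.
def sample_time(tag_time, tag_offset, n, samp_rate):
    """Absolute time of sample index n, given a tag at tag_offset."""
    return tag_time + (n - tag_offset) / float(samp_rate)

# Tag says sample 0 was captured at t=100.0 s; at 10 Msps, sample
# 5,000,000 arrives half a second later.
print(sample_time(100.0, 0, 5000000, 10e6))  # prints 100.5
```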

Also, as an alternative, there is access to issue_stream_cmd(). You can leave the flow graph running and issue commands to turn streaming on and off, schedule bursts, etc…

I will try to make the previous approach work before I mess with this one.

LD

On 01/04/2013 03:15 PM, LD Zhang wrote:

Once the set_time_now command is performed, I consider the USRP to have the same (or approximately the same) time as the host computer, correct?

The set_start_time() call looks nice, but it requires a time passed to it. I currently don't know how to dynamically call the set_start_time() command with a variable of system time in hour:min:sec.microsecond format.

You pass it a uhd.time_spec. Since you are using the PC's time, there is already something you may find convenient:
uhd.time_spec.get_system_time()

http://files.ettus.com/uhd_docs/doxygen/html/classuhd_1_1time__spec__t.html#a28bb1e25ad03f333078bea59d21b4854

discussion a while ago on some problems with the set_start_time. Someone was
complaining that the time-out window in the cpp code is too short so a
future scheduled time did not work.

The timeout in the current implementation of the UHD USRP source is no more. The work function just keeps going, samples or no samples.

Hello,

I tried the following command in Python:

python:
usrp_source.set_time_now(uhd.time_spec_t(time.time()))

It doesn't seem to work. Looks like the "time.time()" is wrong? I looked up an earlier example:

set_time_now(uhd::time_spec_t(0.0), 0)

The syntax looks different. But this may be doing something different from my intention, which is to sync the USRP time to the host system time. I am still searching for the right syntax for this command. Any help is appreciated.

LD

You pass it a uhd.time_spec. Since you are using the PC's time, there is already something you may find convenient:
uhd.time_spec.get_system_time()

Can you do this in the Python script that I have, or how do you do this in Python?

If not in Python, then the easier thing for me to do may still be trying to get the time of the captured start sample of the data.

Please advise.

Thanks,

For your reference, I am pasting the python code I have:

class top_block(gr.top_block):

    def __init__(self):
        gr.top_block.__init__(self, "test_1")

        ##################################################
        # Variables
        ##################################################
        self.samp_rate = samp_rate = 10000000

        ##################################################
        # Blocks
        ##################################################
        self.uhd_usrp_source_0 = uhd.usrp_source(
            device_addr="",
            stream_args=uhd.stream_args(
                cpu_format="fc32",
                channels=range(1),
            ),
        )
        self.uhd_usrp_source_0.set_subdev_spec("A:B", 0)
        self.uhd_usrp_source_0.set_samp_rate(samp_rate)
        self.uhd_usrp_source_0.set_center_freq(0, 0)
        self.uhd_usrp_source_0.set_gain(0, 0)
        # NOTE: the skip count was cut off in the original paste; 0 is a placeholder
        self.gr_skiphead_0 = gr.skiphead(gr.sizeof_gr_complex*1, 0)
        self.gr_head_0 = gr.head(gr.sizeof_gr_complex*1, 10000000)
        self.gr_file_sink_0 = gr.file_sink(gr.sizeof_gr_complex*1, "sample_sink.dat")
        self.gr_file_sink_0.set_unbuffered(True)

        ##################################################
        # Connections
        ##################################################
        self.connect((self.gr_head_0, 0), (self.gr_file_sink_0, 0))
        self.connect((self.uhd_usrp_source_0, 0), (self.gr_skiphead_0, 0))
        self.connect((self.gr_skiphead_0, 0), (self.gr_head_0, 0))

    def get_samp_rate(self):
        return self.samp_rate

    def set_samp_rate(self, samp_rate):
        self.samp_rate = samp_rate
        self.uhd_usrp_source_0.set_samp_rate(self.samp_rate)

if __name__ == '__main__':
    parser = OptionParser(option_class=eng_option, usage="%prog: [options]")
    (options, args) = parser.parse_args()
    tb = top_block()
    tb.run()

On 01/04/2013 07:03 PM, LD Zhang wrote:

set_time_now(uhd::time_spec_t(0.0), 0)



You'll have to put an:

import time

in your Python.


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium
http://www.sbrac.org

Great! Thanks. I tried importing time and it works. Now I just have to see if this is sufficient for my sample-gather timing, or if I still have to get the timestamp of the first sample using metadata and such.

LD

On 01/07/2013 04:46 PM, LD Zhang wrote:

system time. But what is the argument I should give to set_start_time?

Just a time in the near future that you can reasonably schedule in advance of starting the flow graph. Like:
uhd.time_spec(time.time() + 0.5)

  1. The other option is to get the metadata out for the samples collected.
    From what I read, it looks like one cannot do it in GRC, but one has to edit
    the cpp source code. Is there an example somewhere of how this is done?

you can write a custom block in c++ or python to deal with tags

The most complete guide is here, but it requires installing grextras. I think you can do this with more recent native GNU Radio, but there isn't a guide yet:

https://github.com/guruofquality/grextras/wiki/Blocks-Coding-Guide#wiki-stream-tags

-josh

Do I just hand-edit the set_start_time just following that command? Now the problem is that after set_time_now, the USRP time is sync'd to the system time. But what is the argument I should give to set_start_time?

If you're running this on two different computers, and using the local system clock, it'll be hard to make both USRPs really agree on what time it is. Even if the two hosts are synchronized with NTP, you'll have to do a fair bit of dancing about to make sure that they both, more or less, do the set_time_now() with the same value, and at the same time.

This is why, for precision-synchronized samples, people use a GPSDO with a 1PPS output (to allow set_time_next_pps()), and a 10 MHz refclock (so that the clocks on the two-or-more USRPs all step together at the same rate and phase).


Marcus L.
Principal Investigator
Shirleys Bay Radio Astronomy Consortium
http://www.sbrac.org

When I used

set_start_time(uhd.time_spec(time.time() + 0.5))

for both USRP units (after set_time_now), this seems to have worked. At least by visual examination, the 2 units are taking data at approximately the same time. Yeah, yeah, yeah, I know, this is never exactly accurate, and one needs the PPS for accurate and robust operation over long stretches of time. But for this phase of development we are intentionally not using the PPS reference and are relying on the USRP clock over shorter stretches of time. I know that because of this we also have to NTP-sync the 2 host computers more frequently, which makes the code a little uglier. In the long run, we will adopt the use of a PPS.

Thanks,

LD

Thanks to Josh and Marcus for their comments. The set_time_now command works! After I put it in, the earlier observed 0.5 s offset between the 2 USRPs became ~0.1 second. So there is still work to do. I guess my options are:

  1. Make the set_start_time command work. My question is how I can make it
    work in Python. I hand-edited the set_time_now command to embed it in the
    initialization part of the top_block.py code generated from GRC (which has
    worked). Do I just hand-edit the set_start_time just following that command?
    Now the problem is that after set_time_now, the USRP time is sync'd to the
    system time. But what is the argument I should give to set_start_time?

  2. The other option is to get the metadata out for the samples collected.
    From what I read, it looks like one cannot do it in GRC, but one has to edit
    the cpp source code. Is there an example somewhere of how this is done?

Thanks very much,

LD

On 01/16/2013 03:37 PM, LD Zhang wrote:

GRC as follows:

self.uhd_usrp_source_0.set_time_now(uhd.time_spec_t(time.time()))

self.uhd_usrp_source_0.set_start_time(uhd.time_spec_t(time.time() + 0.5))

How are you communicating the same start time to each device in your
setup? Suppose there were two devices, would it not be more like this:

self.uhd_usrp_source_0.set_time_now(uhd.time_spec_t(time.time()))
self.uhd_usrp_source_1.set_time_now(uhd.time_spec_t(time.time()))

#start stream time common for all N devices
start_time = uhd.time_spec_t(time.time() + 0.5)

self.uhd_usrp_source_0.set_start_time(start_time)
self.uhd_usrp_source_1.set_start_time(start_time)

-josh
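The crux of Josh's version is that time.time() is sampled once and the same value is reused for both devices; two separate calls can return slightly different times. In skeleton form (no hardware needed to see the structure):

```python
import time

# Capture the host time once...
t0 = time.time()

# ...then derive both devices' start times from that single reading,
# so they are bit-for-bit identical.
start_time_0 = t0 + 0.5
start_time_1 = t0 + 0.5
assert start_time_0 == start_time_1
```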

Hi Folks,

Sorry for trying to resurrect this topic, which was thought to be settled at one time. My earlier impression was somehow incorrect. Let me summarize the situation: basically, one wants to gather data at approximately the same time on 2 USRPs. Using 2 host computers sync'd to NTP, this appears to be feasible in principle. If they differ by 1 or 2 ms, I don't care; it's within the tolerance of the particular application.

So the quickest thing to do was to modify the top_block.py generated from GRC as follows:

self.uhd_usrp_source_0.set_time_now(uhd.time_spec_t(time.time()))

self.uhd_usrp_source_0.set_start_time(uhd.time_spec_t(time.time() + 0.5))

which basically sets the USRP to system time and schedules the data-gather start 0.5 seconds from the current time. A first test appeared to be OK, roughly. But subsequent tests show that the start-gather times differ between the 2 units by quite a bit, maybe 20-100 ms or so. So it looks like this approach does not work. (So it looks like Marcus was right when he said that one has to do some dancing around to make sure it works, though I am curious what that dancing around is.)

At this point, there appear to be 2 other approaches:

  1. Drop the GRC approach and try to use the rx_samples_to_file command.
    The order of commands I see would be:
    first: do an equivalent set_time_now command, although I am still
    searching for the exact syntax.
    second: according to what I read in an earlier topic, it seems that
    one has to make sure both rx_samples_to_file.cpp and
    rx_timed_samples.cpp are given a time_spec at the part of the code
    where the stream_cmd_t is set up. It appears that rx_timed_samples.cpp
    is already set up correctly, but not rx_samples_to_file.cpp? If so, how do I
    change rx_samples_to_file.cpp? Also, once I am done changing it, how do I
    rebuild the code suite?
    third: I suppose one does an rx_samples_to_file command with the right
    options (time in the future, etc.). But this does seem to be doing the same
    thing as I was doing before. I don't see how this approach fixes the
    problem. Please provide suggestions for solving this problem.

  2. A safer bet might be to try to get the timestamps of the data. This
    would require modifying the code that contains the metadata structure,
    which I currently don't know how to do. Also I have a question: suppose the
    code is modified correctly and the metadata structure is properly retained.
    How can this metadata be saved, either into a separate file or together
    with the captured sample streams in the same file?

Maybe there are other approaches; I am still on a path that avoids using the PPS. Your help is appreciated.

Thanks,

LD

Hi,

Please see my comment below:

On Wed, Jan 16, 2013 at 2:30 PM, Josh B. [email protected] wrote:

self.uhd_usrp_source_0.set_start_time(start_time)
self.uhd_usrp_source_1.set_start_time(start_time)

-josh

The 2 USRPs are each connected to a different computer. Each computer is sync'd in time via NTP update. Since NTP time is accurate to ~1 ms, I consider the 2 computers sync'd right after the NTP update. There is network communication (a socket signal) between the 2 computers, so that they note their system time immediately after the socket signal and schedule (rounding forward to a future integer 10-second point) to perform the same action (data gathering) at the same time in the future. That is, each schedules an amount of time and each watches its clock; when it gets to that scheduled point, it immediately launches the top_block.py script in which the same set_time_now and set_start_time commands are performed. There is no "_0" and "_1" distinction because each is operating independently. I am still scratching my head about what happens inside the USRP in grabbing the first sample of a continuous data stream. Maybe the better solution here is to grab the metadata structure, which gives the timestamp of the first sample? Since I have never messed with USRP cpp code before, I want to be careful in what I am doing. I would like to find out what cpp file to modify, how to modify it, and how to rebuild afterwards.

Thanks very much,

LD
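The "round forward to a future integer 10-second point" step described above can be sketched as follows (next_boundary is a made-up name for illustration):

```python
import math

# LD's scheme: both hosts note the time after the socket handshake and
# round forward to the next multiple of 10 seconds, so each launches the
# flowgraph at the same (NTP-synchronized) wall-clock instant.
def next_boundary(t, step=10.0):
    """Smallest multiple of `step` strictly greater than t."""
    return math.floor(t / step + 1.0) * step

print(next_boundary(1357.2))  # prints 1360.0
print(next_boundary(1360.0))  # prints 1370.0
```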

There's a profoundly variable and "jittery" amount of time that it takes to start a Python interpreter and "get things going" between any two serial invocations on the same machine, let alone on two different machines. They may well agree on what time it is (to a first-order approximation) when they both say "go", but after that, I can easily imagine the behaviour being not entirely deterministic.
