Data Collection Across Multiple Machines and USRPs

Hey list, this is a question for anyone who has used USRPs and GNU Radio for synchronized data collection.

I need to collect data using USRP N210s across a wireless network for a field test. The USRPs will be connected to laptops running GNU Radio, and the laptops themselves will be networked over WiFi.

My planned collection works like this: a master computer controls the collection Python scripts on the remote machines and also runs one itself locally (a simple USRP source to throttle to file sink). The caveat is that these scripts must start within a second of each other, so I am trying to avoid delays and keep the start-time latency under 1000 ms (preferably closer to 500 ms). My initial testing of the idea at my work desk has been less than spectacular. I was using a bash script with two lines of code:

#!/bin/sh
/home/$USER/Documents/GNU_radio/data_collection.py --uhd-addr=addr=10.2.8.104 &
ssh -t $USER@Dell1 python ~/Documents/GNU_radio/data_collection.py --uhd-addr=addr=10.2.8.105 &

It is a very simple script that executes one copy locally and one remotely via SSH, and according to the saved data files the creation/modification times are off. If the save files are created from scratch, the timing is extremely close and meets expectations when it is just two scripts. If the files already exist, the modification times can differ by 1 to 5 seconds. It gets worse when I add more scripts to the shell script, like so:

#!/bin/sh
/home/$USER/Documents/GNU_radio/data_collection.py --uhd-addr=addr=10.2.8.104 &
/home/$USER/Documents/GNU_radio/data_collection.py --uhd-addr=addr=10.2.8.102 &
ssh -t $USER@Dell1 python ~/Documents/GNU_radio/data_collection.py --uhd-addr=addr=10.2.8.105 &
ssh -t $USER@Dell1 python ~/Documents/GNU_radio/data_collection.py --uhd-addr=addr=10.2.8.103 &

The times are even further off and can differ by 3 to 10 seconds. Alternatively, I can put two USRP sources feeding file sinks in the same script, which gets me back to the original two-line shell script, but then there is a 3 to 7 second gap between the data files collected by the local script and the data files collected by the remote script.

Is there a better way to collect data quickly in a synchronized fashion? I have thought about building a timing function into each GNU Radio script that starts the flowgraph on the next even second (based on a modulus of the current system time in seconds), but I want to weigh all the options first.
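For what it's worth, a minimal sketch of that even-second idea (assuming tb is the gr.top_block built elsewhere in data_collection.py; the name is only a placeholder) would be something like:

import time

def wait_for_even_second():
    # Sleep until the wall clock reaches the next even-numbered second.
    now = time.time()
    target = (int(now) // 2 + 1) * 2
    time.sleep(target - now)

wait_for_even_second()
tb.start()  # every machine releases its flowgraph on an even-second tick
tb.wait()

Of course this is only as good as the agreement between the laptops' system clocks, so the scripts could still start a fair fraction of a second apart unless something like NTP keeps those clocks disciplined.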

Thanks for your time reading,

Jon

Hey Jonathan,

when you cannot use GPSDOs, you should just sync your laptops using NTP, and then set the laptop time as the device time on the USRPs using set_time_now.

You can then agree on a specific point in time, use set_start_time on the USRP sources, and try to estimate how well-coordinated you are by cross-correlating your measurements.
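In gr-uhd that boils down to something like this sketch (device address, rate, frequency and file name are just placeholders, not a tested recipe):

import time
from gnuradio import gr, blocks, uhd

tb = gr.top_block()
src = uhd.usrp_source("addr=10.2.8.104",
                      uhd.stream_args(cpu_format="fc32", channels=[0]))
src.set_samp_rate(1e6)
src.set_center_freq(900e6)

# push the NTP-disciplined host time into the USRP's time register
src.set_time_now(uhd.time_spec(time.time()))

# agree on a wall-clock start time (here: 5 s from now, on a whole second)
# and have every receiver begin streaming at that device time
src.set_start_time(uhd.time_spec(int(time.time()) + 5))

sink = blocks.file_sink(gr.sizeof_gr_complex, "capture_104.dat")
tb.connect(src, sink)

tb.start()
time.sleep(15)   # capture window, includes the 5 s wait before streaming begins
tb.stop()
tb.wait()

Cross-correlating a short chunk of the resulting recordings (numpy.correlate is enough) then tells you how many samples apart the captures really are.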

Greetings,
Marcus


I’ll also recommend the file_meta_sink block:

http://gnuradio.org/doc/doxygen/classgr_1_1blocks_1_1file__meta__sink.html

This will store a time stamp based on the time info from the USRPs. It
should help you realign the data sets afterwards.
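If it helps, swapping it in for the plain file sink looks roughly like this (only the first three constructor arguments are shown, the rest keep their defaults; names and the rate are placeholders):

from gnuradio import gr, blocks, uhd

samp_rate = 1e6
src = uhd.usrp_source("addr=10.2.8.104",
                      uhd.stream_args(cpu_format="fc32", channels=[0]))
src.set_samp_rate(samp_rate)

# file_meta_sink writes a header with the rx_time tags coming from the USRP,
# so every capture carries its own device time stamp alongside the samples
sink = blocks.file_meta_sink(gr.sizeof_gr_complex, "capture_104.meta", samp_rate)

tb = gr.top_block()
tb.connect(src, sink)

The gr_read_file_metadata utility that ships with GNU Radio can dump those headers afterwards, which makes lining the data sets up in post-processing fairly painless.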

Tom


If you don’t have GPSDOs but need near-GPS accuracy, one option is to
get cheap USB GPS pucks and configure your GPSD instance to run an NTP
server, and point your system ntpd client at it. Then use the
“set_time_now” UHD/gr-uhd command to set the time register on multiple
radios. Finally, use the “set_start_time” command mentioned above to
schedule RX captures.

GPSD time service HOWTO: http://www.catb.org/gpsd/gpsd-time-service-howto.html
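For reference, the usual arrangement from that HOWTO is to let gpsd hand its time to ntpd through the shared-memory refclock (driver 28) and have the other laptops point at that machine. A typical ntp.conf excerpt looks roughly like this (the fudge offset depends on your particular puck, so treat the numbers as placeholders):

# gpsd shared-memory refclock: unit 0 is the coarse NMEA time from the puck
server 127.127.28.0 minpoll 4 maxpoll 4
fudge  127.127.28.0 time1 0.0 refid GPS   # time1: measured NMEA offset, puck-specific

# unit 1 is the PPS edge, if the puck provides one and gpsd can see it
server 127.127.28.1 minpoll 4 maxpoll 4 prefer
fudge  127.127.28.1 refid PPS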

Sean
