In simulation, we sometimes need to measure the latency of a packet in terms of the time needed to perform certain signal processing on it. Assume that we have a source which generates packets of a specific length. How long does it take for a packet to travel through the blocks and arrive at a particular block?
Is there any solution in GNU Radio? I think it’s possible with performance counters, but I don’t know how.
I never tire of saying that such measurements are often meaningless, as a) on general-purpose processors and operating systems anything can happen, stalling your flow graph, and b) as GNU Radio scales well on multiprocessor platforms and algorithms are steadily optimized, you can expect no two installations to exhibit the same latency. That being said, it’s still a helpful measurement when actually implementing something for a given system configuration, so here’s my advice:
Just compare the nitems_written() of an upstream block with the nitems_read() of a downstream block at single points in time, which will give you the number of items “in the flow” between these two.
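For instance (a minimal sketch, not from the original mail; the block handles and the port number are hypothetical), from a running Python flowgraph:

    # 'upstream' and 'downstream' are block objects in a running flowgraph,
    # connected (directly or through other blocks) on port 0. The difference
    # of their absolute item counters is the number of items between them.
    in_flight = upstream.nitems_written(0) - downstream.nitems_read(0)
    print("items in flight:", in_flight)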
Also, take the nitems_read() of an upstream block, and measure the host time that passes until the nitems_read() of a downstream block is greater than or equal to that number. Apply statistics.
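A rough sketch of that second measurement, polled from a Python thread outside the flowgraph (block handles again hypothetical; I read the upstream counter via nitems_written(), which serves the same purpose here):

    import time

    def time_until_consumed(upstream, downstream, port=0, poll=0.001):
        """Record the upstream item counter, then measure the host time
        that passes until the downstream block has read at least that
        many items. The polling interval limits the resolution; repeat
        the measurement and apply statistics."""
        target = upstream.nitems_written(port)
        t0 = time.time()
        while downstream.nitems_read(port) < target:
            time.sleep(poll)
        return time.time() - t0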
The easiest way to find out how long an item has been “in flight” would be to add a stream tag containing the current host time at an upstream block, and to read that tag somewhere downstream, comparing the contained timestamp with the then-current system time.
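A hedged sketch of what such a pair of blocks could look like as Python sync blocks (the block names, the tag key “tx_time” and the float stream type are my own choices for illustration, not anything prescribed by GNU Radio):

    import time
    import numpy as np
    import pmt
    from gnuradio import gr

    class time_tagger(gr.sync_block):
        """Pass samples through and attach the current host time as a
        stream tag to the first item of every work call."""
        def __init__(self):
            gr.sync_block.__init__(self, name="time_tagger",
                                   in_sig=[np.float32], out_sig=[np.float32])

        def work(self, input_items, output_items):
            output_items[0][:] = input_items[0]
            self.add_item_tag(0, self.nitems_written(0),
                              pmt.intern("tx_time"),
                              pmt.from_double(time.time()))
            return len(output_items[0])

    class latency_probe(gr.sync_block):
        """Read the 'tx_time' tags downstream and print the elapsed host
        time since they were attached."""
        def __init__(self):
            gr.sync_block.__init__(self, name="latency_probe",
                                   in_sig=[np.float32], out_sig=None)

        def work(self, input_items, output_items):
            n = len(input_items[0])
            start = self.nitems_read(0)
            for tag in self.get_tags_in_range(0, start, start + n):
                if pmt.symbol_to_string(tag.key) == "tx_time":
                    latency = time.time() - pmt.to_double(tag.value)
                    print("latency: %.3f ms" % (latency * 1e3))
            return n

Put the tagger as far upstream and the probe as far downstream as you care about, and the printed differences give you the host-time latency of that stretch of the flowgraph.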
Also, since I bet this ends up in the Google search results for “GNU Radio latency” sooner or later, a word to the uninitiated reader: usually, real time doesn’t matter in DSP systems such as GNU Radio. Samples are not processed at their “signal-theoretical” speed, but at the rate at which they become available, limited by the speed at which the processor can process them. In the GNU Radio case, this is even more evident because normally, GNU Radio lets blocks process samples en bloc, meaning that you usually see something like 4096 samples going into a block, which then processes them, outputs another chunk of samples (often of the same length), and then goes to sleep until it’s woken up to process another chunk of samples at its input.
Latency is thus strongly dependent on how big GNU Radio makes these chunks, which is something that you as a developer/user can configure, but lowering buffer sizes usually decreases efficiency, and thus doesn’t necessarily reduce latency. Generally, if you try to optimize for throughput, just write your blocks as efficiently as possible and use GNU Radio/VOLK-optimized routines as often as sensible; if you try to optimize for latency, you need to put in more thought: optimize individual buffer sizes, consider what optimal work chunk sizes are, and decide whether you want to go as far as breaking up GNU Radio’s highly modular approach.
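If you do go down the latency road, one of those knobs (a sketch with a hypothetical block handle and an arbitrarily chosen size; the call has to happen before the flowgraph is started) is the per-block output buffer cap:

    # 'my_block' is some block instance in the flowgraph 'tb'. A smaller
    # output buffer caps the chunk size the scheduler can hand downstream,
    # which tends to lower latency at the cost of throughput. The actual
    # size gets rounded by the buffer allocator.
    my_block.set_max_output_buffer(512)
    tb.start()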
I did what you recommended for measuring latency, which is defined as follows:
“the travel time of a packet through the blocks until some specified processing in one block is done”.
However, there are obstacles again. Firstly, how can I optimize the I/O buffers of my blocks to obtain minimum latency? I tried to use “set_alignment” for the output buffers; when I change it a little, the latency changes by on the order of a hundred milliseconds. The input buffers must also be sized in an efficient manner, but how?
It is important to mention that the latency here only accounts for GNU Radio latency; it does not include other delays such as hardware and UHD delays.
Thank you very much, Marcus.
As always, you explain things in detail. I knew that the latency is architecture-dependent; however, I’m looking for a rule-of-thumb measurement.
Your idea of tagging the stream with a timestamp, reading the tag downstream, and comparing it with the receive time is great.
Is there a mathematical relationship for the latency? Maybe there is something related to the Ethernet protocol for handling collisions.
On 07/09/2014 09:39, “Mostafa A.” [email protected] wrote:
The second problem I encountered, which I forgot to mention, is this: when I send multiple packets from a source block to a sink block and measure the latency for each packet, the latency increases steadily as time advances. Why is this happening?
I’m not using any protocol. This is just a GNU Radio application.
It is important for me to know whether there is any way to optimize the input/output buffers of blocks to reduce latency as much as possible. Is GNU Radio able to do so automatically?
Best,
Mostafa
On Sun, Sep 7, 2014 at 9:49 PM, Harold Daniel Moreno Urbina <[email protected]> wrote: