Hello,
the recommended way to control the latency introduced by buffering, both
in software and in hardware, is to synchronize the TX and RX streams,
i.e. to have a mechanism that emits TX samples only once the
corresponding RX samples have been received, up to a fixed maximum
latency.
My naive attempt at implementing that is the following:
from gnuradio import gr, blocks

class synchronize(gr.hier_block2):
    def __init__(self, fs, delay):
        gr.hier_block2.__init__(self, self.__class__.__name__,
                                gr.io_signature(2, 2, gr.sizeof_gr_complex),
                                gr.io_signature(1, 1, gr.sizeof_gr_complex))
        # Delay the RX stream by the maximum allowed latency, zero it,
        # and add it to the TX stream: the adder only produces output
        # once both inputs have samples, so TX is gated by RX.
        delay_line = blocks.delay(gr.sizeof_gr_complex, int(fs * delay))
        multiply = blocks.multiply_const_cc(0)
        add = blocks.add_cc()
        self.connect((self, 0), (add, 0))
        self.connect((self, 1), delay_line, multiply, (add, 1))
        self.connect(add, self)
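
For completeness, here is a minimal sketch of how the block above could be
wired into a flowgraph (assuming the synchronize class is in scope). The
example_flowgraph name, the null source/sink and the throttle are
placeholders standing in for the actual hardware source and sink, and the
10 ms figure is just an example value:

from gnuradio import gr, blocks

class example_flowgraph(gr.top_block):
    def __init__(self, fs=1e6):
        gr.top_block.__init__(self)
        tx_signal = blocks.null_source(gr.sizeof_gr_complex)   # TX samples to transmit
        rx_source = blocks.null_source(gr.sizeof_gr_complex)   # stands in for the RX hardware source
        rx_stream = blocks.throttle(gr.sizeof_gr_complex, fs)  # rate-limits the placeholder RX stream
        sync = synchronize(fs, 10e-3)                          # 10 ms maximum latency (example value)
        tx_sink = blocks.null_sink(gr.sizeof_gr_complex)       # stands in for the TX hardware sink
        self.connect(tx_signal, (sync, 0))
        self.connect(rx_source, rx_stream, (sync, 1))
        self.connect(sync, tx_sink)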
This is simple enough, but probably neither the most efficient nor the
most elegant way. Is there a better way of doing it, short of writing a
new block, perhaps still based on the delay block?
Cheers,
Daniele