Delay in tx chain

I’ve asked a similar question before, so please try and bear with me :).

Is there a way to calculate the total delay introduced by all of the
blocks downstream of yours in a chain? It might be as simple as summing
the histories of all of the blocks…
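
For what it's worth, here is the kind of thing I'm imagining; this is
only a rough sketch, and it assumes the Python bindings expose each
block's history() (for a filter block with N taps, history() is N, so
the block withholds N - 1 real samples):

# Rough sketch, untested: sum the extra samples each downstream block
# holds back.  Ignores decimation/interpolation, which would change
# the bookkeeping.
def chain_delay(downstream_blocks):
    """Total number of input samples withheld by the chain."""
    return sum(max(blk.history() - 1, 0) for blk in downstream_blocks)

# Hypothetical usage -- rrc_filter and chan_filter stand in for
# whatever blocks actually sit after the source:
#   delay = chain_delay([rrc_filter, chan_filter])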

As an illustration of why we need this, consider that the filter blocks
often set_history equal to the number of taps. This means that as soon
as a single sample gets sent through, they can begin emitting data.
However, the data that they emit isn't useful until the filter's history
is full of real samples. If we're streaming, this is fine: a minor
startup delay.

However, packet-based applications send a message and then switch back
to RX mode. The upshot is that the tail of each packet gets stuck in the
filters and isn't pushed out until the next packet comes through. This
(not end-of-packet ISI) is why we have to append at least one nonsense
byte to the end of a packet in order to make the CRCs match, and why
that end-of-packet error is deterministic rather than random.

A fix seems to be to append a number of additional samples (zero-valued
samples, perhaps) equal to the total pipeline delay to each message in a
message_source. Do we have the facility to do this? Other suggestions?
Comments?
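
To make it concrete, this is roughly what I have in mind; pad_tail is a
made-up helper, and I'm assuming the payload is held as a numpy array of
samples before it goes into the message:

import numpy as np

def pad_tail(samples, pipeline_delay):
    """Append pipeline_delay zero-valued samples so the real tail of a
    packet gets flushed through the downstream filters."""
    pad = np.zeros(pipeline_delay, dtype=samples.dtype)
    return np.concatenate((samples, pad))

With something like chain_delay() above, whatever builds the payload
would call pad_tail(pkt, chain_delay(blocks_after_source)) before
handing the message off.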

-Dan