Ok, I see the point here and I agree that it would be nice if a block
was notified of a "filling output buffer chain" in advance. (You
could, however, use the noutput_items parameter or nitems_written()
together with some system clock to get an idea of the rate at which
your block's output is being consumed.)
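Just to illustrate what I mean, here's a rough Python sketch (the
block name and the one-second print interval are made up by me, not
anything from GR) of a pass-through block that compares
nitems_written() against the system clock to get a coarse average of
its output rate:

import time
import numpy as np
from gnuradio import gr

class rate_probe(gr.sync_block):
    """Pass-through block that logs a coarse average of its output rate."""
    def __init__(self):
        gr.sync_block.__init__(self,
                               name="rate_probe",
                               in_sig=[np.float32],
                               out_sig=[np.float32])
        self._t0 = None
        self._last_print = None

    def work(self, input_items, output_items):
        now = time.monotonic()
        if self._t0 is None:
            self._t0 = now
            self._last_print = now
        elif now - self._last_print > 1.0:
            # nitems_written(0) counts every item ever produced on output
            # port 0, so items/elapsed is only a long-term average, not an
            # instantaneous measurement of the sink clock.
            rate = self.nitems_written(0) / (now - self._t0)
            print("average output rate: %.1f items/s" % rate)
            self._last_print = now
        output_items[0][:] = input_items[0]
        return len(output_items[0])

That tells you roughly how fast downstream is eating your samples, but
nothing more.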
However, you explained the problem very nicely: Your hardware does
something strange, e.g. repeating samples, when it goes too slow/fast.
The only way to actually account for that is to monitor the sink's
sampling clock relative to the source's sampling clock. However,
that's something you can't do in software; you need some hardware
feedback to give you a reasonably exact estimate of the frequency
offset, since you usually don't see the output of your hardware sink.
For some soundcards there may (or may not) be a way to find out how
fast a playback buffer is being drained. This is very
hardware-dependent and not easily unified in software, especially
since timing offsets on the order of microseconds are relevant here;
if your feedback mechanism involves context switches, async messaging
and computation, you won't get reliable results.
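Some back-of-the-envelope numbers (the 50 ppm and 1 ms figures are
just assumed example values, not measurements) to show what you're up
against:

fs = 48000           # nominal sink sample rate in Hz
ppm = 50             # assumed clock error of the sink, parts per million

drift_samples_per_s = fs * ppm * 1e-6        # ~2.4 samples of drift per second
drift_time_per_s = drift_samples_per_s / fs  # ~50 microseconds per second

scheduling_jitter = 1e-3                     # context switch, messaging etc.: ~1 ms

print("drift: %.1f samples/s (%.0f us per second)" %
      (drift_samples_per_s, drift_time_per_s * 1e6))
print("one jitter event masks ~%.0f seconds worth of drift" %
      (scheduling_jitter / drift_time_per_s))

So a single millisecond of scheduling jitter already swamps about 20
seconds worth of the clock drift you're trying to measure.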
Basically: if your hardware is "bad" in the sense of drifting or wrong
clocks, and you can't monitor your output, you'll be having a bad day.
This is an inherent problem of SDR, I reckon, and from my point of
view the only solution is either constraining the hardware clocks of
sinks and sources with a common master clock (which is why most SDR
peripherals do exactly that), or monitoring the sink output using the
source clock (which is just another incarnation of controlling the
sink clock with the source clock).
In software architectures like GR that rely on “large” buffers to
enable normal operating systems to run the SDR, I guess you will have
a hard time figuring out actual clock deviations just by measuring
buffer fill levels; there's so much more than just the sink clock
going astray that can make it look like latency is building up.
Again: using GR, you usually try to ignore the fact that computation
may (and will) take different amounts of time at different points in
time, even for the same block, and rely on the scheduler to call the
connected blocks' work() with suitable parameters. Measuring the
amount of data that passes through software buffers therefore won't
help you much in determining the sampling rate of a hardware sink,
except on very large time scales.
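To put a number on "very large time scales" (again, the sample rate,
clock offset and buffer size are just assumed example figures):

fs = 48000          # nominal sample rate in Hz
ppm = 50            # clock offset you want to detect, parts per million
buf_items = 4096    # typical software buffer; the fill level swings by
                    # roughly this much just from scheduling

drift_per_s = fs * ppm * 1e-6       # items of drift accumulated per second
t_needed = buf_items / drift_per_s  # seconds until drift exceeds one buffer

print("need to average over roughly %.0f seconds" % t_needed)  # ~1700 s here

So even under friendly assumptions you'd be averaging buffer levels
for the better part of half an hour before a 50 ppm offset clearly
shows up.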
On 13.12.2013 11:48, Sylvain M. wrote:
> Blocks should be able to monitor the buffer level to act in
> consequence, which is of course much better than dropping/repeating
> samples blindly.
> But to be able to use this, the codec block needs to be able to
> monitor the output buffer level so it can try to maintain it at a
> certain level.