How to implement a synchronous source block correctly?

Over the last few days I tried to implement an RTP stream source block
(based on sync_block) and found that this simple task is not as trivial as
it seems to be, because the GNU Radio scheduler and the general data flow
are not documented (for users).

What mechanisms am I allowed to use in order to produce data synchronously
and still make the block usable in any flow graph?
Blocking/sleeping inside the work() function? I wasted a lot of time finding
out that my flow graph behaves badly not because of my block. Create “signal
source → throttle → complex to float → audio sink” and you will hear
jerky sound. Is that because there is more than one synchronous block in a
single flow chain? If so, do I have to implement two versions of my block
(sync and async), with the user responsible for selecting the correct one?
Furthermore, there is no correct way to stop the graph; instead, the work()
function must never block for more than some finite interval of time. How do
I choose that interval: 10, 50, 100 ms?.. Also note that the stop() method
doesn’t allow implementing any kind of interruption; it is only called after
the graph has already finished.
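
For reference, the failing test graph is roughly equivalent to this Python
top_block (just a sketch; the sample rate and tone parameters are my own
choices):

    from gnuradio import gr, analog, blocks, audio

    class jerky_test(gr.top_block):
        def __init__(self, samp_rate=48000):
            gr.top_block.__init__(self)
            # signal source -> throttle -> complex to float -> audio sink
            self.src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 440, 0.3)
            self.thr = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
            self.c2f = blocks.complex_to_float()
            self.snk = audio.sink(samp_rate, "", True)
            self.connect(self.src, self.thr, self.c2f, self.snk)

    if __name__ == '__main__':
        jerky_test().run()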

I consider these issues fundamental.

I hope, at least, that my remarks will help users who read the encouraging
“writing GNU Radio blocks is simple!” on the wiki and then get stuck in practice.



On Tue, Dec 03, 2013 at 03:59:34AM -0800, Artem P. wrote:

Over the last few days I tried to implement an RTP stream source block
(based on sync_block) and found that this simple task is not as trivial as
it seems to be, because the GNU Radio scheduler and the general data flow
are not documented (for users).

There’s an overview of the scheduler:
http://gnuradio.squarespace.com/blog/2013/9/15/explaining-the-gnu-radio-scheduler.html

“Users”, as you say, usually don’t need more than this to write GNU
Radio code, and most often don’t need to know anything at all about it.

How do I choose that interval: 10, 50, 100 ms?.. Also note that the stop()
method doesn’t allow implementing any kind of interruption; it is only called
after the graph has already finished.

I consider these issues fundamental.

A couple of things need clarifying:

  • You never use a throttle and a hardware clock in one flow graph (e.g.
    throttle + audio).
  • work() should never block. Sources are a bit of an exception, though,
    because blocking might be better than continuously producing no output
    if there’s nothing to produce. In this case, it’s your job to never
    produce underruns (what you called ‘jerky sound’) and to produce enough
    items often enough (see the sketch after this list).
  • I’m pretty sure you’ve misunderstood the concept of a “sync block”.
    Refer to [1] for an introduction. It merely describes the ratio of
    input and output rates. The opposite of a sync block is not an
    ‘async’ block.
  • The scheduler does all the work for you regarding the calling of work().
    You don’t need to interrupt work(). Not sure what your intentions
    were with using stop().
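
To make the source point concrete, here’s a rough, untested sketch of a
source that waits only briefly for data and otherwise hands control back to
the scheduler. The queue and the receiver thread that fills it (with float32
numpy arrays) are assumptions for illustration, not working RTP code:

    import queue
    import numpy as np
    from gnuradio import gr

    class net_source(gr.sync_block):
        def __init__(self):
            gr.sync_block.__init__(self,
                                   name="net_source",
                                   in_sig=None,
                                   out_sig=[np.float32])
            # Filled with float32 numpy arrays by a separate receiver
            # thread (not shown here).
            self._queue = queue.Queue()

        def work(self, input_items, output_items):
            out = output_items[0]
            try:
                # Wait briefly for data, then hand control back to the
                # scheduler instead of blocking indefinitely.
                chunk = self._queue.get(timeout=0.1)
            except queue.Empty:
                return 0   # produced nothing; work() will be called again
            n = min(len(out), len(chunk))
            out[:n] = chunk[:n]   # a real block would buffer the remainder
            return n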

I hope, at least, that my remarks will help users who read the encouraging
“writing GNU Radio blocks is simple!” on the wiki and then get stuck in practice.

Writing blocks is one of the things we try to document as well as
possible. The corresponding tutorial [1] has received a lot of feedback
and has been continuously updated. It also discusses most of the
questions you raised earlier.

I also hope that nothing on gnuradio.org discourages people from using
GNU Radio and writing blocks.

Martin

[1] http://gnuradio.org/redmine/projects/gnuradio/wiki/OutOfTreeModules



On Tue, Dec 03, 2013 at 11:50:44PM -0800, Artem P. wrote:

Martin B. (CEL) wrote

There’s an overview of the scheduler:

http://gnuradio.squarespace.com/blog/2013/9/15/explaining-the-gnu-radio-scheduler.html

“Users”, as you say, usually don’t need more than this to write GNU
Radio code, and most often don’t need to know anything at all about it.

If I’m not a user, but I’m also not a developer, then who am I? ;-)
I reviewed this “developer-internal” document already, and that’s why I
intentionally mentioned the user point of view.

In GNU Radio terminology, since we’re more of a library than an
application, users are people who use GNU Radio to develop their own
applications. Developers are people who actively work on GNU Radio and
improve it.

It’s the same with other libraries. Do you use Boost or are you
developing Boost? Obviously, I don’t know what you do outside this list,
but chances are you use it. Still, the only thing you’ll be using it for
is developing.

Martin B. (CEL) wrote

You never use a throttle and a hardware clock in one flow graph (e.g.
throttle + audio)

Does that mean I can use multiple synchronous blocks of the same type? I’ve
checked “audio source → audio sink” and I get the same underruns.

You still have two clocks in one flow graph. Underruns can still happen.
You may use as many gr::sync_blocks in one flow graph as you wish. They
have nothing to do with clocks.

Martin B. (CEL) wrote

work() should never block. Sources are a bit of an exception, though…

I see the audio sink doesn’t consume CPU, so it blocks too? Another
exception?..

It can block, depending on the backend, but doesn’t have to.
Note that not consuming CPU is not necessarily due to blocking. An audio
sink is quite a simple matter, from a signal processing point of view:
Incoming samples are passed on to the sound driver, then the work
function can immediately return (there’s lots of status checking in
there too).
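
As a very rough sketch of that shape (the ‘driver’ object and its write()
method here stand in for whatever backend the sink actually talks to; this
is just the idea, not the real audio sink code):

    import numpy as np
    from gnuradio import gr

    class driver_sink(gr.sync_block):
        def __init__(self, driver):
            gr.sync_block.__init__(self,
                                   name="driver_sink",
                                   in_sig=[np.float32],
                                   out_sig=None)
            self._driver = driver   # hypothetical backend object

        def work(self, input_items, output_items):
            samples = input_items[0]
            # Hand the chunk to the backend. Depending on the backend,
            # write() may or may not block until the device buffer has
            # room; either way, work() returns right after.
            self._driver.write(samples)
            return len(samples)   # all input items consumed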

Martin B. (CEL) wrote

  • I’m pretty sure you’ve misunderstood the concept of a “sync block”.
    Refer to [1] for an introduction. It merely describes the ratio of
    input and output rates. The opposite of a sync block is not an
    ‘async’ block.

No, it’s just ambiguous terminology. What should we call blocks that produce
output synchronously with some reference source?

We call them ‘source block’. The fact that some hardware clock is
somewhere in that block is irrelevant; all GNU Radio blocks are driven
by the states of their buffers.

MB

Writing blocks is one of the things we try to document as well as
possible. The corresponding tutorial [1] has received a lot of feedback
and has been continuously updated. It also discusses most of the
questions you raised earlier.

Due to the lack of documentation, this tutorial was my only source and I
learned it from cover to cover. I mentioned only non-trivial issues which
are not discussed in it. So it’s new feedback ;-)



Thank you for the fast answer.

Martin B. (CEL) wrote

There’s an overview of the scheduler:

http://gnuradio.squarespace.com/blog/2013/9/15/explaining-the-gnu-radio-scheduler.html

“Users”, as you say, usually don’t need more than this to write GNU
Radio code, and most often don’t need to know anything at all about it.

If I’m not a user, but I’m also not a developer, then who am I? ;-)
I reviewed this “developer-internal” document already, and that’s why I
intentionally mentioned the user point of view.

Martin B. (CEL) wrote

You never use a throttle and a hardware clock in one flow graph (e.g.
throttle + audio)

Does that mean I can use multiple synchronous blocks of the same type?
I’ve checked “audio source → audio sink” and I get the same underruns.

Martin B. (CEL) wrote

work() should never block. Sources are a bit of an exception, though…

I see the audio sink doesn’t consume CPU, so it blocks too? Another
exception?..

Martin B. (CEL) wrote

  • I’m pretty sure you’ve misunderstood the concept of a “sync block”.
    Refer to [1] for an introduction. It merely describes the ratio of
    input and output rates. The opposite of a sync block is not an
    ‘async’ block.

No, it’s just ambiguous terminology. What should we call blocks that produce
output synchronously with some reference source?

Martin B. (CEL) wrote

  • The scheduler does all the work for you regarding the calling of work().
    You don’t need to interrupt work(). Not sure what your intentions
    were with using stop().

My intention was to use blocking calls without timeouts (which I assumed to
be the correct approach) in work(), with a wait condition on some “stop” signal.

Martin B. (CEL) wrote

Writing blocks is one of the things we try to document as well as
possible. The corresponding tutorial [1] has received a lot of feedback
and has been continuously updated. It also discusses most of the
questions you raised earlier.

Due to the lack of documentation, this tutorial was my only source and I
learned it from cover to cover. I mentioned only non-trivial issues which
are not discussed in it. So it’s new feedback ;-)




Hi Artem,

On 04.12.2013 08:50, Artem P. wrote:

Martin B. (CEL) wrote

You never use a throttle and a hardware clock in one flow graph
(e.g. throttle + audio)

Does that mean I can use multiple synchronous blocks of the same
type?
Yes. GNU Radio is a buffered streaming architecture, so as long as the
production / consumption rates are equal on average, it should work.
If they vary over time or really differ, then you’ll end up with
over/underruns.

Martin B. (CEL) wrote

work() should never block. Sources are a bit of an exception,
though…

I see the audio sink doesn’t consume CPU, so it blocks too? Another
exception?..
The GNU Radio Block idea ™ is to have the scheduler call your work()
as soon as there are samples available for processing and the blocks
downstream can take your block’s output. Therefore, work() should be
as fast as possible.

Obviously, sources don’t have input, so the correct way to produce a
limited number of samples over time is to take your time when producing
them. So software sources (e.g. the sine wave signal source) just
produce as many samples as fast as they can, as the scheduler requests.

As for the audio source, the rate at which samples can be produced is
defined by the hardware. To produce e.g. 441 samples at 44.1 kHz you need
10 ms; there’s no way around that. So it must block for as long as it takes
to produce a reasonable / requested amount of samples for the output
buffer.

Throttle just throttles :) so you end up with backpressure that must
eventually lead to congestion, which manifests in dropped samples or
stalled flowgraphs.

Martin B. (CEL) wrote

  • I’m pretty sure you’ve misunderstood the concept of a “sync
    block”. Refer to [1] for an introduction. It merely describes the
    ratio of input and output rates. The opposite of a sync block is
    not an ‘async’ block.

No, it’s just ambiguous terminology. What should we call blocks that
produce output synchronously with some reference source?
I’d like to disagree. GR talks about “sync” blocks, and since it’s a
pure software framework, it’s obvious that the synchronous aspect is
with respect to sample flow, not to time.
Remember: there is no real realtime in general purpose signal
processing with GR.

You could say your block is time-synchronous, but seriously, since
you always fill a buffer, this is not really true; I’d call it “rate
limited by hardware”.

Martin B. (CEL) wrote

  • The scheduler does all the work for you regarding the calling of
    work(). You don’t need to interrupt work(). Not sure what your
    intentions were with using stop().

My intention was to use blocking calls without timeouts (which I assumed
to be the correct approach) in work(), with a wait condition on some “stop” signal.
Ah, OK. But stop() stops the flowgraph.

Greetings, and happy hacking!

Marcus

No, it’s just ambiguous terminology. What should we call blocks that produce
output synchronously with some reference source?
We call them ‘source block’.

OK.

developing Boost? Obviously, I don’t know what you do outside this list,
but chances are you use it. Still, the only thing you’ll be using it for
is developing.

I meant user/developer from the toolkit point of view: a user is a developer
who uses the GNU Radio API in their own applications and has no time/budget
to delve into the innards of a huge ready-to-use codebase just to learn a few
key concepts; a developer is a developer of the GNU Radio toolkit itself.

You still have two clocks in one flow graph. Underruns can still happen.

Why??? These clocks are actually the same clock. They are synchronous.
Moreover, I’m sure the throttle block and the audio hardware clock are
synchronous, because the throttle block’s timing and the audio driver’s
scheduling both derive from the same hardware generator, don’t they?

Note that not consuming CPU is not necessarily due to blocking. An audio
sink is quite a simple matter, from a signal processing point of view:
Incoming samples are passed on to the sound driver, then the work
function can immediately return (there’s lots of status checking in
there too).

That’s why I already asked what other methods exist besides blocking. If it
doesn’t prompt the OS to switch context (by means of blocking calls), then it
consumes all available CPU time (it doesn’t matter what kind of activity:
signal processing, checking status, polling hardware flags, or whatever).
So it still seems that the audio sink uses blocking.



Yes. GNU Radio is a buffered streaming architecture, so as long as the
production / consumption rates are equal on average, it should work.
If they vary over time or really differ, then you’ll end up with
over/underruns

This is exactly the problem. The rates are equal, but there are
over/underruns, even in a simple “audio source → audio sink” graph.

Remember: there is no real realtime in general purpose signal
processing with GR.

Does it mean that I will not be able to run, in real time, a simple graph
with a 48 kHz signal source, Qt GUI Sink and audio sink on a modern computer?

Ah, OK. But stop() stops the flowgraph.

No. Read my first post.

Greetings, and happy hacking!

Thanks. :) While hacking seems to be interesting, I prefer professional,
officially supported and documented approaches.




If it’s just a single source → sink system, then your buffers are too
large or your computer is way too slow.

On 04.12.2013 12:41, Artem P. wrote:




Hi again!

On 04.12.2013 11:36, Artem P. wrote:

In GNU Radio terminology, since we’re more of a library than an
application, users are people who use GNU Radio to develop their
own applications. Developers are people who actively work on GNU
Radio and improve it.

I meant user/developer from the toolkit point of view: a user is a
developer who uses the GNU Radio API in their own applications and has no
time/budget to delve into the innards of a huge ready-to-use codebase just
to learn a few key concepts; a developer is a developer of the GNU Radio toolkit itself.
Well, unfortunately, this is open source :) some things are easier to
do while others take a lot of time. Explaining how something works
internally for someone who is actually interested in it but doesn’t
have the time to read the source itself is among the things that are
hard to do well, become obsolete fast and take a lot of time; so nobody
does it.

You still have two clocks in one flow graph. Underruns can still
happen.

Why??? These clocks are actually the same clock. They are synchronous.
No. Well, yes. But no.
No. Well, yes. But no.
Again, there is no continuous stream in GNU Radio. Samples, and
especially blocks of samples as used in GR, are not like water: they
flow in chunks, and there are buffers to accumulate them.
Usually, these buffers should be automatically sized so that two
sources with the same, constant sample production rate never run out
of buffer. I don’t know what’s going wrong in your case, but that
could happen if the sample delay for one of the sources differs from
the other’s: both (let’s call them s0 and s1) start with sample 0, but
at the point in time when s0’s sample 0 “propagates” to the block where
the two sample flows are mixed together, there is, due to the delay, no
sample from s1 yet; so the scheduler has to wait until there is enough
input on both input ports of the mixing block. Depending on buffer
size and production rate, this might lead to overruns on s0, or
underruns on a potential hardware sink behind that block.

Moreover, I’m sure the throttle block and the audio hardware clock are
synchronous, because the throttle block’s timing and the audio driver’s
scheduling both derive from the same hardware generator, don’t they?
Moreover, I’m sure you still didn’t understand what throttle does:
exactly nothing useful.
It’s an artificial software hack to slow down the average sample
processing rate by blocking in work(). YOU NEVER EVER EVER DO IT in a
flowgraph with hardware. It can only lead to overflows and underflows.
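
To make it obvious what that hack amounts to, here’s a rough illustration
(this is the idea only, not the actual gr::blocks::throttle code):

    import time
    import numpy as np
    from gnuradio import gr

    class my_throttle(gr.sync_block):
        def __init__(self, samples_per_sec):
            gr.sync_block.__init__(self,
                                   name="my_throttle",
                                   in_sig=[np.float32],
                                   out_sig=[np.float32])
            self._rate = float(samples_per_sec)
            self._start = time.time()
            self._total = 0

        def work(self, input_items, output_items):
            n = min(len(input_items[0]), len(output_items[0]))
            output_items[0][:n] = input_items[0][:n]   # plain copy
            self._total += n
            # If we are ahead of the nominal sample clock, sleep in work()
            # so the long-term average rate approaches samples_per_sec.
            ahead = self._total / self._rate - (time.time() - self._start)
            if ahead > 0:
                time.sleep(ahead)
            return n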

polling hardware flags, or whatever). So it still seems that the audio
sink uses blocking.
The audio sink has to communicate with the audio hardware. However your
system does that, the audio driver interface can only consume a
defined amount of samples per second and only has a limited buffer size.
Therefore, the call that writes samples into the device is usually a
blocking one. You see: this is hardware with a fixed sampling rate,
so blocking is an effect, not a method.

Greetings,
Marcus

I realized what the problem is!!! Enlightenment came accidentally :D

A block that makes blocking calls based on time calculations can’t schedule
its processing correctly if there are other blocking calls present in the
cycle. In other words, the two blocks disturb each other, no matter whether
they are time-synchronized or not. I’ve also thought a little and realized
that it’s not even possible to make a block recover from that (operating
under the current GNU Radio scheduler implementation).

So, I guess the final solution is to make two versions of my source block:
time-synchronous and time-asynchronous. Or define this property via a
parameter.

Thanks to everyone for the responses!




By the way, this restriction is frustrating. Someone might want to make a
useful graph containing both an audio source and sink in a single chain,
but it’s impossible due to the current GNU Radio design.
As I told you before: that’s plainly not true.
There are a lot of flowgraphs that have both hardware sources and sinks.
Why yours is not working is a mystery to me, because, seriously, audio
sample rates should pose no problem for a moderately capable PC,
unless you do something complicated.

I think it would be better to implement a scheduler which does the
synchronization itself (using a software generator or some external
source provided by the user).

The scheduler is a scheduler: it schedules the calling of the work
functions. I don’t think you realize the implications of designing a
per-sample real-time signal processing framework. It basically
eradicates the possibility of having variable computational costs in
each block.
And, by the way, I think you over-estimate the real-time abilities of
modern operating systems on modern PC hardware. If you want to have a
guaranteed “sample clock” in GNU Radio, you would need HUGE amounts of
spare computational power. That’d be a waste.
GNU Radio works on blocks of samples. This implies latency, and
makes it impractical to say “ok, that single sample always comes every
x µs”, but it gives you the advantage of a buffer, so that you can
have actual hardware interaction with your SDR.

Having a hardware-defined sample rate basically does exactly what you
want.
And: if you’re looking for an innovative GNU Radio scheduler, look
into GRAS (GNU Radio Advanced Scheduler); it has a different model of
block interaction, but of course /can’t/ take the route of
time-synchronous per-sample processing. That’s just anti-SDR, to be
honest.

Greetings,
Marcus

By the way, this restriction is frustrating. Someone might want to make a
useful graph containing both an audio source and sink in a single chain,
but it’s impossible due to the current GNU Radio design.
I think it would be better to implement a scheduler which does the
synchronization itself (using a software generator or some external source
provided by the user). Optionally, of course. Maybe it wouldn’t be as
flexible in this case, but the overall effect would be better. I guess the
authors considered this variant but abandoned it for some reasons…



CMakeLists.txt files in both the root directory of your OOT module and
the lib directory. Right now, what’s likely happening is that you
aren’t linking against the libraries you need. This tutorial will
explain how to find and link against libgnuradio-fft.so.

Tom



Thanks for the document, Tom. I only just found time to proceed with it,
but I’m still getting errors.

I think the reason is that I’m using 3.6.5, which has different needs for
linking the FFT lib. I followed your document and found that 3.6 uses
GNU_CORE instead of GNU_RUNTIME, but I still got errors. For example,
‘cmake ../’ complains about not being able to find the files:

 GnuradioConfig.cmake
 gnuradio-config.cmake

but those files are not even present in the USB stick environment where
GNU Radio is installed.

I literally spent hours with Google trying to find out how to do this in
3.6; can you help me further, please? Porting my blocks to 3.7 is
something for a later stage: right now I’m trying to find a fix in 3.6
and hope to get the blocks working first…

Regards,

   Jeroen

As I told you before: that’s plainly not true.
There are a lot of flowgraphs that have both hardware sources and sinks.
Why yours is not working is a mystery to me, because, seriously, audio
sample rates should pose no problem for a moderately capable PC,
unless you do something complicated.

Hmm… It means my issue isn’t actually solved. I’m not doing anything
complicated. Could you point me to what I should check to find out what the
problem is? Is it because I run the graph in a virtual machine? The only
thing I can tell you for sure is that it’s not because of slow performance.
(I made two graphs: “audio source → my sink block (transmit to localhost)”
and “my source block (receive from localhost) → audio sink”, and it works
like a charm: the sound is clean and the CPU load is near a few percent.)
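
With GNU Radio’s stock UDP blocks standing in for my RTP sink/source blocks,
that working setup looks roughly like this (the port number and the empty
device names are arbitrary choices):

    from gnuradio import gr, blocks, audio

    SAMP_RATE = 48000
    PORT = 12345

    class tx_graph(gr.top_block):
        # audio source -> UDP sink (stand-in for my RTP sink block)
        def __init__(self):
            gr.top_block.__init__(self)
            self.src = audio.source(SAMP_RATE, "", True)
            self.snk = blocks.udp_sink(gr.sizeof_float, "127.0.0.1", PORT)
            self.connect(self.src, self.snk)

    class rx_graph(gr.top_block):
        # UDP source (stand-in for my RTP source block) -> audio sink
        def __init__(self):
            gr.top_block.__init__(self)
            self.src = blocks.udp_source(gr.sizeof_float, "0.0.0.0", PORT)
            self.snk = audio.sink(SAMP_RATE, "", True)
            self.connect(self.src, self.snk)

    if __name__ == '__main__':
        rx, tx = rx_graph(), tx_graph()
        rx.start()
        tx.start()
        tx.wait()   # runs until interrupted (Ctrl-C)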

The scheduler is a scheduler, it schedules the calling of the work
functions. I don’t think you realize the implications of designing a
per-sample real-time signal processing framework.

Maybe. Per-sample?

have actual hardware interaction with your SDR.
Why does it have to be per-sample based? There is no requirement for every
single sample to come at a fixed interval; only the average total rate is
fixed. I don’t see any conflict between “real-time” and operating on blocks
of samples at a time (even with variable size). The scheduler perfectly
manages the buffers, so I don’t see any problem with making the input
(source) buffer large enough to mitigate variable computational costs and
other system-wide events. Yes, there is unavoidable latency, until we have a
real-time OS. And what did you mean by a guaranteed sample clock? It
actually is guaranteed, otherwise things wouldn’t work cleanly. If there is
not enough computational performance to run a given task in real time, then
it will not be possible even in the current design.
I was just talking about the idea of moving all time synchronization out to
some single global place.

I think this would be a long discussion, but it’s meaningless, since you
stated that things should work as expected even in the current design. So, I
have to investigate what’s going wrong in my case…



[email protected] wrote on 2013-12-05 11:40:


Sorry for this, I may have done something stupid such that this message
ended up in the wrong thread… If somebody knows a way to delete the
message and keep the original thread clean, that’s OK with me…

On Thu, Dec 05, 2013 at 02:05:31AM -0800, Artem P. wrote:

By the way, this restriction is frustrating. Someone might want to make a useful graph
containing both an audio source and sink in a single chain, but it’s impossible
due to the current GNU Radio design.

It’s not impossible. The very first thing I did in GNU Radio (~6 years
ago)
was feed my mic input into an FM modulator and transmit that. That’s 2
hardware clocks right there.

If you directly connect audio source to sink, you can run into the
problems you describe, depending on the backend (my intuition says
Jack would handle that better than ALSA; haven’t tried).

I think it would be better to implement a scheduler which does the
synchronization itself (using a software generator or some external source
provided by the user). Optionally, of course. Maybe it wouldn’t be as flexible
in this case, but the overall effect would be better. I guess the authors
considered this variant but abandoned it for some reasons…

Let’s close this thread. Artem, if you have any specific questions
please ask them in a new thread. I’d also like to ask everyone to stay
respectful towards other people on this list and be appreciative of
people spending their free time to help out.

Martin



Martin B. (CEL) wrote

If you directly connect audio source to sink, you can run into the
problems you describe, depending on the backend (my intuition says
Jack would handle that better than ALSA; haven’t tried).

Yes, I made several experiments, and the problem exists only when they are
connected directly. I tried Jack but ran into other adventures and finally
wasn’t able to get it to work at all.

To your actual problem:
Did I get that right:

Real world > soundcard > Host OS > Virtualization > VM Guest OS > GNU
Radio audio source > GNU Radio audio sink > VM Guest OS >
Virtualization > Host OS > soundcard?

In this case, I don’t think the problem is in your GNU Radio app…

Yes, exactly. But it also doesn’t seem to be a problem with the
virtualization, because after I complicate this chain by adding a simple
buffered network transfer (thus making the connection indirect), it works perfectly!

I think Martin pointed in the right direction: the problems are somewhere
between GNU Radio and the audio backend.
I’ve played with the audio block properties and didn’t find any consistent
pattern… Some observations:

  • when I specify the device name ‘hw:0,0’ (instead of leaving it empty),
    it works well (there are no ‘aU’ messages at all)
  • when I change the ‘ok to block’ value it works somewhat better or worse

Anyway, since this problem exists only with a direct connection, it’s not
critical. Even inserting a single ‘Delay’ block with a delay value >= 6699
(at a sample rate of 48000) completely solves the issue for me. I was just
worried that if even such a simple test connection doesn’t work, then my
application will not work either, but everything is OK (still).
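
For reference, the direct-connection test with that workaround is roughly
this (just a sketch; the device name and the delay are taken from the
observations above):

    from gnuradio import gr, blocks, audio

    class audio_loop(gr.top_block):
        def __init__(self, samp_rate=48000):
            gr.top_block.__init__(self)
            self.src = audio.source(samp_rate, "hw:0,0", True)
            self.dly = blocks.delay(gr.sizeof_float, 6699)   # >= 6699 samples
            self.snk = audio.sink(samp_rate, "hw:0,0", True)
            self.connect(self.src, self.dly, self.snk)

    if __name__ == '__main__':
        audio_loop().run()   # runs until interrupted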

Thanks again for the responses!

P.S. Martin said that only sources are allowed to block, but the ‘Throttle’
docs say: “…That should be controlled by a source or sink tied to sample
clock.” I’m implementing a sink block as well, and these contradictory
statements confuse me a bit…




Thanks for the long reply!
Um, yeah, I think I might have been arguing about a different issue than you.
OK, so the point is, what I thought you were proposing was a common
clock that was basically determining at which rate samples go in and
out of blocks; and I was arguing that that kills efficiency and
flexibility, because the scheduler (as it is) tries to process as many
samples as possible. “As possible” is limited only by the speed at
which samples can be produced and consumed by sources or sinks, and
the time the work functions need to compute.
Having a fixed clock would obviously let some blocks sleep if they
have processed “enough” samples for the clock cycle; that would lessen
the overall performance that GNU Radio achieves by taking all the
samples it can get out of the sources.

However, I think the problem is exactly what I was trying to describe:
let all blocks start on a common timebase. This leads to
clock-synchronous pipelining of sample processing, leading to periodic
“dead” times for most of the “fast” blocks while they wait for input
from the slow blocks. In the case of congestion this clock-synchronous
behaviour leads to unnecessary problems. Yet I don’t understand the
advantage of being clock-synchronous…

To your actual problem:
Did I get that right:

Real world > soundcard > Host OS > Virtualization > VM Guest OS > GNU
Radio audio source > GNU Radio audio sink > VM Guest OS >
Virtualization > Host OS > soundcard?

In this case, I don’t think the problem is in your GNU Radio app…

Greetings,
Marcus

On 05.12.2013 12:43, Artem P. wrote:




Ok, Artem,
as has been pointed out:
Do this in a separate thread with an indicative subject.

Then: the problem has now obviously been reduced to the audio
interface in the VM.
If that’s what your application needs, then consider contacting the
developers of the virtualization solution, or just stick with the
obvious: virtualization strives to make running a guest OS just another
process in the host OS. So, if asynchronicity up to the point of
distorted audio occurs, it’s not a GR problem (insist on that as much
as you want), but a problem of doing things that need real-time
operating system interaction (though with relatively large tolerances
and intervals), such as audio output, in a VM which has seemingly not
been configured correctly to do this.
You already proposed a workaround: use a network sink. It jumps
through a fraction of the hoops needed to get your audio out of the VM
into your host.

Sincerely,
Marcus

PS: regarding your throttle confusion: you can trust Martin. And: read
the full paragraph; it is clear. You are citing the sentence out of
context.

On 06.12.2013 05:31, Artem P. wrote:




You exactly reproduced what I was trying to say: the problem is most
probably that GNU Radio uses the audio device as if it were an actual
audio device. But it’s not. The virtualization breaks the “easy to
achieve” real time needed for audio playback.
Therefore more buffering is necessary. As Martin described, having an
additional layer of buffers can solve audio problems. You did that
with the network loopback.
But these problems are only there because VM audio doesn’t work like
“real” audio. Still not a GR bug :) Long story short: don’t use your
VM to play audio if it does not work reliably.

Greetings,
Marcus

On 06.12.2013 12:25, Artem P. wrote:

