Do objects that extend gr_sync_block require that their work function
always return as many items as the scheduler asked for in noutput_items,
except when a block has completely finished producing items? What does
the scheduler do if the number of items returned is less than
noutput_items? Does it ever try calling the work function again?
Thanks,
Sean
On 03/30/2012 11:23 AM, Nowlan, Sean wrote:
> Do objects that extend gr_sync_block require that their work function
> always return as many items as the scheduler asked for in noutput_items,
> except when a block has completely finished producing items? What does
> the scheduler do if the number of items returned is less than
> noutput_items? Does it ever try calling the work function again?
You can return any number of items between 1 and noutput_items. The
scheduler will simply call work again with any items that weren't
consumed, plus any extra that accumulated in the input buffers in the
interim.
Returning 0 or -1 will tell the block executor code to stop.
-Josh
On Fri, Mar 30, 2012 at 2:58 PM, Josh B. [email protected] wrote:
> You can return any number of items between 1 and noutput_items. The
> scheduler will simply call work again with any items that weren't
> consumed, plus any extra that accumulated in the input buffers in the
> interim.
> Returning 0 or -1 will tell the block executor code to stop.
> -Josh
Just to clarify, a block can legitimately return 0; it just means that
it didn't produce any output this time, and the scheduler will call it
again.
Returning -1 will cause the block to stop. If the block is a source and
it returns -1, it will stop the flowgraph completely.
Tom
On Sat, Mar 31, 2012 at 07:52, Tom R. [email protected] wrote:
> On Fri, Mar 30, 2012 at 2:58 PM, Josh B. [email protected] wrote:
>> On 03/30/2012 11:23 AM, Nowlan, Sean wrote:
> it didn't produce any output this time, and the scheduler will call it
> again.
To clarify even further: a source block that returns 0 samples will be
treated as done; for other blocks returning 0 is OK.
Johnathan
Thanks for clearing that up. That’s what I surmised after poking around
gr_block_executor; the problem turned out to be a mistake in the work
function of a data source I put together.