I have a situation in which I need to read tags associated with multiple
block inputs in order to decide which input to process during
general_work. In my situation, it's possible for some inputs to have no
data available while others have a nonzero amount. The Catch-22 is that
if I demand ninput_items_required[k] = 1 on all k inputs, but any one
input doesn't have data available yet, the scheduler will never call
general_work. If instead I demand ninput_items_required[k] = 0 for all
inputs, general_work will be called, but possibly with no data at all to
process, even when data actually is available on some inputs.
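For what it's worth, the second option can be made workable if general_work does its own availability check. The sketch below is plain Python with no gnuradio dependency, and the function names are mine, not part of the API; it just shows the decision logic a zero-requirement forecast would pair with:

```python
def forecast(noutput_items, ninput_items_required):
    # Demand nothing on every input so the scheduler will call
    # general_work even when some inputs are empty. general_work
    # then does its own per-input availability check.
    for k in range(len(ninput_items_required)):
        ninput_items_required[k] = 0

def choose_input(ninput_items):
    # Pick the first input that actually has data. Returning None
    # means "nothing to do yet": general_work would consume nothing
    # and return 0, deferring until the scheduler calls back.
    for k, n in enumerate(ninput_items):
        if n > 0:
            return k
    return None
```

Whether the scheduler re-invokes general_work promptly after new data arrives on a previously empty input is worth verifying; that is the part I'm least sure about.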
The problem is partially solved by the tagged_stream_block
implementation. However, that approach has a huge limitation: tagged
streams are limited to a maximum length on the order of 8k items,
since the whole packet must fit in a single call to work (the exact
number is probably system-dependent). If I want to operate on a tagged
stream that is several orders of magnitude longer than this limit, I'm
out of luck.
Does anybody have any suggestions? I may end up having to reimplement
tagged_stream_block to hold some internal state so it can handle tagged
streams much larger than any one call to work. This state would be some
variable, maybe "nitems_left_in_tagged_stream", that would be reset
every time a length tag was found and decremented by the number of
items consumed. The forecast implementation would then demand
ninput_items_required[k] = min(nitems_left_in_tagged_stream,
noutput_items), or something like that. Thoughts?
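To make the idea concrete, here is a minimal sketch of that internal state as a standalone Python class (the class and method names are mine, and it deliberately leaves out the gnuradio plumbing around tag lookup and consume):

```python
class TaggedStreamState:
    """Tracks how much of the current tagged stream remains unconsumed,
    so a packet larger than the buffer can span many work calls."""

    def __init__(self):
        self.nitems_left = 0  # items left in the current tagged stream

    def on_length_tag(self, packet_len):
        # Reset every time a length tag is found on the input.
        self.nitems_left = packet_len

    def on_consume(self, nconsumed):
        # Decrement by the number of items consumed in general_work.
        self.nitems_left -= nconsumed

    def required(self, noutput_items):
        # forecast: never demand more than is left in the stream.
        # Note: when nitems_left is 0 this returns 0, so in a real
        # block you'd presumably still need at least one item in order
        # to find the next length tag.
        return min(self.nitems_left, noutput_items)
```

With a 100,000-item packet and 4096-item work calls, required() keeps returning 4096 until the tail of the packet, which is exactly the behavior the stock tagged_stream_block can't provide.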
Sean