Catch-22 with forecast & general_work

I have a situation in which I need to read tags associated with multiple
block inputs in order to decide which input to process during
general_work. In my situation it’s possible for zero items to be
available on some inputs while a nonzero amount is available on
others. The Catch-22 is that if I demand ninput_items_required[k] = 1
on all k inputs, but any individual input doesn’t have any data
available yet, the scheduler will never call general_work. If I demand
ninput_items_required[k] = 0 for all inputs, general_work will be
called, but it may have no data to process, even when data actually
is available on some of the inputs.
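To make the dilemma concrete, here is a toy model of the scheduler's decision rule (a simplification I'm assuming for illustration: "call general_work only when every input k has at least ninput_items_required[k] items buffered"); the function and variable names are mine, not GNU Radio API:

```python
# Toy model of the scheduler's gating decision. available[k] is how
# many items are buffered on input k; required[k] is what forecast
# demanded via ninput_items_required[k].
def scheduler_would_call_work(available, required):
    return all(a >= r for a, r in zip(available, required))

# Input 0 has data, input 1 has none.
available = [100, 0]

# Demanding 1 item on every input: general_work is never called,
# even though input 0 has 100 items waiting.
print(scheduler_would_call_work(available, [1, 1]))  # False

# Demanding 0 items on every input: general_work is called, but it
# may be called even when there is nothing at all to process.
print(scheduler_would_call_work([0, 0], [0, 0]))     # True
```

Neither setting lets the block say "wake me when *any* input has data," which is the gap being described.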

The problem is partially solved by the tagged_stream_block
implementation. However, the huge limitation with that approach is that
tagged streams are limited to a maximum on the order of 8k items (this
number is probably system dependent). If I want to operate on a tagged
stream that is several orders of magnitude greater than this limit, I’m
out of luck.

Does anybody have any suggestions? I may end up having to reimplement
tagged_stream_block to hold some internal state so it can handle tagged
streams much larger than any one call to work. This state would be some
variable, maybe “nitems_left_in_tagged_stream”, that would get reset
every time a length_tag was found and would be decremented by the number
of items consumed. The forecast implementation would demand
ninput_items_required[k] = min(nitems_left_in_tagged_stream,
noutput_items), or something like that. Thoughts?
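A minimal sketch of that proposed state machine, kept self-contained so it runs outside GNU Radio (the class and method names are hypothetical, not an existing API):

```python
# Hypothetical sketch of the per-input state Sean proposes: track how
# many items remain in the current tagged stream across work calls.
class TaggedStreamState:
    def __init__(self):
        self.nitems_left_in_tagged_stream = 0

    def on_length_tag(self, packet_len):
        # Reset whenever a new length tag is found on the input.
        self.nitems_left_in_tagged_stream = packet_len

    def consume(self, nitems):
        # Decrement by the number of items consumed in work.
        self.nitems_left_in_tagged_stream -= nitems

    def forecast(self, noutput_items):
        # ninput_items_required[k] = min(items left, noutput_items)
        return min(self.nitems_left_in_tagged_stream, noutput_items)

state = TaggedStreamState()
state.on_length_tag(100000)    # a 100k-item tagged stream arrives
state.consume(4096)            # one work call handled 4096 items
print(state.forecast(8192))    # -> 8192 (plenty of items remain)
print(state.nitems_left_in_tagged_stream)  # -> 95904
```

The key point is that the packet boundary survives between work calls, so the block never demands more than one tagged stream's worth of input at a time.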

Sean

On 02/19/2014 11:01 PM, Nowlan, Sean wrote:

[...]
tagged streams much larger than any one call to work. This state
would be some variable, maybe “nitems_left_in_tagged_stream” that
would get reset every time a length_tag was found and would be
decremented by the number of items consumed. The forecast
implementation would demand ninput_items_required[k] =
min(nitems_left_in_tagged_stream, noutput_items), or something like
that. Thoughts?

A couple. First, what’s your kernel.shmmax value? Increasing that will
help a bit, if you haven’t done so yet (e.g. kernel.shmmax =
2147483648).
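For reference, the shmmax bump suggested above can be applied like this (the 2 GiB value is the one from the mail; adjust for your system):

```shell
# Raise the kernel's shared-memory segment ceiling so larger stream
# buffers can be allocated. Takes effect immediately, until reboot.
sudo sysctl -w kernel.shmmax=2147483648

# To make it persistent across reboots, add this line to
# /etc/sysctl.conf:
#   kernel.shmmax = 2147483648
```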

Next, I have thought about adding internal buffers to tagged stream
blocks for these cases. We had a discussion at one of our last dev calls
regarding the size limit for tagged stream blocks, and it was generally
agreed that if you run into these, you should probably be using messages
instead of tagged streams. However, I believe you’ve uncovered an
interesting corner case, where this might make sense after all. Contact
me off-list if you’d like to discuss implementing this (is your
copyright transfer thing through?). It should not be a big change, but
it would mean adding even more buffers to blocks.

M