Stream_to_vector input data size and lock/unlock

Hello all,

I am seeing some interesting behaviour when using a stream_to_vector with a
large number of input items (12800).

The standard approach of top_block.connect() followed by top_block.start()
works fine, but when I try to make the same connection after the top block
has already been started I get a sched error.

Sample test case:

import time
from gnuradio import gr

class top_block(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        s2v = gr.stream_to_vector(gr.sizeof_gr_complex, 12800)
        ns = gr.null_source(8)
        self.connect(ns, gr.null_sink(8))
        self.start()
        self.lock()
        self.connect(ns, s2v, gr.null_sink(102400))
        self.unlock()

if __name__ == '__main__':
    app = top_block()
    print "End"

If the above Python script is run as-is, I get this on my 3.1.3
installation:

sdrts@sdrts:~/test$ python s2vtest.py

sched: <gr_block stream_to_vector (1)> is requesting more input data
than we can provide.
ninput_items_required = 12800
max_possible_items_available = 4095
If this is a filter, consider reducing the number of taps.
End
sdrts@sdrts:~/test$

If, however, you move the self.start() in the above script below the second
self.connect() and remove the lock() / unlock() calls, then it runs fine
(and works perfectly). I was just wondering if anyone can explain this
behaviour to me?
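
For reference, this is the reordered variant I mean: the same blocks and
sizes as above, just with everything connected before start() and with the
lock() / unlock() removed:

import time
from gnuradio import gr

class top_block(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self)
        s2v = gr.stream_to_vector(gr.sizeof_gr_complex, 12800)
        ns = gr.null_source(8)                        # 8 == gr.sizeof_gr_complex
        self.connect(ns, gr.null_sink(8))
        # connect the stream_to_vector path before starting the flow graph
        self.connect(ns, s2v, gr.null_sink(102400))   # 102400 == 12800 * 8
        self.start()

if __name__ == '__main__':
    app = top_block()
    print "End"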

Interestingly, on my 3.2 install the max_possible_items_available is 8191.
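
Those two numbers look like a whole output buffer minus one item. A quick
back-of-the-envelope check (the 32 KiB and 64 KiB default buffer sizes are
my guess; I have not checked the allocator source):

sizeof_gr_complex = 8                      # bytes per complex item
for buf_bytes in (32 * 1024, 64 * 1024):   # guessed defaults for 3.1.3 / 3.2
    items = buf_bytes // sizeof_gr_complex
    # if the reader can see at most (capacity - 1) items at once, this
    # gives 4095 and 8191, well short of the 12800 the block requires
    print buf_bytes, "bytes ->", items - 1, "items available"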

Thank you,
Kieran

On Tue, Mar 31, 2009 at 04:51:45PM +1300, Kieran B. wrote:

It looks like a bug in the buffer allocator / buffer reuse code that
is executed when reconfiguring a flow graph. I opened ticket:380 on
this.

Eric