Block buffer sizes

Hi all,

Is there an easy way to find out how big the buffer is for each block in a
flow graph? For example:

src = usrp.source_c()
amp = gr.multiply_const_cc(…)
dst = gr.message_sink_c(…)

fg.connect(src, amp, dst)

fg.start()

# and then check to see how big the buffers were that the scheduler allocated
buf1 = fg.src.bufferout_size()
buf2 = fg.amp.bufferout_size() …

just curious.

David S.

On Tue, Nov 14, 2006 at 01:24:22PM -0500, [email protected] wrote:

> fg.connect(src, amp, dst)
>
> fg.start()
>
> # and then check to see how big the buffers were that the scheduler allocated
> buf1 = fg.src.bufferout_size()
> buf2 = fg.amp.bufferout_size() …
>
> just curious.
> David S.

Sorry, there’s no way to fetch it from Python.

However, the allocate method on line 52 of flow_graph.py is where the
size is determined and the buffer allocated. You could print it just
before the return.

def allocate (self, m, index):
    """allocate buffer for output index of block m"""
    item_size = m.output_signature().sizeof_stream_item (index)
    nitems = self.fixed_buffer_size / item_size
    if nitems < 2 * m.output_multiple ():
        nitems = 2 * m.output_multiple ()

    # if any downstream block is a decimator and/or has a large output_multiple,
    # ensure that we have a buffer of at least 2 * their decimation_factor * output_multiple
    for mdown in self.flow_graph.downstream_verticies_port(m, index):
        decimation = int(1.0 / mdown.relative_rate())
        nitems = max(nitems, 2 * (decimation * mdown.output_multiple() + mdown.history()))

    return buffer (nitems, item_size)    # buffer size in bytes = nitems * item_size
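If you just want to experiment with the numbers without patching GNU Radio, here's a minimal standalone sketch of the same sizing arithmetic. The helper name buffer_nitems, the tuple encoding of downstream blocks, and the 32 KiB default buffer size are assumptions for illustration, not names taken from flow_graph.py.

```python
def buffer_nitems(fixed_buffer_size, item_size, output_multiple=1,
                  downstream=()):
    """Hypothetical mirror of the sizing logic in allocate().

    downstream is a sequence of (decimation, output_multiple, history)
    tuples, one entry per downstream block on this output port.
    """
    # start from the default buffer size, in items
    nitems = fixed_buffer_size // item_size
    # never less than twice this block's output_multiple
    if nitems < 2 * output_multiple:
        nitems = 2 * output_multiple
    # grow to satisfy any downstream decimator / large output_multiple
    for decim, omult, hist in downstream:
        nitems = max(nitems, 2 * (decim * omult + hist))
    return nitems

# e.g. 8-byte gr_complex items with an assumed 32 KiB default buffer:
print(buffer_nitems(32768, 8))                             # 4096 items
# a downstream decimator (decim=64, output_multiple=128) forces a bigger buffer:
print(buffer_nitems(32768, 8, downstream=[(64, 128, 0)]))  # 16384 items
```

In the real allocate, printing nitems and nitems * item_size just before the return (as suggested above) gives you the per-block buffer sizes as the scheduler creates them.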

Eric