Flowgraph slows down and hangs when using benchmark_tx with a custom block

Hi,
I have a modified dbpsk.py in which I use a custom block after
self.diffenc (the differential encoder block). This custom C++ block
outputs 1000 output items of size gr_sizeof_char for each input item of
size gr_sizeof_char. When I use benchmark_tx.py to test the modified
dbpsk.py, the flowgraph slows down incredibly. What can I do to speed up
the process?

DiffEnc --> Custom Block --> Chunks2Symbols
(n outputs) --> (n * 1000 outputs) --> (n * 1000 outputs)

Thanks

On Mon, Nov 15, 2010 at 11:43 AM, John A. [email protected]
wrote:

What can I do to speed up the process?

DiffEnc --> Custom Block --> Chunks2Symbols
(n outputs) --> (n * 1000 outputs) --> (n * 1000 outputs)

Thanks

You’re really going to have to provide a lot more information about
the block you’ve created. Posting the general_work function would be
useful.

Tom

On another note, I use gr_block to build this custom block.

This is what I am doing in general_work:

  1. I read an item from the input stream.
  2. Check whether it's 0x01 or 0x00.
  3. If it's 0x01, I output the contents of d_pn_array[0], d_n_pn times
     (basically, I am spreading each input item by a factor of
     pn_length * d_n_pn).
  4. If it's 0x00, I output the contents of d_pn_array[1], d_n_pn times
     (again spreading each input item by a factor of pn_length * d_n_pn).

The arrays d_pn_array[0] and d_pn_array[1] were initialised in the
constructor.
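
For reference, a fixed-rate interpolating gr_block normally also tells the
scheduler about the rate change in its constructor and in forecast(). A
minimal sketch follows; the member names mirror the code below, but the
constructor signature and everything else here are assumptions, not the
actual source:

// Sketch only (assumed constructor): reports the 1:(d_length_PN * d_n_pn)
// rate change so the scheduler can size buffers sensibly.
dsss_spreading_b::dsss_spreading_b(int length_PN, int n_pn)
  : gr_block("dsss_spreading_b",
             gr_make_io_signature(1, 1, sizeof(unsigned char)),
             gr_make_io_signature(1, 1, sizeof(unsigned char))),
    d_length_PN(length_PN), d_n_pn(n_pn)
{
  // Each input item produces d_length_PN * d_n_pn output items.
  set_relative_rate((double)(d_length_PN * d_n_pn));
  // Ask for output buffers in whole spreading-sequence multiples.
  set_output_multiple(d_length_PN * d_n_pn);
}

void
dsss_spreading_b::forecast(int noutput_items,
                           gr_vector_int &ninput_items_required)
{
  // One input item is required per d_length_PN * d_n_pn output items.
  ninput_items_required[0] = noutput_items / (d_length_PN * d_n_pn);
}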

I only read the contents of the arrays and set the values to out[i]. This
shouldn't take such a long time, although I must say that the d_pn_array
arrays are 1023 items long and d_n_pn is 5, i.e. I output 1023 * 5 = 5115
items for each input item.

int
dsss_spreading_b::general_work(int noutput_items,
                               gr_vector_int &ninput_items,
                               gr_vector_const_void_star &input_items,
                               gr_vector_void_star &output_items)
{
  const unsigned char *in = (const unsigned char *) input_items[0];
  unsigned char *out = (unsigned char *) output_items[0];

  // Number of input items that can be fully spread into the output buffer.
  int data_items = noutput_items / (d_length_PN * d_n_pn);
  int nout = 0;

  for (int i = 0; i < data_items; i++) {
    if (in[i] & 0x01) {
      // Spread a '1' with d_pn_array1, repeated d_n_pn times.
      for (int j = 0; j < d_length_PN * d_n_pn; j++) {
        out[nout] = d_pn_array1[j % d_length_PN];
        nout++;
      }
    }
    else {
      // Spread a '0' with d_pn_array0, repeated d_n_pn times.
      for (int j = 0; j < d_length_PN * d_n_pn; j++) {
        out[nout] = d_pn_array0[j % d_length_PN];
        nout++;
      }
    }
  }

  consume(0, data_items);
  return noutput_items;
}

An update on this block: I am now using a gr_sync_interpolator to build
the block, but the performance is still the same. The flowgraph slows down
and hangs, and I have to force-stop it with the kill command in the Linux
terminal.
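
For comparison, a gr_sync_interpolator version of this block would look
roughly like the sketch below; the class and member names mirror the
general_work code above, but the constructor signature and everything else
are assumptions. With a sync interpolator the interpolation factor is
declared once, the scheduler handles consume() automatically, and work()
is expected to fill exactly noutput_items:

// Sketch only (assumed constructor): the scheduler guarantees that
// noutput_items is a multiple of the interpolation factor.
dsss_spreading_b::dsss_spreading_b(int length_PN, int n_pn)
  : gr_sync_interpolator("dsss_spreading_b",
                         gr_make_io_signature(1, 1, sizeof(unsigned char)),
                         gr_make_io_signature(1, 1, sizeof(unsigned char)),
                         length_PN * n_pn),  // interpolation factor
    d_length_PN(length_PN), d_n_pn(n_pn)
{
}

int
dsss_spreading_b::work(int noutput_items,
                       gr_vector_const_void_star &input_items,
                       gr_vector_void_star &output_items)
{
  const unsigned char *in = (const unsigned char *) input_items[0];
  unsigned char *out = (unsigned char *) output_items[0];
  const int factor = d_length_PN * d_n_pn;

  // Spread each input item into factor output items
  // (assumes d_pn_array0/d_pn_array1 are unsigned char arrays).
  for (int i = 0; i < noutput_items / factor; i++) {
    const unsigned char *pn = (in[i] & 0x01) ? d_pn_array1 : d_pn_array0;
    for (int j = 0; j < factor; j++)
      *out++ = pn[j % d_length_PN];
  }

  // A sync interpolator must account for every requested output item.
  return noutput_items;
}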

What should I do so that the flowgraph works as smoothly as
benchmark_tx.py normally does with the other modulation schemes?

Thanks