Hi,
I have a modified dbpsk.py in which I use a custom block after self.diffenc (the differential encoder block). This custom C++ block outputs 1000 output_items of size 'gr_sizeof_char' for each input_item of size 'gr_sizeof_char'. I then use benchmark_tx.py to test the modified dbpsk.py, and when I do so the flowgraph slows down dramatically. What can I do to speed up the process?
If the input item is 0x01, I output the contents of d_pn_array[0], d_n_pn times; if it is 0x00, I output the contents of d_pn_array[1], d_n_pn times. (Basically, I am spreading each input item by a factor of pn_length * d_n_pn.)
The arrays d_pn_array[0] and d_pn_array[1] were initialised in the constructor. I only read the contents of the arrays and write the values to out[i]. This shouldn't take such a long time, although I must say that the length of each d_pn_array sequence is 1023 and d_n_pn is 5, i.e. I output 1023 * 5 = 5115 items for each input item.
An update on this block: I am now building it as a gr_sync_interpolator, but the performance is still the same. The flowgraph slows down and hangs, and I have to force-stop it with the 'kill' command in the Linux terminal.
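The gr_sync_interpolator version is constructed roughly as follows (again only a sketch with placeholder names; the point is that the interpolation factor passed to the base class is the full spreading ratio of 1023 * 5 = 5115):

my_spreader::my_spreader()
  : gr_sync_interpolator("my_spreader",
                         gr_make_io_signature(1, 1, sizeof(char)),
                         gr_make_io_signature(1, 1, sizeof(char)),
                         1023 * 5)   // interpolation = pn_length * d_n_pn = 5115
{
  // d_pn_array[0] and d_pn_array[1] are filled in here (initialisation omitted)
}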
What should I do so that the flowgraph works smoothly, like benchmark_tx.py normally does with the other modulation schemes?
Thanks