Sorry that this e-mail is long, but I’m trying to be as detailed as
possible so that readers can follow; otherwise it will take a couple of
e-mail exchanges.
I’m attempting to build a filter using the FIR filter class, but have
some questions. Let’s assume for a second that set_history() does not
exist.
What I would do is use forecast() to state that the number of input
items needed to produce noutput_items is fir->ntaps() + noutput_items.
general_work() would then use the first ntaps() samples as “history”
and start producing output at in + ntaps(). I would then consume
ninput_items - ntaps(), keeping the last ntaps() samples in the input
queue as the “history” for the next series of input.
I’m sorry if my logic is slightly or completely off. But then I look at
most of the FIR filters and see the use of set_history(). Great! This
sounds like exactly what I want. Furthermore, the documentation states
that it is what I want:
This seems to be a workaround for the consume()/forecast() approach.
So, I use it:
I monitor the actual input to the block by writing the complex input
samples from the pointer ‘in’ to ‘in + noutput_items’ to disk. I’ve
noticed that the first time the block is called, there are history()
worth of 0-valued samples prepended to my input stream. Great; I’m
assuming the architecture is placing my “history” in my input stream for
the FIR filter to work correctly, which on the first call is “nothing.”
What I’m assuming here is that there are actually history() +
noutput_items samples in the input buffer. I’m also assuming that a call
to filterN(out, in, noutput_items) would skip history() worth of samples
at the start of the input buffer and then compute noutput_items outputs.
Yes/no? If not, it would make no sense to me that, given noutput_items,
I would actually only call filterN(out, in, noutput_items - history()).
Furthermore, I see no blocks do this.
OK, so I go ahead and give it a run. Logically, with set_history() it
should all work. On my first call to work(), noutput_items == 2064, and
my history is always set to 224. If set_history() works as I expect, I
would assume my first 2064 items to be correct. However, only
2064 - 224 = 1840 are correct, verified by hacking up the matched filter
in C.
That graph shows the difference between the true value and the observed
value from the matched filter block. At sample 1840, the output values
start going haywire, which I’m assuming is because there are not really
noutput_items + history() samples in the input buffer, only
noutput_items. Once the matched filter skips history() worth of data and
tries to compute noutput_items, it starts venturing into no man’s land.
Is this really how it is supposed to work? Or is there a bug? The next
time work() is called, the input stream is prepended again with what I
actually expect the “history” samples to be. But again the last ntaps()
output items are incorrect, which is why I’m assuming it’s going into
no man’s land again.
If this is how it’s supposed to work, then I fear gr_fir_filter_XXX is
incorrect when the decimation is set to 1: