I’m struggling with what I hope isn’t a naive problem regarding feedback
between flowgraph blocks. I’ve written a simple frequency estimator for
FSK signals based on the squared FFT method. Its input is an N-point
FFT, created by an N-item stream-to-vector block feeding an N-bin FFT
block. The frequency estimator itself is a sync_block whose output is a
float value in Hz, so it emits one estimate per input vector – that is,
at the original (pre-stream-to-vector) sample rate divided by N. So if
my original raw sample rate is 50000 and I use a 4096-bin FFT, the
frequency estimate arrives at ~12.2 samples per second. That’s fine.
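For anyone unfamiliar with the squaring approach, here is a rough
pure-Python sketch of the idea (not my actual block – the tone
frequencies, sample rate, and naive O(N^2) DFT are all illustrative
stand-ins; a real implementation would use the FFT block, windowing,
and peak interpolation): squaring a 2-FSK burst collapses the tones at
fc ± fd into spectral lines at 2(fc ± fd), and the midpoint of the two
strongest lines, halved, recovers the carrier fc.

```python
import cmath
import math

def fsk_carrier_estimate(x, fs):
    """Estimate the carrier (center) frequency of a complex 2-FSK burst
    via the squaring method.  Sketch only: naive DFT, integer-bin peaks."""
    N = len(x)
    sq = [v * v for v in x]  # squaring doubles every frequency component
    # naive DFT magnitudes (O(N^2)); fine for a small illustrative N
    mag = []
    for k in range(N):
        acc = sum(sq[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
        mag.append(abs(acc))
    k1 = max(range(N), key=lambda k: mag[k])  # strongest spectral line
    # second line: strongest bin at least a few bins away from the first,
    # so we don't just pick up a sidelobe of the first peak
    k2 = max((k for k in range(N) if abs(k - k1) > 4), key=lambda k: mag[k])
    return (k1 + k2) / 2 * fs / N / 2  # midpoint of doubled tones, halved

# synthetic burst: fs=8000, carrier offset fc=500 Hz, deviation fd=250 Hz,
# first half of the block at fc-fd, second half at fc+fd (continuous phase)
fs, N = 8000.0, 256
fc, fd = 500.0, 250.0
x, phase = [], 0.0
for n in range(N):
    f = fc - fd if n < N // 2 else fc + fd
    phase += 2 * math.pi * f / fs
    x.append(cmath.exp(1j * phase))

print(fsk_carrier_estimate(x, fs))  # -> 500.0
```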
The problem is that I want to use the output of the frequency estimator
to center the input data at baseband. With continuous streaming data,
this is easy – just use a frequency-xlating filter, and call
set_center_freq() when data comes in from the frequency estimator.
However, the packets I’m interested in are short, and I’d like the
frequency correction to be applied at or near the start of the packet so
the preamble can be detected appropriately. In order to do this, I have
to be able to determine the latency of the feedback path, and that
latency has to be constant. For this reason, I can’t use a
frequency-xlating filter, because calling set_center_freq() from the
main Python loop could take any amount of time to take effect. I could write a
version of the frequency-xlating filter that accepted a second stream of
filter offset data at the input sample rate – my thinking is that would
make the delay a fixed number of samples. Is that assumption valid?
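To make my assumption concrete, here is a pure-Python model of the
work() loop I have in mind (hypothetical – this is not an existing GNU
Radio block, and the function name and signature are just for
illustration): one complex input stream plus a second float stream
carrying the desired mixing frequency for each sample. Because the
frequency stream is consumed sample-by-sample inside work(), the
correction takes effect at an exact sample index determined by the
flowgraph topology, rather than whenever the Python scheduler gets
around to a set_center_freq() call.

```python
import cmath
import math

def xlate_work(samples, freq_stream, fs, phase=0.0):
    """Model of the proposed block's work(): mix each input sample down
    by an NCO whose frequency is read from a parallel float stream.
    Returns (output samples, final NCO phase) so successive calls can
    stay phase-continuous, as a streaming block would."""
    out = []
    for s, f in zip(samples, freq_stream):
        out.append(s * cmath.exp(-1j * phase))  # mix down by accumulated phase
        phase += 2 * math.pi * f / fs           # advance NCO by this sample's frequency
    return out, phase

# a 1 kHz tone at fs = 48 kHz, with the offset stream already carrying
# 1000.0 Hz: the tone is translated to DC from the very first sample
fs = 48000.0
tone = [cmath.exp(2j * math.pi * 1000.0 * n / fs) for n in range(64)]
corrected, _ = xlate_work(tone, [1000.0] * 64, fs)
# corrected is ~(1+0j) throughout: the correction lands at a fixed sample index
```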
Before I go reinventing the wheel, I’m wondering if there is a “usual”
way of solving this feedback problem within the architecture of GNU
Radio, or if I’m totally out in left field.
Thanks for any feedback (no pun intended),