I have GNU Radio based software that uses IQ modulation on the sending
side and IQ demodulation on the receiving side. In my application, the
transmitted signal can be scaled by up to +/-5% in the time domain, and I
have no control over this scaling. So I take two actions to compensate:
(1) scale the frequency of the sinusoidal signal used for IQ
demodulation, and (2) resample the IQ demodulator output to recover the
original time scale. To detect the time scaling, I add to the transmitted
signal an additional fixed-frequency sinusoidal pilot whose frequency
doesn't overlap with the IQ carrier frequency. On the receiving side, I
estimate the time scaling from the deviation of this pilot's frequency
from its expected value. Once the time scale is estimated, I adjust the
frequency of the signal source used for IQ demodulation and the
resampling ratio of the resampler accordingly. Debug logs show that the
latency from the time scaling being detected to meaningful data coming
out of the IQ demodulator is 600-700 ms.
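For reference, here is how I relate the pilot measurement to the two
corrections (a minimal sketch; the pilot frequency, carrier frequency,
and the sign convention of the resampling ratio are placeholders, not my
actual values):

```python
# A time stretch by factor s scales every frequency by 1/s, so the
# measured pilot frequency tells us s directly.

F_PILOT = 10_000.0     # transmitted pilot frequency (Hz), made-up value
F_CARRIER = 100_000.0  # IQ carrier frequency (Hz), made-up value

def compensation(f_pilot_measured):
    """Given the measured pilot frequency, return the corrected
    demodulator LO frequency and the resampling ratio."""
    s = F_PILOT / f_pilot_measured   # time-scale factor, ~0.95 .. 1.05
    f_demod = F_CARRIER / s          # demod LO follows the scaled carrier
    resamp_ratio = s                 # undo the stretch (convention-dependent)
    return f_demod, resamp_ratio

# Example: pilot measured ~5% high -> signal was compressed in time
f_demod, ratio = compensation(10_500.0)
```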
My question for the GNU Radio experts is: how can I reduce this latency?
I suspect part of the latency comes from data continuing to flow through
the graph while I reconfigure it. I need to keep the data in the graph
during reconfiguration, so I don't want to call lock/unlock. I'm thinking
of finding a way to pause/resume the scheduler instead. Does this
approach make sense?
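One alternative I am considering, in case it is relevant to the answer:
if I understand correctly, blocks such as analog.sig_source_c
(set_frequency) and the MMSE/fractional resampler (set_resamp_ratio)
expose setters that can be called while the flowgraph is running, so the
parameters could be updated in place without lock/unlock and without
draining the graph. The classes below are stand-ins for the real blocks,
just to illustrate the control path I have in mind:

```python
# Sketch of runtime retuning without lock()/unlock(). The Fake* classes
# are stand-ins for analog.sig_source_c and filter.mmse_resampler_cc,
# whose setters are normally driven at runtime (e.g. from GRC variable
# callbacks) while samples keep flowing.

class FakeSigSource:
    def __init__(self, freq):
        self.freq = freq
    def set_frequency(self, freq):      # stand-in for sig_source_c.set_frequency
        self.freq = freq

class FakeResampler:
    def __init__(self, ratio):
        self.ratio = ratio
    def set_resamp_ratio(self, ratio):  # stand-in for mmse_resampler_cc.set_resamp_ratio
        self.ratio = ratio

def apply_scale_estimate(sig_source, resampler, f_carrier, scale):
    """Push a new time-scale estimate into the running flowgraph."""
    sig_source.set_frequency(f_carrier / scale)
    resampler.set_resamp_ratio(scale)

lo = FakeSigSource(100_000.0)
rs = FakeResampler(1.0)
apply_scale_estimate(lo, rs, 100_000.0, 0.97)  # pilot says 3% time compression
```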
Regards,
Bolin H.