Forum: GNU Radio - Final Packet(s) dropped

Richard Alimi (Guest)
on 2007-03-26 17:15
(Received via mailing list)
Hello All,

I am currently having a problem with packet transmission similar to the one
Tom Rondeau had mentioned in his February 14th 2006 post titled 'Dropped
Packets.'  However, there was no general resolution given there.

When I run the dbpsk/dqpsk/gmsk transmitter/receiver under the
gnuradio-examples directory (revision 4796), it always seems to drop the
last packet, leading me to believe this hasn't been fully fixed.  Using the
--discontinuous option produces the same result.  Even adding a
time.sleep(1) before fg.wait() doesn't seem to fix it, as suggested by that
post.
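
For reference, the shutdown sequence I tried looks roughly like this (fg is
the flow-graph object from the example scripts; exact names may differ by
revision):

    import time

    def shutdown_flow_graph(fg):
        # Workaround attempted (without success): pause before fg.wait() so
        # any samples still buffered on the host side have a chance to reach
        # the USRP before the flow graph is torn down.
        time.sleep(1)
        fg.wait()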

To be sure that all of the data was being written out to the USRP when
running my application, I put some print statements in usrp_basic_tx::write(),
and the correct number of samples appear there (I can also verify that it is
a multiple of 128).

At the receiving end of my application, I can observe that the received
power drops to the level of normal background noise somewhere before the
final packet is received.

I'm currently using the USRP (rev 4) with an RFX2400 daughtercard.  I have
set_auto_tr(True) on both transmitter and receiver.

Has anyone experienced or been able to resolve this?

Thanks,
Rich
John Ackermann N8UR (Guest)
on 2007-03-26 17:25
(Received via mailing list)
Richard Alimi wrote:
> time.sleep(1) before fg.wait() doesn't seem to fix it, as suggested by that
> post.

I am dangerously jumping into a place that I don't understand, but I
have had a similar problem in a non-gnu-radio data system and it's
possible, I suppose, that something similar is going on here.

The issue is the timing of the signal to turn the TX off versus the time
required to get the last data bit actually on the air.  In the AX.25
packet system in question, the application would unkey the transmitter
as soon as the last payload bit was sent.  However, there was a
downstream scrambler that caused a delay of some number of bit times.
We had to set a parameter to keep the TX keyed for a few milliseconds
after the end of data in order to allow all the data in the scrambler to
make it out over the air.  In the AX.25 protocol, that was the "txtail"
parameter.
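
As a hypothetical illustration (not actual AX.25 or GNU Radio code): if a
downstream stage delays the bit stream by N bit times, the frame has to be
followed by at least N padding bits, or the TX held keyed for the equivalent
time, before unkeying:

    def append_tail_bits(payload_bits, downstream_delay_bits=17):
        # Keep "transmitting" past the end of the payload for as long as the
        # downstream scrambler/filter delay, so the last real bits make it
        # out over the air before the TX is unkeyed -- the role played by
        # the AX.25 "txtail" parameter.  The delay value is a placeholder.
        return list(payload_bits) + [0] * downstream_delay_bits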

Again, I have no idea if this applies to your situation, but it caused
us some real head scratching until we figured out what was going on.

John
Richard Alimi (Guest)
on 2007-03-26 23:17
(Received via mailing list)
Thank you for the response.  I was afraid that might be the case :)  The
following workaround works well (which seems to validate your hypothesis):

  When a packet is _not_ being sent, the transmitter outputs a continuous
  stream of 0-valued samples to the USRP.  When a packet _is_ being sent,
  modulate and sample as usual.
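
In rough pseudo-Python the idea is something like this (modulate,
send_to_usrp and the packet queue are placeholders, not the actual code from
my application):

    import numpy

    ZERO_CHUNK = numpy.zeros(4096, dtype=numpy.complex64)   # 0-valued samples

    def tx_loop(packet_queue, modulate, send_to_usrp):
        # Keep the USRP fed at all times: real modulated samples while a
        # packet is pending, 0-valued samples otherwise, so the tail of the
        # final packet is always pushed out before transmission stops.
        while True:
            if not packet_queue.empty():
                pkt = packet_queue.get()
                if pkt is None:                  # sentinel: stop transmitting
                    break
                send_to_usrp(modulate(pkt))      # normal modulation path
            else:
                send_to_usrp(ZERO_CHUNK)         # idle filler between packets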

Not the ideal solution, but it works.

Thanks,
Rich