Forecast method for HDLC transmit block

I’m building a set of blocks to implement the HDLC link-layer
functions for a spacecraft communication system.
These blocks deal with IP packets from/to the stack on one side,
and a bitstream to/from the modulator/demodulator on the other side.

The receive block is already done and working. It takes a bitstream
from the demodulator, finds the frame, un-bitstuffs it, extracts the
IP packet payload and shoves it into the network stack. But I’ve run
into a snag doing the transmit part.

HDLC is a synchronous serial protocol. It has to keep clocking bits
out at a fixed rate no matter what. When there are no packets to
transmit, it outputs “flag” bytes (0x7E) continuously until there is
more packet data to transmit.
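
The flag works as an idle fill because bit-stuffing guarantees that
0x7E can never occur inside a frame: whenever five 1 bits occur in a
row in the data, a 0 is inserted. A minimal illustrative sketch of
that rule, operating on unpacked bits (one per byte, names mine):

    #include <vector>

    // Illustrative helper: HDLC bit-stuffing.  After five consecutive
    // 1 bits a 0 is inserted, so payload data can never look like the
    // 0x7E flag (01111110).  Bits are unpacked, one per byte.
    std::vector<unsigned char>
    bit_stuff(const std::vector<unsigned char> &in)
    {
        std::vector<unsigned char> out;
        int run = 0;                      // length of the current run of 1s
        for (size_t i = 0; i < in.size(); i++) {
            out.push_back(in[i]);
            if (in[i] == 1 && ++run == 5) {
                out.push_back(0);         // stuff a 0 after five 1s
                run = 0;
            } else if (in[i] != 1) {
                run = 0;
            }
        }
        return out;
    }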

The problem is that this makes the output independent of the input.
I’m struggling with how to implement a forecast method to deal
with this. The “how-to-write-a-block” tutorial states that complex
forecast methods are possible, but gives no examples. I’ve gone
through the code of many forecast implementations, but all of them
seem to be simple decimators (N:1) or interpolators (1:N).

I need something more complicated. When there is a packet of N
bytes ready on the input, the number of output bits produced will
be:

    (N × 8 × bitstuffing_factor) + header_size + crc_size

but when there’s NO packet data ready on the input, the number
of output bits produced will simply be 8 (one flag byte).
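
For a concrete, made-up example: a 100-byte packet with a worst-case
stuffing factor of 1.2 (one stuffed bit after every five data bits),
24 bits of header/flag overhead, and a 16-bit CRC comes out to:

    (100 × 8 × 1.2) + 24 + 16 = 1000 bits

versus just 8 bits per idle interval.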

Is this sort of forecast method possible? Where can I find
some code examples of complicated forecast functions, or a
more detailed description of the use of the forecast method?

Or am I overthinking this, and the simple answer is to set
ninput_items_required to zero, since I always have something
to output?

@(^.^)@ Ed

On Tue, Oct 21, 2008 at 11:01:51AM -0400, Ed Criscuolo wrote:

> HDLC is a synchronous serial protocol. It has to keep clocking bits
> out at a fixed rate no matter what.

> Where can I find some code examples of complicated forecast
> functions, or a more detailed description of the use of the forecast
> method?

> Or am I overthinking this, and the simple answer is to set
> ninput_items_required to zero, since I always have something
> to output?

Ed,

The problem is that you need to know when the output is about to
underrun, and only then insert flags.

Is there any external reference clock or other way to tell when the
external stream needs data? In general, GR has no tie to an external
timebase, except indirectly through sinks or sources that consume data
at a fixed rate (e.g., USRP, audio card, etc). If there is some way
to tell when the external world is ready for more data, we can solve
this problem. Is the USRP the final sink for the modulated bits?

Eric

Eric B. wrote:

> If there is some way to tell when the external world is ready for
> more data, we can solve this problem.

The data stream needs to be at a fixed rate. I was planning to add
a throttle block to ensure this.

> Is the USRP the final sink for the modulated bits?

Yes, after it’s upsampled and modulated, the USRP is the final sink.
I realize that this should pace things, but I figured the throttle
block would be a good idea in that it would allow me to test
without a USRP hooked up, just a spectrum display.

At this point, I think I’ll embed all the packet reading AND
HDLC processing into a single source block with no flow
inputs. This way the block can check for packets on the TUN
device, read them, bitstuff and HDLC-frame them, and put them
into an internal ring buffer. Then it would return as many bits
as requested, or flags if the ring buffer was empty.

I’m assuming that the scheduler would keep asking for enough bits
from this source block to keep the flow going at the throttled rate
(assuming I have enough CPU power to keep up).

Does this approach sound like it will work?

@(^.^)@ Ed

On Thu, Oct 23, 2008 at 04:56:04PM -0400, Ed Criscuolo wrote:

>> In general, GR has no tie to an external timebase, except
>> indirectly through sinks or sources that consume data at a fixed
>> rate (e.g., USRP, audio card, etc). If there is some way to tell
>> when the external world is ready for more data, we can solve this
>> problem.

> The data stream needs to be at a fixed rate. I was planning to add
> a throttle block to ensure this.

You definitely don’t want to use a throttle block for this purpose.
Its only reason for existence is so that file-driven GUI test code
doesn’t suck down all the available CPU.

>> Is the USRP the final sink for the modulated bits?

> Yes, after it’s upsampled and modulated, the USRP is the final sink.

Good.

> I realize that this should pace things, but I figured the throttle
> block would be a good idea in that it would allow me to test
> without a USRP hooked up, just a spectrum display.

I strongly suggest not using the throttle block. Definitely don’t use
it if the USRP is in the graph. There should be only a single clock
in the system.

> At this point, I think I’ll embed all the packet reading AND
> HDLC processing into a single source block with no flow
> inputs. This way the block can check for packets on the TUN
> device, read them, bitstuff and HDLC-frame them, and put them
> into an internal ring buffer. Then it would return as many bits
> as requested, or flags if the ring buffer was empty.

OK. The only problem that I can see with this is that the scheduler
will run this block whenever there is space available in the
downstream buffer. If your data rate is low (hundreds of bits/s) we
could be adding a serious amount of latency to the system. If this
is a deep space probe, no problem :-) otherwise we may need to cook
up some way to limit the amount of buffer used between the blocks.
The default is ~32KB. If the data rate is relatively high, the 32KB
of buffering may not be an issue.

> I’m assuming that the scheduler would keep asking for enough bits
> from this source block to keep the flow going at the throttled rate
> (assuming I have enough CPU power to keep up).

Yes.

> Does this approach sound like it will work?

Yes.

Let us know how it works out!

Eric

Eric B. wrote:

> On Thu, Oct 23, 2008 at 04:56:04PM -0400, Ed Criscuolo wrote:

> If your data rate is low (hundreds of bits/s) we could be adding a
> serious amount of latency to the system. If this is a deep space
> probe, no problem :-) otherwise we may need to cook up some way to
> limit the amount of buffer used between the blocks. The default is
> ~32KB. If the data rate is relatively high, the 32KB of buffering
> may not be an issue.

It’s not deep space, just LEO (Low Earth Orbit), so speed-of-light
latency is only on the order of 5-30 ms.

This flow will be feeding the uplink, which runs at 9600 bits/sec.
Not that low, but not very high either.

My initial plan was to have the block output an unpacked stream
of bits, one per byte. Now, if I’m understanding you correctly,
the scheduler will request as much as 32KB of output from this
HDLC source block at one time. At one bit/byte, and 9600 bits/sec,
that amounts to just over 3 seconds worth of data. If my ring
buffer just happens to be empty, this means I’m going to insert
3 seconds worth of flags into the stream at once, even if I have
a packet come along in the next millisecond. This would cause
me to fall behind, never to catch up. The overall effect would
be to reduce my effective data rate to something less than 9600.

Seems like I should be checking for newly arrived packets after
sending each flag, not just once per invocation of the work method.
But this sounds like a lot of extra processing overhead. Is it
possible for a block to return less than the number of outputs
requested? (I think the answer is yes for a block, no for a sync
block). Would that be a better way to limit the amount of
data output at once?

@(^.^)@ Ed

On Thu, Oct 23, 2008 at 05:52:18PM -0400, Ed Criscuolo wrote:

> It’s not deep space, just LEO (Low Earth Orbit), so speed-of-light
> latency is only on the order of 5-30 ms.

> This flow will be feeding the uplink, which runs at 9600 bits/sec.
> Not that low, but not very high either.

OK

> My initial plan was to have the block output an unpacked stream
> of bits, one per byte. Now, if I’m understanding you correctly,
> the scheduler will request as much as 32KB of output from this
> HDLC source block at one time. At one bit/byte, and 9600 bits/sec,
> that amounts to just over 3 seconds worth of data. If my ring
> buffer just happens to be empty, this means I’m going to insert
> 3 seconds worth of flags into the stream at once, even if I have
> a packet come along in the next millisecond. This would cause
> me to fall behind, never to catch up. The overall effect would
> be to reduce my effective data rate to something less than 9600.

> Seems like I should be checking for newly arrived packets after
> sending each flag, not just once per invocation of the work method.
> But this sounds like a lot of extra processing overhead. Is it
> possible for a block to return less than the number of outputs
> requested? (I think the answer is yes for a block, no for a sync
> block).

You can return less. In any event, the scheduler will call you again
if there’s still room in the output buffer.
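
For illustration, returning less might look like this for the
hypothetical hdlc_source sketched earlier: cap the idle fill at one
flag byte per call so the TUN device gets re-checked quickly. As
noted above, the scheduler will call work() again while there is
room, so this alone does not bound the total buffering:

    #include <algorithm>

    // Hypothetical variant of hdlc_source::work(): when idling, emit
    // at most one flag byte per call and return early.
    int
    hdlc_source::work(int noutput_items,
                      gr_vector_const_void_star &input_items,
                      gr_vector_void_star &output_items)
    {
        static const unsigned char FLAG[8] = {0,1,1,1,1,1,1,0}; // 0x7E
        unsigned char *out = (unsigned char *)output_items[0];

        poll_tun_device();

        if (d_bits.empty()) {
            int n = std::min(noutput_items, 8); // one idle flag only
            for (int i = 0; i < n; i++)
                out[i] = FLAG[i];
            return n;                  // fewer than noutput_items is OK
        }

        int n = (int)std::min((size_t)noutput_items, d_bits.size());
        for (int i = 0; i < n; i++) {
            out[i] = d_bits.front();
            d_bits.pop_front();
        }
        return n;
    }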

> Would that be a better way to limit the amount of data output at once?

I think the right answer is to come up with a way to limit the total
buffer space you see.

My suggestion is to not worry about it right now, get the rest of it
working, then we can fix this problem. It shouldn’t be a big deal.

(I just spent a couple of minutes looking at it. It won’t be hard to
limit the buffer space seen. We just need to come up with a
reasonable API for it.)
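
(For reference: later GNU Radio releases added such a knob. A
hypothetical use of it, set on the block before the flowgraph
starts:)

    // Later gr::block API (GNU Radio 3.6 and up); the size is in
    // items and may be rounded up by the buffer allocator.
    hdlc_src->set_max_output_buffer(4096);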

Eric