M-block timeouts, time passed, or pause?

Is there any way to get an approximate amount of time that has elapsed
since a timeout was scheduled? Or, better yet, a way to pause a timer?
This is extremely useful for backoff periods in MAC contention
protocols, where, for instance, the timer is only decremented when the
channel is considered idle. The best method would be a way to pause it.
The alternative would be to find the time passed, cancel the timer, and
then, when the channel becomes idle, schedule another timer with the
remaining time.
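The cancel-and-reschedule alternative amounts to tracking the remaining
time yourself. A minimal generic C++ sketch of that idea (std::chrono
only, not the m-block timer API; the class and member names here are
illustrative):

```cpp
#include <chrono>

// Sketch of a pausable countdown: on pause, subtract the time that has
// elapsed since (re)starting, so a one-shot timer could be cancelled and
// later re-armed with whatever time is left.  Not the m-block API.
class pausable_countdown {
public:
    using clock = std::chrono::steady_clock;

    explicit pausable_countdown(clock::duration total)
        : d_remaining(total), d_running(false) {}

    // Arm (or re-arm) the countdown; in a real MAC this is where the
    // underlying one-shot timer would be scheduled with remaining().
    void start() {
        d_started_at = clock::now();
        d_running = true;
    }

    // "Pause": cancel the underlying timer and remember what is left.
    void pause() {
        if (!d_running)
            return;
        d_remaining -= clock::now() - d_started_at;
        d_running = false;
    }

    clock::duration remaining() const {
        if (!d_running)
            return d_remaining;
        return d_remaining - (clock::now() - d_started_at);
    }

    bool expired() const {
        return remaining() <= clock::duration::zero();
    }

private:
    clock::duration d_remaining;
    clock::time_point d_started_at;
    bool d_running;
};
```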

After checking mb_mblock.h, I don’t think any of this is readily
available… can either of these be simply implemented? If so, can
you pass along some advice? :)

Thanks!
George

On Mon, Apr 21, 2008 at 10:57:25PM -0400, George N. wrote:

> Is there any way to get an approximate amount of time that has elapsed
> since a timeout was scheduled? Or, better yet, a way to pause a timer?

Nope.

> This is extremely useful for backoff periods in MAC contention
> protocols, where, for instance, the timer is only decremented when the
> channel is considered idle. The best method would be a way to pause it.
> The alternative would be to find the time passed, cancel the timer,
> and then, when the channel becomes idle, schedule another timer with
> the remaining time.

How often are you checking for channel idle? Couldn’t you just cancel
and schedule a new one each time you check for channel idle?

> After checking mb_mblock.h, I don’t think any of this is readily
> available… can either of these be simply implemented? If so, can you
> pass along some advice? :)
>
> Thanks!
> George

Eric

Eric B. wrote:

> How often are you checking for channel idle? Couldn’t you just cancel
> and schedule a new one each time you check for channel idle?

A new RSSI value is computed at the host with every new block of
samples; it doesn’t use the FPGA value. I suppose there is an
approximate inter-block spacing that could be used to estimate the
time passed, by counting the number of blocks seen since the timer
started and basing it on the decimation value… but that varies with
queueing and isn’t very CS-friendly, which I think the m-block
interface for MAC implementations should be. A timer pause method is
much more straightforward, but I don’t know if the base architecture
can support it.

George

On Mon, Apr 21, 2008 at 11:18:22PM -0400, George N. wrote:

> counting the number of blocks seen since the timer started and basing
> it on the decimation value… but that varies with queueing and isn’t
> very CS-friendly, which I think the m-block interface for MAC
> implementations should be. A timer pause method is much more
> straightforward, but I don’t know if the base architecture can
> support it.

George

What you’re asking for would be hard to do using the existing framework.

I still don’t understand where you want to do this from. That is,
which code would be making the determination that it was time to pause
or unpause the timer.

Eric

Eric B. wrote:

> What you’re asking for would be hard to do using the existing
> framework.
>
> I still don’t understand where you want to do this from. That is,
> which code would be making the determination that it was time to pause
> or unpause the timer.

For instance, a node enters a backoff state and decides that it needs
the channel to be idle for 10 ms before it can transmit (802.11-like).
It starts a one-shot timer for 10 ms, and with each new RSSI reading
from the PHY, which arrives at variable times due to processing and
queueing, the protocol checks whether the reading exceeds some CCA
threshold. If it does, the timer needs to be paused, because the timer
represents time the node must wait while the channel is idle, not
busy. Once the RSSI drops back below the CCA threshold, the timer can
be resumed. When the timer fires, the node attempts to transmit.
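One way to sketch that behavior is to drive the countdown from the RSSI
readings themselves, so “pausing” falls out naturally: busy blocks
simply don’t decrement the remaining time (generic C++ sketch; the
class name, threshold, and per-block durations are illustrative, not
from the m-block API):

```cpp
// Backoff countdown that only advances while the channel is idle,
// i.e. while the per-block RSSI stays below the CCA threshold.
class backoff_state {
public:
    backoff_state(double cca_threshold_db, double backoff_s)
        : d_cca(cca_threshold_db), d_remaining(backoff_s) {}

    // Call once per RSSI reading; dt_s is the (possibly variable)
    // span of signal time covered by this block of samples.
    // Returns true once the node may attempt to transmit.
    bool on_rssi(double rssi_db, double dt_s) {
        if (rssi_db < d_cca)        // channel idle: count down
            d_remaining -= dt_s;
        // channel busy: nothing decremented, i.e. the timer is "paused"
        return d_remaining <= 0.0;
    }

private:
    double d_cca;        // CCA threshold in dB
    double d_remaining;  // idle time still required, in seconds
};
```

Note this trades a wall-clock timer for per-block bookkeeping, so its
resolution is the inter-block spacing discussed earlier in the thread.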

George