This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



Re: RedBoot network timer question


On Thu, Jan 18, 2001 at 04:35:42PM -0700, Gary Thomas wrote:

> > Are the following observations correct?
> > 
> >  1) The network code keeps track of millisecond "ticks" by
> >     delaying for 1ms and incrementing a counter every time any
> >     of the code uses the MS_TICKS() to check the current time.
> > 
> >  2) But, the network polling code is only called once every
> >     250ms [the timeout value passed to gets() by the main
> >     loop].  I verified this by pinging the board and response
> >     times varied from 4ms to 290ms with a mean of 144ms.
> > 
> >  3) That means that the network time only increments by a few
> >     milliseconds once every 250ms.  Time would appear to pass
> >     very slowly to the network routines, making the TCP
> >     timeouts longer by a factor of about 100.
> 
> Depending on the platform, these observations vary in their
> correctness. Some platforms have running timers which the delay
> routines simply monitor.  Others will actually simply wait.

Right. That's how my hal_delay_us() routine works: it monitors
a hardware timer. But, it doesn't return until the timer times
out, so no other code is running during delays.  
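
For reference, here's roughly the shape of mine -- a sketch only,
since hal_read_counter() and TICKS_PER_US are hypothetical
stand-ins for the real platform-specific timer register and its
rate:

    extern unsigned long hal_read_counter(void); /* hypothetical: read the
                                                    free-running HW timer  */
    #define TICKS_PER_US 50                      /* hypothetical: 50MHz timer */

    void hal_delay_us(int us)
    {
        unsigned long start = hal_read_counter();
        unsigned long ticks = (unsigned long)us * TICKS_PER_US;

        /* Spin until the interval has elapsed -- nothing else runs. */
        while ((hal_read_counter() - start) < ticks)
            ;
    }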

Do some implementations of hal_delay_us() call some sort of
"idle task" routine that does tcp_polling or other useful stuff
while waiting for the timer to time out?  I looked at most of
the other ARM hal sources before I did mine, and all of the
hal_delay_us() routines just sat in a loop until the requested
amount of time had passed -- none of them called anything.
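
(Purely to illustrate what I'm asking about: a delay routine that
gives other code a chance to run might look something like the
following, where net_idle_poll() is a made-up hook rather than
anything in the current sources, and hal_read_counter() /
TICKS_PER_US are the same hypothetical stand-ins as above.)

    extern unsigned long hal_read_counter(void); /* hypothetical */
    #define TICKS_PER_US 50                      /* hypothetical */
    extern void net_idle_poll(void);             /* made-up callback */

    void hal_delay_us(int us)
    {
        unsigned long start = hal_read_counter();
        unsigned long ticks = (unsigned long)us * TICKS_PER_US;

        /* Same timed wait, but let the network code run each time
           around the loop instead of spinning uselessly. */
        while ((hal_read_counter() - start) < ticks)
            net_idle_poll();
    }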

> Point of observation: timers in the RedBoot stack are meant to
> keep things from hanging up, not meant to be necessarily
> "wallclock" accurate. If you can tell me how to implement such
> [accurate] timers without using interrupts [and with reasonable
> overheads], I'm all ears.

The easiest thing I can think of would be to call MS_TICKS() an
appropriate number of times instead of ..._DELAY_US() wherever
a delay of 1ms or greater is needed.  The most important place
would be in the HAL code itself inside the getc_timeout()
routine(s).  There are a couple of places within the network
code where small (100us) delays are required -- we can just
leave those alone, since they're small with respect to a 1ms
tick and they seem to occur infrequently.  This wouldn't provide
AC-mains accuracy, but I think it would be within a factor of 2.
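
For getc_timeout() I'm picturing something like this -- just a
sketch, since the real routine is per-platform and
uart_getc_nonblock() is a made-up stand-in for the polled UART
read:

    extern int uart_getc_nonblock(char *ch); /* hypothetical polled read */

    /* Wait up to timeout_ms for a character, burning the time in 1ms
       MS_TICKS() calls (from the RedBoot network support) instead of
       one long CYGACC_CALL_IF_DELAY_US(), so the network tick counter
       keeps advancing while we wait. */
    static int
    getc_timeout(char *ch, int timeout_ms)
    {
        int i;

        for (i = 0; i < timeout_ms; i++) {
            if (uart_getc_nonblock(ch))
                return 1;           /* got a character */
            (void)MS_TICKS();       /* ~1ms delay + tick increment */
        }
        return 0;                   /* timed out */
    }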

If the network stuff isn't installed, we could define
MS_TICKS() to be CYGACC_CALL_IF_DELAY_US(1000).
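
i.e. something along these lines (the config symbol to test is a
guess -- whatever selects the RedBoot networking code):

    /* Sketch only: fall back to a plain 1ms delay when there's no
       network code around to maintain a tick counter. */
    #ifndef CYGPKG_REDBOOT_NETWORKING    /* guard name is a guess */
    #define MS_TICKS() CYGACC_CALL_IF_DELAY_US(1000)
    #endif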

Or, I may have misunderstood something fundamental and this
won't work at all...

I'll probably try it to see what happens.  I'll probably also
reduce the network polling interval from 250ms to around 10ms,
since some of our host-end programs time out after 200ms.

-- 
Grant Edwards
grante@visi.com
