This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



Re: RedBoot network timer question


On Fri, Jan 19, 2001 at 06:42:09PM +0000, Hugo Tyson wrote:

> > Right. That's how my hal_delay_us() routine works: it monitors
> > a hardware timer. But, it doesn't return until the timer times
> 
> Which is a good implementation.
>  
> > out, so no other code is running during delays.  
> 
> (unless it is preempted by a clock or timer interrupt, I assume)

There aren't any interrupts enabled, so nothing else is going
to happen.
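
For reference, mine is shaped roughly like the sketch below.  The
TIMER_COUNT register address and TICKS_PER_US value are made up here;
the real ones are whatever free-running counter the board provides.

    /* Hypothetical free-running hardware counter and its tick rate. */
    #define TIMER_COUNT()   (*(volatile unsigned long *)0xffff0000)
    #define TICKS_PER_US    1

    void
    hal_delay_us(int us)
    {
        unsigned long start = TIMER_COUNT();

        /* Busy-wait until the requested interval has elapsed.  With
           interrupts off in RedBoot, nothing else runs meanwhile. */
        while ((TIMER_COUNT() - start) < (unsigned long)(us * TICKS_PER_US))
            ;
    }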

> > Do some implementations of hal_delay_us() call some sort of
> > "idle task" routine that does tcp_polling or other useful stuff
> > while waiting for the timer to time out?  I looked at most of
> > the other ARM hal sources before I did mine, and all of the
> > hal_delay_us() routines just sat in a loop until the requested
> > amount of time had passed -- none of them called anything.
> 
> AFAIK they all just loop.  They're intended for use only with small
> numbers, y'know?

Right.  I had inferred from Gary's statement that some
platforms, while delaying, do useful work such that network
stuff continues to happen:

> > > >  2) But, the network polling code is only called once every
> > > >     250ms [the timeout value passed to gets() by the main
> > > >     loop].  I verified this by pinging the board and response
> > > >     times varied from 4ms to 290ms with a mean of 144ms.
> > > >
> > > >  3) That means that the network time only increments by a few
> > > >     milliseconds once every 250ms.  Time would appear to pass
> > > >     very slowly to the network routines, making the TCP
> > > >     timeouts longer by a factor of about 100.
> > > 
> > > Depending on the platform, these observations vary in their
> > > correctness. Some platforms have running timers which the delay
> > > routines simply monitor.  Others will actually simply wait.

[...]

> I suppose one could argue that there's a functionality gap in
> that we don't support an efficient (ie. yields the CPU) wait in
> the 100uS-1mS range.  But OTOH, is there a need?

If you want to "yield the CPU" during delays, then you've got
to make the leap to a multi-tasking system.  I was just trying
to think of a way for the system to record the passage of time
during delays so that the network timer increments regularly.

Don't misunderstand: I'm not trying to criticize the design of
RedBoot.  It looks like it does what it's supposed to do quite
well.  I'm going to try to add some network-based
functionality, and I'm just trying to make sure I understand
exactly how the system works before I start adding stuff.

FWIW, I've gotten pretty good results (network time is accurate
to within about 10%) by modifying my HAL and net_io.c to call
MS_TICKS() to delay 1ms at a time instead of calling
CYGACC_CALL_IF_DELAY_US().  [patch below]
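
The idea behind the MS_TICKS() that my HAL provides when networking is
enabled is roughly the following -- "net_ms_ticks" is just an
illustrative name for whatever millisecond counter the network timer
code actually reads:

    extern volatile unsigned long net_ms_ticks;  /* illustrative ms counter */

    #define MS_TICKS()                                                   \
        do {                                                             \
            hal_delay_us(1000);    /* burn 1ms on the hardware timer */  \
            net_ms_ticks++;        /* and record that the ms passed  */  \
        } while (0)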

The main difference is that the getc() routines are waiting in
1ms increments instead of 0.1ms increments.  I'm not sure what
the implications of that are going to be.  It would be simple
enough to add a 100us delay macro (similar to MS_TICKS) that
increments the network tick counter every tenth call.  If that
were done, the getc() routines would behave just as they do
now, and the network timer would still be reasonably accurate.
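
A small function would do just as well as a macro; a rough sketch,
where "net_ms_ticks" is again just an illustrative name for the
network timer's millisecond counter:

    extern volatile unsigned long net_ms_ticks;   /* illustrative ms counter */

    static void
    delay_100us_tick(void)
    {
        static int calls = 0;

        CYGACC_CALL_IF_DELAY_US(100);   /* wait 0.1ms, exactly as before    */
        if (++calls >= 10) {            /* every tenth call adds up to 1ms  */
            calls = 0;
            net_ms_ticks++;             /* so bump the network ms counter   */
        }
    }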

This probably isn't very important for a TCP console
connection, but I'm planning to use a second TCP socket for
some other stuff, and a raw Ethernet based protocol for yet
other stuff.  The existing code that I hope to use to implement
those features expects a millisecond system tick count to be
available.

I've also decreased the gets() timeout to 10ms in the main
loop.  [If nobody's typing commands, I might as well use the
CPU time by doing network stuff instead of spending it all in
the delay routine.]

------------------------------------------------------------------------
--- net_io.c.old	Fri Jan 19 12:56:06 2001
+++ net_io.c	Fri Jan 19 12:41:29 2001
@@ -83,20 +83,24 @@
     );
 RedBoot_config_option("Default server IP address",
                       bootp_server_ip,
                       "bootp", false,
                       CONFIG_IP
     );
 #endif
 
 #define TCP_CHANNEL CYGNUM_HAL_VIRTUAL_VECTOR_COMM_CHANNELS
 
+#if !defined(CYGPKG_REDBOOT_NETWORKING)
+#define MS_TICKS() hal_delay_us(1000)
+#endif
+
 #ifdef DEBUG_TCP
 int show_tcp = 0;
 #endif 
 
 static tcp_socket_t tcp_sock;
 static int state;
 static int _timeout = 500;
 static int orig_console, orig_debug;
 
 static int in_buflen = 0;
@@ -187,30 +191,30 @@
         }
     } else {
         return false;
     }
 }
 
 static cyg_uint8
 net_io_getc(void* __ch_data)
 {
     cyg_uint8 ch;
-    int idle_timeout = 100;  // 10ms
+    int idle_timeout = 10;  // 10ms
 
     CYGARC_HAL_SAVE_GP();
     while (true) {
         if (net_io_getc_nonblock(__ch_data, &ch)) break;
         if (--idle_timeout == 0) {
             net_io_flush();
-            idle_timeout = 100;
+            idle_timeout = 10;
         } else {
-            CYGACC_CALL_IF_DELAY_US(100);
+            MS_TICKS();
         }
     }
     CYGARC_HAL_RESTORE_GP();
     return ch;
 }
 
 static void
 net_io_flush(void)
 {
     int n;
@@ -307,27 +311,27 @@
 }
 
 static cyg_bool
 net_io_getc_timeout(void* __ch_data, cyg_uint8* ch)
 {
     int delay_count;
     cyg_bool res;
     CYGARC_HAL_SAVE_GP();
 
     net_io_flush();  // Make sure any output has been sent
-    delay_count = _timeout * 10; // delay in .1 ms steps
+    delay_count = _timeout;
 
     for(;;) {
         res = net_io_getc_nonblock(__ch_data, ch);
         if (res || 0 == delay_count--)
             break;
-        CYGACC_CALL_IF_DELAY_US(100);
+        MS_TICKS();
     }
 
     CYGARC_HAL_RESTORE_GP();
     return res;
 }
 
 static int
 net_io_control(void *__ch_data, __comm_control_cmd_t __func, ...)
 {
     static int vector = 0;
------------------------------------------------------------------------



-- 
Grant Edwards
grante@visi.com
