This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



Re: Problem with IP Stack ...


On Mon, 2002-09-09 at 09:52, Thomas BINDER wrote:
> Andrew Lunn wrote:
> > 
> > > There is one interesting thing, however, I discovered just this
> > > morning. I redirected the diag_printf messages to a different
> > > (debug) stream (not IP based) and ECOS does not crash any longer!
> > 
> > That's not really surprising. Using the stack to debug the stack is
> > never a good idea. Have gdb use the serial port, not the stack.
> 
> Well, I was not really debugging the stack. Isn't that a `regular' warning
> message that gets printed (and thereby somehow crashes the stack)? By the way, I
> don't even use gdb at all. I have Ctrl-C support disabled, but that obviously
> makes no difference either.
> 
> 
> > > With the diag_printf redirection in place I also see that the IP
> > > stack does not recover MBUFs (at least not within a few minutes)
> > > although it is still answering pings. It seems that MBUFs are never
> > > recovered, not even in regular operation. At
> > 
> > That's not correct. The number of mbufs is not going to zero. You have
> > lots of mbufs free. The information it prints out for clusters is 0,
> > but that figure is bogus anyway. The stack gets clusters from the
> > pool. It does not return them to the pool, but puts them on a linked
> > list. When it needs another cluster, it first tries the linked list
> > and then tries the pool.
> 
> You're right. This figure is indeed bogus. What is still interesting is that the
> number of free clusters is not increasing above 27 (in my case). What happens to
> the rest? What is the pool actually used for if memory is never returned to it
> (as opposed to the mbuf pool)?
> 

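Andrew's description of the cluster allocation above can be sketched as follows. This is only an illustration of the pattern he describes (free list tried first, then the pool, and freed clusters never going back to the pool); all names and sizes here are invented, not the actual eCos/BSD mbuf API:

```c
#include <assert.h>
#include <stddef.h>

#define NCLUSTERS 4
#define CLSIZE    2048

/* A cluster is link storage while free, payload while in use. */
union cluster {
    union cluster *next;   /* link when on the free list */
    char data[CLSIZE];     /* payload when allocated */
};

static union cluster  pool[NCLUSTERS];   /* fixed pool of clusters */
static int            pool_next = 0;     /* next never-used pool entry */
static union cluster *free_list = NULL;  /* freed clusters, linked together */

/* Allocate: try the free list first, then fall back to the pool. */
static void *cluster_alloc(void)
{
    if (free_list != NULL) {
        union cluster *c = free_list;
        free_list = c->next;             /* pop the head of the free list */
        return c;
    }
    if (pool_next < NCLUSTERS)
        return &pool[pool_next++];       /* take a fresh cluster from the pool */
    return NULL;                         /* out of clusters */
}

/* Free: push onto the free list; the pool's free count never rises again. */
static void cluster_free(void *p)
{
    union cluster *c = p;
    c->next = free_list;
    free_list = c;
}
```

Under this scheme the pool's free count only ever goes down, because returned clusters sit on the free list; that would explain why the "free clusters in the pool" figure printed by the statistics is bogus, as Andrew says.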
I'd suggest turning off "CYGPKG_IO_ETH_DRIVERS_WARN_NO_MBUFS" to see if
it helps the stability of your system.

Another couple of data items which might help understand your situation:
  * Which stack are you using (old == OpenBSD, new == FreeBSD, or lwIP)?
  * What sort of ethernet adaptor are you using?
  * What is the architecture (ARM, PowerPC, etc.)?
  * What is the vintage of your RedBoot?

> 
> > 
> > So i think you are running out of clusters, not mbufs. To prove this,
> > change io/eth/current/net/eth_drv.c:873. The error message is
> > wrong. MCLGET allocates a cluster not an mbuf. Change this message so
> > you can tell it apart from the error at line 820.
> 
> You're right again. I've already checked it.
> 
> > 
> > What's interesting is that the number of free clusters does not slowly
> > drop, but jumps. Some sort of event is happening which causes a whole
> > lot of clusters to suddenly get lost. I would look at the ring
> > buffer. What happens when the receive ring buffer wraps around because
> > the ethernet device is receiving faster than the stack is taking
> > packets out of the ring buffer? I've had problems like this with TX.
> 
> What ring buffers are you referring to?
> 
> 
> Thx,
> Tom
> 
> -- 
> Before posting, please read the FAQ: http://sources.redhat.com/fom/ecos
> and search the list archive: http://sources.redhat.com/ml/ecos-discuss
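The ring buffers Andrew mentions are the receive descriptor rings inside the ethernet driver. A minimal sketch of the wrap-around failure mode he describes (all names hypothetical, not the eCos driver API): when frames arrive faster than the stack drains the ring, the ring fills, and every frame that cannot be enqueued must have its cluster freed by the driver. Forgetting that would leak a whole burst of clusters at once, matching the sudden jumps in the free-cluster count:

```c
#include <assert.h>
#include <stddef.h>

#define RING_SIZE 8

/* Hypothetical RX ring: the driver writes at 'head', the stack
 * consumes at 'tail'. Indices grow monotonically; head - tail is the
 * number of entries currently queued. */
static void    *ring[RING_SIZE];
static unsigned head = 0, tail = 0;
static unsigned dropped = 0;

/* Driver side, called per received frame. Returns 0 when the ring is
 * full (wrap-around case); the caller must then free the cluster, or
 * it is leaked. */
static int rx_enqueue(void *cluster)
{
    if (head - tail == RING_SIZE) {      /* ring full: stack too slow */
        dropped++;
        return 0;
    }
    ring[head % RING_SIZE] = cluster;
    head++;
    return 1;
}

/* Stack side: drain one entry in FIFO order, or NULL if empty. */
static void *rx_dequeue(void)
{
    if (tail == head)
        return NULL;
    return ring[tail++ % RING_SIZE];
}
```

With diag_printf going over the same stack, the drain side gets even slower, which may be why redirecting the debug output to a non-IP stream changed the behaviour.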

-- 
------------------------------------------------------------
Gary Thomas                  |
eCosCentric, Ltd.            |  
+1 (970) 229-1963            |  eCos & RedBoot experts
gthomas@ecoscentric.com      |
http://www.ecoscentric.com/  |
------------------------------------------------------------



