[ECOS] amount of cpu state to be saved for exceptions

sandeep shimple0@yahoo.com
Tue Dec 14 13:16:00 GMT 2004

> > > in __default_exception_handler, cyg_hal_exception_handler is passed the
> > > address of saved CPU state.
> > >
> > > since we don't know in what way an application-installed handler would
> > > want to use the saved state (it might choose to look at some or all of
> > > the saved CPU state), we can't drop any of the saves even in a production
> > > kernel, where for speed we could otherwise skip saving the callee-saved
> > > registers in the interrupt path.
> > 
> > Is the above right?
> > 
> > There is another related doubt I have:
> > 
> > is it necessary to use the same HAL_SavedRegisters structure for interrupts,
> > exceptions, and context switches? Could you have variations of the same
> > structure definition for each of them?

Thanks, John, for your explanation. Nick had also explained this in the thread
involving the following posts:

http://sources.redhat.com/ml/ecos-discuss/2002-10/msg00058.html and

> I probably misunderstood what you are trying to do though.  :)
Yes, you probably did. I was asking whether it is possible to have three
separate structures for saving CPU/register state for interrupts, exceptions,
and thread context switches, rather than the single HAL_SavedRegisters
structure used for all three. Would doing that violate any eCos convention?
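To make the idea concrete, here is a minimal sketch of what three separate frame layouts could look like. The register names and counts are illustrative (a hypothetical MIPS-like 32-register machine), not the actual eCos HAL definitions: a context switch happens at a call boundary, so only callee-saved registers need to live in the frame; an interrupt must additionally preserve the caller-saved scratch registers; an exception must save everything, because an installed handler may inspect any register.

```c
#include <stdint.h>

typedef uint32_t cyg_uint32;

/* Context switch: occurs at a known call site, so the compiler has
   already spilled any live caller-saved registers; only callee-saved
   state plus SP/PC belong in the frame. */
typedef struct {
    cyg_uint32 s[9];                   /* s0-s8: callee-saved */
    cyg_uint32 sp, pc;
} hal_context_frame;                   /* 11 words = 44 bytes */

/* Interrupt: the interrupted code did not expect a call, so the
   caller-saved (scratch) registers must be preserved too; the
   callee-saved registers can be skipped, since the ISR, being
   compiled C, will preserve them itself. */
typedef struct {
    cyg_uint32 t[10];                  /* t0-t9: caller-saved temps */
    cyg_uint32 a[4];                   /* a0-a3: argument registers */
    cyg_uint32 v[2];                   /* v0-v1: return values */
    cyg_uint32 ra, sp, pc, sr;
} hal_interrupt_frame;                 /* 20 words = 80 bytes */

/* Exception: the installed handler may look at (or modify) ANY
   register, so the full architectural state is saved. */
typedef struct {
    cyg_uint32 gpr[31];                /* all GPRs except $zero */
    cyg_uint32 pc, sr, cause, badvaddr;
} hal_exception_frame;                 /* 35 words = 140 bytes */
```

With this split, the interrupt path pushes 60 bytes less than a full exception frame, and the context-switch frame is smaller still, at the cost of giving up the single common layout.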

I had given the relevant figures to emphasize the potential stack-space
savings. This matters especially on architectures with different kinds of
memory (on a certain SOC I have mentioned in the past): plentiful DRAM plus a
small amount of low-latency R/W memory. If you want to keep the stacks of
certain special threads in that fast memory, every word saved matters. The
figures I mentioned indicate a substantial saving, on the order of 100 bytes
per state save.


