[ECOS] Serial buffer overrun

Shannon Holland holland@loser.net
Thu Jan 16 20:26:00 GMT 2003

Excellent - thanks all for the info!

First order of business will be to change the libc driver - sounds like that's 
the source of many of my problems when debugging!

As an experiment, I think I'll try toggling a PIO in the ISR/DSR so I 
can measure the switch time on a scope. My app is doing pretty much 
nothing in terms of interrupts/tasks/etc., so the switch time should be 
minimal. We shall see.

Longer term, I may look at handling the basic hardware IO in the ISR, 
then scheduling the DSR to move the data back to the serial channels, 
etc. I would think this will help quite a bit.

After that, I'll probably try out Paul's driver that Jani sent me - I'd 
like to be using high-speed serial eventually, so this looks to be a 
good thing!



On Thursday, January 16, 2003, at 09:56 AM, Jonathan Larmour wrote:

> Grant Edwards wrote:
>> On Wed, Jan 15, 2003 at 08:23:49PM -0800, Shannon Holland wrote:
>>> I also notice that the low level ISR immediately schedules a DSR - 
>>> how long is the delay from the exit from the ISR until the DSR 
>>> routine is called?
>> That depends on what other DSRs are running or how long an
>> application or driver has DSRs locked out.
> Indeed. Having to do a timeslice could be all it takes.
>>> I also noticed that in the DSR it pulls in a single byte and then 
>>> calls out to the channel callback, then gets the next byte, etc. Just 
>>> for grins I modified this code to pull in a number of bytes at a time 
>>> before calling the channel callback. I'm not sure this will buy me 
>>> anything (doesn't change behavior) - I need to read up on the uart 
>>> docs!
>> I gave up on the single-character callback scheme years ago and
>> made my DSR cognisant of the cbuf structure so that data bytes
>> are transferred directly to/from the circular buffers.  I found
>> that using the single-character callbacks I had no chance
>> whatsoever to keep up with multiple serial ports at high baud
>> rates.
> It's true that the driver design could be improved in that respect. 
> It's all possible...
>>> I also have another question as to how the debugger printf's interact 
>>> with program flow: I notice I drop a whole ton of bytes if I call 
>>> printf (anywhere from 20-60 bytes!). Are interrupts disabled when 
>>> using the monitor printf?
>> Don't know.  The diag_printf with which I'm familiar busy-waits
>> on the UART to which it's writing.  If you call it with
>> interrupts disabled, then interrupts stay disabled the whole
>> time.  If not, they stay enabled the whole time.
> The default diag_printf will disable interrupts while it is outputting. 
> Since you may well be using a serial line for output at the same speed, 
> that means ints will be off for the same number of chars as you're 
> writing.
> It would be better to use normal printf and change the driver 
> underneath libc stdio from /dev/ttydiag to /dev/ttyS0 or whatever it 
> is; that way you get interrupt driven output so they can interleave.
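The stdio console device is a CDL option, so a fragment like the following in the eCos configuration switches it away from the polled diag channel ("/dev/ser0" is a typical name for the interrupt-driven serial driver, but it is target-dependent - check your platform):

```
cdl_option CYGDAT_LIBC_STDIO_DEFAULT_CONSOLE {
    # default is "/dev/ttydiag"; the device name below is an example
    user_value "\"/dev/ser0\""
};
```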
> Jifl
> -- eCosCentric       http://www.eCosCentric.com/       
> <info@eCosCentric.com>
> --[ "You can complain because roses have thorns, or you ]--
> --[  can rejoice because thorns have roses." -Lincoln   ]-- 
> Opinions==mine

Before posting, please read the FAQ: http://sources.redhat.com/fom/ecos
and search the list archive: http://sources.redhat.com/ml/ecos-discuss
