This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



Re: from ISR to thread


Hi,

I do not understand: why don't you use a counting semaphore instead of the
condition variable? What makes you think that a DSR can signal a condition
variable but not a counting semaphore?
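
Something along these lines should do the job. This is only an untested
sketch against the standard eCos kernel C API; MY_DEVICE_VECTOR,
hw_read_data(), rbuf_put(), rbuf_get() and do_something() are placeholders
for your own interrupt vector, hardware access, ring buffer and processing
code. cyg_semaphore_post() is among the kernel calls that are allowed from
DSR context.

#include <cyg/kernel/kapi.h>
#include <cyg/hal/drv_api.h>

static cyg_sem_t      data_sem;     /* counting semaphore, initial count 0 */
static cyg_handle_t   int_handle;
static cyg_interrupt  int_object;

static cyg_uint32 my_isr(cyg_vector_t vector, cyg_addrword_t data)
{
    rbuf_put(hw_read_data());         /* grab the data right away         */
    cyg_drv_interrupt_mask(vector);   /* hold off this device until DSR   */
    cyg_drv_interrupt_acknowledge(vector);
    return CYG_ISR_HANDLED | CYG_ISR_CALL_DSR;
}

static void my_dsr(cyg_vector_t vector, cyg_ucount32 count, cyg_addrword_t data)
{
    /* 'count' is the number of ISR invocations since the DSR last ran;
       it is normally 1 here because the ISR masks the device. */
    while (count-- > 0)
        cyg_semaphore_post(&data_sem);
    cyg_drv_interrupt_unmask(vector);
}

static void my_thread(cyg_addrword_t data)
{
    for (;;) {
        cyg_semaphore_wait(&data_sem);  /* one wakeup per posted data item */
        do_something(rbuf_get());
    }
}

static void my_init(void)
{
    cyg_semaphore_init(&data_sem, 0);
    cyg_interrupt_create(MY_DEVICE_VECTOR, 0 /* priority */, 0 /* data */,
                         my_isr, my_dsr, &int_handle, &int_object);
    cyg_interrupt_attach(int_handle);
    cyg_interrupt_unmask(MY_DEVICE_VECTOR);
}

Because the semaphore counts the posts, no wakeup is lost even if a new
interrupt arrives while the thread is still busy in do_something(); the
thread simply finds the semaphore non-zero when it comes back to
cyg_semaphore_wait().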

Robin

Schmidt Henning Larsen <HenningLS@danfoss.com> writes:

> Hi
> 
> I'm looking into some of the advantages and disadvantages of eCos.
> One of the big constraints I've found in eCos is that it is difficult to get
> a continuous stream of data from an ISR to a thread.
> I will describe here what I think the problem is, and I hope someone will
> comment on it.
> 
> In operating systems that allow us to signal a counting semaphore from an
> ISR and that don't have DSRs, it is simple to get data from an ISR to a
> thread.
> 
> Pseudo code for such a RTOS:
> 
> Semaphore: sem
> Hardware: hw
> Ringbuffer: rbuf
> 
> Threadcode:
> {
>   while(1)
>   {
>     sem.wait()
>     data = rbuf.getdata()
>     doSomething(data)
>   }
> }
> 
> ISR code:
> {
>   newData = hw.data // read data from hardware
>   rbuf.putdata(newData) // put in ringbuffer
>   sem.signal()
>   interrupt_acknowledge()
> }
> 
> 
> This will work, even if we get a new interrupt while the thread is in
> "doSomething", because it is a counting semaphore.
> If we do the same thing in eCos, using a DSR and a condition variable, it
> will look as follows:
> 
> 
> Condition variable: cond
> Hardware data: hw
> Ringbuffer: rbuf
> 
> Threadcode:
> {
>   while(1)
>   {
>     cyg_drv_cond_wait(cond)
>     data = rbuf.getdata()
>     doSomething(data)
>   }
> }
> 
> ISR code:
> {
>   rbuf.putdata(hw.data)
>   cyg_drv_interrupt_mask() // don't allow interrupt on this device
>   cyg_drv_interrupt_acknowledge()
>   return CYG_ISR_HANDLED | CYG_ISR_CALL_DSR
> }
> 
> DSR code
> {
>   cyg_drv_cond_signal(cond)
>   cyg_drv_interrupt_unmask() // allow interrupt on this device
> }
> 
> 
> This will not work, because if a new interrupt appears while the thread is
> in "doSomething", the thread will not be notified, since it has not yet
> returned to wait on the condition variable. When the thread returns to the
> condition variable it has missed the signal, and does not know that it
> should actually run once more. The problem is that we cannot signal a
> counting semaphore from the DSR instead.
> 
> If we add yet another thread, we can solve the problem this way:
> 
> 
> Condition variable: cond
> Semaphore: sem
> Hardware: hw
> Ringbuffer: rbuf
> 
> WorkingThread code:
> {
>   while(1)
>   {
>     cyg_semaphore_wait(sem)
>     data = rbuf.getdata()
>     doSomething(data)
>   }
> }
> 
> ISR code:
> {
>   rbuf.putdata(hw.data)
>   cyg_drv_interrupt_mask() // don't allow interrupt on this device
>   cyg_drv_interrupt_acknowledge() //allow other interrupts to appear
>   return CYG_ISR_HANDLED | CYG_ISR_CALL_DSR // tells the kernel to run the DSR
> }
> 
> DSR code:
> {
>   cyg_drv_cond_signal(cond)
> }
> 
> extra thread code:
> {
>   while(1)
>   {
>     cyg_drv_cond_wait(cond)
>     cyg_semaphore_post(sem)
>     cyg_drv_interrupt_unmask() // allow interrupt on this device
>   }
> }
> 
> 
> This should work, but there are some problems: it introduces a very long
> interrupt latency on the device, and it carries a large overhead (a DSR, an
> extra thread, and both a condition variable and a semaphore).
> Another way to solve the problem is to be absolutely sure that the working
> thread has returned to the condition variable before a new interrupt
> appears, but that puts a great constraint on the way we write our program.
> 
> My conclusion is:
> eCos does not have a useful synchronization mechanism for situations where
> we want to allow new interrupts to arrive while we are still processing
> data from previous interrupts on the same source/device.
> 
> I hope someone can tell me if I'm wrong.
> 
> Henning

