This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: Scheduler lock in Doug Lea's malloc implementation - be real careful


>>
>>    Bob> Can someone explain me why the default memory allocator locks
>>    Bob> the scheduler ? I would expect it to be legitimate for a low
>>    Bob> priority thread with no real time constraints to do dynamic
>>    Bob> memory allocation without influencing the real-time behaviour
>>    Bob> of threads with real-time constraints, as long as they have a
>>    Bob> higher priority.
>>
>> I was not involved in the initial implementation of the memory
>> allocators, so this is not gospel. I do remember that the uITRON
>> compatibility layer imposed various requirements.
>>
>> I believe the theory is that the memory allocators should be very
>> fast. If you use an alternative locking mechanism like a mutex, both
>> the lock and unlock functions will themselves lock and unlock the
>> scheduler. When you have a memory allocator that takes a comparable
>> amount of time to a mutex operation, you are better off locking the
>> scheduler around the memory allocation rather than going through mutex
>> lock and unlock calls.
>>
>> If you are using a fixed block memory pool as per mfiximpl.hxx then
>> that theory may well be correct. If you are using some other allocator
>> such as dlmalloc then I suspect the theory is wrong and for such
>> allocators the code should use a mutex instead, but I have not done
>> any measurements. There are going to be complications, e.g.
>> average vs. maximum times.
>>
>> Bart
>>
>> --
>> Bart Veer                       eCos Configuration Architect
>> http://www.ecoscentric.com/     The eCos and RedBoot experts
>>
>> --
>
>The data that describes which portion(s) of the pool are in use is common to
>all threads accessing the pool. Each successful allocate/deallocate request
>results in at least one read of a portion of that common data, a varying
>number of instructions to evaluate it, and finally updates to the affected
>data value(s).
>
>This "read/process/write" sequence is commonly referred to as a critical
>instruction sequence: it requires a guarantee that absolutely precludes a
>second operation from starting while a prior one is in progress. Failing to
>provide that guarantee allows sporadic, unpredictable timing errors that are
>extremely difficult to recognize, let alone identify and correct.
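The read/process/write sequence described above can be made concrete with a minimal sketch: popping a block from a shared free list. This is a generic illustration in portable C++, not the actual eCos pool code; all names here (Block, pool_alloc, pool_free, pool_lock) are made up for the example.

```cpp
#include <mutex>

struct Block { Block* next; };

static Block*     free_head = nullptr;  // shared pool state
static std::mutex pool_lock;            // guards free_head

// Allocate: read the head (read), follow its link (process),
// store the new head (write). Without the lock, two threads
// could read the same head and hand out the same block twice.
Block* pool_alloc() {
    std::lock_guard<std::mutex> g(pool_lock);
    Block* b = free_head;            // read
    if (b) free_head = b->next;      // process + write
    return b;
}

void pool_free(Block* b) {
    std::lock_guard<std::mutex> g(pool_lock);
    b->next = free_head;             // read + process
    free_head = b;                   // write
}
```

Whether that guard is a mutex (as here) or a scheduler lock (as in the current eCos implementation) is exactly the trade-off under discussion.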

Obviously, you need protection there. What I do not understand is why it is implemented with a scheduler lock/unlock rather than with mutexes or semaphores. The proposed use of memory pools is also not really clear to me (and it certainly raises portability issues when moving from one OS to another).

As a general rule I _always_ consider dynamic memory allocation a non-deterministic process, so I _never_ use it in time-critical code. It nevertheless seems odd to me that you should not be allowed to use dynamic memory allocation in low-priority threads with no real-time requirements at all, just because of timing requirements in higher-priority threads. I see no reason why scanning the heap could not be interrupted (put on hold for a while) to give a real-time thread time to finish its job. That is not possible if you protect things with a big kernel lock, but it would be if the search for memory were protected by a mutex.
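The proposal above can be sketched as a thin wrapper around the allocator. This is a portable C++ illustration of the idea, not eCos code, and the names rt_malloc/rt_free/heap_lock are invented for the example: while a low-priority thread holds the mutex, the scheduler remains free to preempt it and run a higher-priority thread that does not touch the heap.

```cpp
#include <cstdlib>
#include <mutex>

static std::mutex heap_lock;  // replaces the scheduler lock/unlock pair

void* rt_malloc(std::size_t n) {
    // Waiting on the mutex blocks only this thread; unlike a
    // scheduler lock, it does not stop every other thread.
    std::lock_guard<std::mutex> g(heap_lock);
    return std::malloc(n);    // heap scan is preemptible here
}

void rt_free(void* p) {
    std::lock_guard<std::mutex> g(heap_lock);
    std::free(p);
}
```

The usual caveat applies: if a high-priority thread does call into the allocator while a low-priority thread holds the mutex, you get priority inversion unless the mutex supports priority inheritance or a similar protocol.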

Any comments? Am I making a fundamental reasoning error here? If not, I am going to try changing the implementation, see whether it solves our problem, and propose a patch.

Thanks,

Bob




--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss

