This is the mail archive of the
ecos-discuss@sources.redhat.com
mailing list for the eCos project.
Re: malloc/new in DSRs
- From: Bart Veer <bartv at tymora dot demon dot co dot uk>
- To: george at stratalight dot com
- Cc: ecos-discuss at sources dot redhat dot com
- Date: Mon, 29 Jul 2002 22:34:16 +0100 (BST)
- Subject: Re: [ECOS] malloc/new in DSRs
- References: <F626113795D3EB4482E5ACFBD93465121361A4@mailhost.stratalight.com>
- Reply-to: bartv at tymora dot demon dot co dot uk
>>>>> "George" == George Sosnowski <george@stratalight.com> writes:
George> If malloc is configed to be threadsafe in ecos.ecc, then
George> is it ok to use malloc/new/free/delete in DSRs? I assume
George> it is, but want to make sure.
The short answer is no: thread context is very different from DSR
context; see the kernel documentation. The obvious way to implement a
thread-safe malloc() uses a mutex to protect the heap's shared data,
so every malloc() or free() call must lock the mutex and then unlock
it again. A DSR is not allowed to call a mutex lock function.
Consider what might happen if a DSR did try to call malloc(). Suppose
some thread is in the middle of a malloc() call, and hence has the
mutex locked. An interrupt now goes off, the ISR runs and requests a
DSR invocation. Your DSR now calls malloc(), tries to lock the mutex,
and discovers the mutex is already owned by a thread. So the DSR would
need to wait until the thread had unlocked the mutex, but DSRs have
absolute priority over threads so the thread cannot run again until
the DSR has completed. Deadlock.
Now for the longer answer: the current implementation of thread-safe
malloc() does not always use a mutex to protect the heap. Instead it
locks the scheduler. In this scenario it would actually be safe to
call malloc() from a DSR, because DSRs will not run while the
scheduler is locked. However, this is really a bug in the current
malloc implementation, a left-over from the early days when the only
allocator was the fixed-block one, and it may get fixed at any time.
Therefore you should not rely on the current behaviour.
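Since the DSR cannot safely allocate, the usual pattern is to defer the allocation to a thread: the DSR just enqueues a request and a worker thread, running in thread context, does the malloc(). The sketch below models that hand-off with a single-producer ring buffer standing in for an eCos mailbox; the names dsr_post and worker_poll are illustrative, not the eCos API.

```c
/* Sketch: instead of calling malloc() in the DSR, hand the request to
   a thread. A simple ring buffer stands in for an eCos mailbox here
   (illustrative names, not the eCos API). */
#include <stdlib.h>
#include <stddef.h>

#define QLEN 16
static int queue[QLEN];
static volatile unsigned head, tail;

/* Called from DSR context: no mutex, no malloc, just enqueue. */
static int dsr_post(int nbytes) {
    if (head - tail == QLEN) return 0;   /* queue full, drop request */
    queue[head % QLEN] = nbytes;
    ++head;
    return 1;
}

/* Called from thread context, where blocking on the heap mutex is
   allowed, so malloc() is safe here. */
static void *worker_poll(void) {
    if (tail == head) return NULL;       /* nothing pending */
    int nbytes = queue[tail % QLEN];
    ++tail;
    return malloc(nbytes);
}
```

In real eCos code the DSR would use a non-blocking primitive such as a mailbox try-put or a semaphore post to wake the worker thread, rather than polling.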
There is an argument that for certain implementations of memory
allocators, especially fixed-block ones, it is legitimate to use a
scheduler lock rather than a mutex: the implementation of mutex lock
and unlock itself implicitly involves locking the scheduler, so for a
sufficiently simple memory allocator, briefly locking the scheduler
directly might actually be preferable to using a mutex. This needs
further investigation.
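To make that argument concrete, here is a sketch of a fixed-block allocator whose critical section is a couple of pointer operations, short enough that a scheduler lock could plausibly replace a mutex. The scheduler lock is modelled by a simple counter; all names are illustrative, not the eCos implementation.

```c
/* Fixed-block allocator sketch: the critical section is constant-time,
   so briefly locking the scheduler (modelled here by a counter) could
   reasonably replace a mutex. Illustrative names, not eCos code. */
#include <stddef.h>

#define NBLOCKS 8
#define BLKSIZE 32

union blk {
    union blk      *next;              /* link while on the free list */
    unsigned char   data[BLKSIZE];     /* payload while allocated */
};

static union blk pool[NBLOCKS];
static union blk *free_list;
static int sched_lock_count;

static void sched_lock(void)   { ++sched_lock_count; }
static void sched_unlock(void) { --sched_lock_count; }

static void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < NBLOCKS; ++i) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

static void *fb_alloc(void) {
    sched_lock();                      /* constant-time critical section */
    union blk *p = free_list;
    if (p) free_list = p->next;
    sched_unlock();
    return p;
}

static void fb_free(void *vp) {
    union blk *p = vp;
    sched_lock();
    p->next = free_list;
    free_list = p;
    sched_unlock();
}
```

Note this only helps because alloc and free touch a fixed, tiny amount of state; a general-purpose heap with coalescing can hold the lock for an unbounded time, which is exactly why it should use a mutex instead.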
Bart
--
Before posting, please read the FAQ: http://sources.redhat.com/fom/ecos
and search the list archive: http://sources.redhat.com/ml/ecos-discuss