This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Scheduler: interrupt_end starvation causing idle-thread stack overflow


I am having trouble with the idle thread's stack overflowing. To test, I reduced my application down to a single thread plus the idle thread.

I am using the K70 HAL.

The main thread writes to and reads from flash, so it uses a condition-variable wait. All it does is loop on writes and reads for a very long time, a minute or so. Basically, the application uses a single thread for some system initialization, and I don't care if it starves other threads during startup.

What happens is that the idle thread and the main thread trade back and forth, but eventually the idle thread is interrupted and goes through interrupt_end. That call leaves some values on the idle thread's stack, and this happens multiple times.

It appears that the main thread so starves the idle thread of CPU time that the calls to interrupt_end eventually overflow its stack. This is simply because a starved idle thread never gets to return from interrupt_end, so its stack frames accumulate.

I can call a wait to give the idle thread more time, or I can do the initialization before starting my threads.

But does someone have an elegant solution that guarantees there is no overflow? It seems that for a robust system you would want to ensure that if a thread gets really busy due to some outside stimulus, the idle thread's stack is still guaranteed never to overflow.
--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss

