This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.



RE: mbuf leak/full or lockup in overloaded condition


 
Hello,
  
  I have a switch running eCos 2.0. When it is left in an isolated network containing loops for about a day, management locks up: no rx/tx is happening, even though the hardware is still functional. When I dump the stats, I see the following.
 
The mbuf pool is full, with no free buffers (marked with #### below), and it stays in this state thereafter. I left the switch for a day but it did not recover. We see this issue often under the above condition.
 
Network stack mbuf stats:
   mbufs 2045, clusters 10, free clusters 10
   Failed to get 0 times
   Waited to get 0 times
   Drained queues to get 0 times
VM zone 'ripcb':
  Total: 80, Free: 80, Allocs: 1, Frees: 1, Fails: 0
VM zone 'tcpcb':
  Total: 80, Free: 74, Allocs: 20, Frees: 14, Fails: 0
VM zone 'udpcb':
  Total: 80, Free: 77, Allocs: 18, Frees: 15, Fails: 0
VM zone 'socket':
  Total: 80, Free: 71, Allocs: 39, Frees: 30, Fails: 0
Misc mpool: total  131056, free   14800, max free block 14020
Mbufs pool: total  130944, #### free       0 ####, blocksize  128
Clust pool: total  262144, free  239616, blocksize 2048
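
In case it helps anyone reproduce this: the dump above looks like the output of cyg_kmem_print_stats() from the BSD stack support code (my assumption; adjust the call if your tree names it differently). A minimal sketch of a monitor thread that logs the pool state every 10 seconds, so the log shows whether the mbuf pool drains gradually (a slow leak) or collapses suddenly under the loop storm:

#include <cyg/kernel/kapi.h>

/* Assumption: this is the BSD stack's pool-statistics routine
   that produced the dump above. */
extern void cyg_kmem_print_stats(void);

static char mon_stack[4096];
static cyg_thread mon_thread;
static cyg_handle_t mon_handle;

/* Hypothetical monitor thread: dump mbuf/cluster stats periodically
   so the exact moment of exhaustion shows up in the log. */
static void mbuf_monitor(cyg_addrword_t data)
{
    for (;;) {
        cyg_kmem_print_stats();
        cyg_thread_delay(1000);   /* 10 s at the default 100 Hz tick */
    }
}

void start_mbuf_monitor(void)
{
    cyg_thread_create(10, mbuf_monitor, 0, "mbuf monitor",
                      mon_stack, sizeof(mon_stack),
                      &mon_handle, &mon_thread);
    cyg_thread_resume(mon_handle);
}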
 
I previously had a similar problem on eCos 2.0 where the free cluster count dropped to zero. That one was due to the IP fragmentation/reassembly code: the timers were not kicking in to collect the leftover fragments from the reassembly queue. The fix was taken from the latest sources.
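
For anyone else chasing that cluster variant: in the BSD-derived stack, partially reassembled datagrams are supposed to age out via the protocol slow timer (ip_slowtimo, run every 500 ms), and if that timer never fires, stale fragment chains pin their clusters forever. A rough sketch of the aging logic, using stand-in types rather than the exact eCos 2.0 sources:

#include <stddef.h>

/* Minimal stand-in for the real reassembly queue entry; the actual
   stack keeps struct ipq chains with the fragment mbufs attached. */
struct ipq {
    struct ipq *next;
    int         ipq_ttl;   /* slow-timer ticks left for this chain */
};

static struct ipq *reass_queue;        /* pending fragment chains */

extern void ip_freef(struct ipq *fp);  /* frees the chain's mbufs/clusters */

/* Called from the protocol slow timer every 500 ms. If this never
   runs (the bug described above), expired chains are never freed and
   the cluster pool drains to zero. */
void ip_slowtimo(void)
{
    struct ipq **prevp = &reass_queue;
    struct ipq *fp, *next;

    for (fp = reass_queue; fp != NULL; fp = next) {
        next = fp->next;
        if (--fp->ipq_ttl <= 0) {
            *prevp = next;    /* unlink the stale chain */
            ip_freef(fp);     /* release its mbufs and clusters */
        } else {
            prevp = &fp->next;
        }
    }
}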
 
Let me know if you have any insights into the above problem.

Thanks,
Rajesh


--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss

