This is the mail archive of the
ecos-discuss@sourceware.org
mailing list for the eCos project.
RE: Mbufs leak- inputs
- From: "Saritha Yellanki" <ysaritha at broadcom dot com>
- To: Avishay <avishay at wavion dot net>, "ecos-discuss at ecos dot sourceware dot org" <ecos-discuss at ecos dot sourceware dot org>
- Date: Fri, 29 May 2009 07:17:20 -0700
- Subject: RE: [ECOS] Mbufs leak- inputs
- References: <641013.18100.qm@web94614.mail.in2.yahoo.com> <3B0AC7A3DE042D478300B74F7E21C128101BDC1A8F@SJEXCHCCR01.corp.ad.broadcom.com> <005c01c97c04$b744e9b0$25cebd10$@gellatly@netic.com> <3B0AC7A3DE042D478300B74F7E21C1281026D60675@SJEXCHCCR01.corp.ad.broadcom.com> <23736399.post@talk.nabble.com>
Hi Avishay,
I am not sure whether the problem you are facing is the same as ours, but I would like to describe our issue and the details of the fix.
Due to a resource limitation in our Ethernet driver, we drop some IP fragments (for example, ICMP ping packets larger than 16K, which get IP-fragmented). Because of the drop, the remaining IP fragments sat in the IP reassembly queue and were never freed. This resulted in an mbuf leak in eCos.
The root cause was that the reassembly timer was not working. The fix was made in the BSD timer code, as shown below, by taking the patch from eCos CVS (latest version).
With the patch below, the timers work, the reassembly queue gets flushed, and the mbuf leak is fixed.
///////////////////////////////////////////////////////////////////////////////////////////////
106c106
< e = _timeouts;
---
> e = timeouts;
266a267
> int i;
275c276
< for (e = _timeouts; e; e = e->next) {
---
> for (e = _timeouts, i = 0; i < NTIMEOUTS; i++, e++) {
295a297
> int i;
299c301
< for (e = _timeouts; e; e = e->next) {
---
> for (e = _timeouts, i = 0; i < NTIMEOUTS; i++, e++) {
404a407,409
> // the following triggers if the "next" timeout has not just
> // passed, but passed by 1000 ticks - which with the normal
> // 1 tick = 10ms means 10 seconds - a long time.
406c411
< "Recorded alarm not in the future!" );
---
> "Recorded alarm not in the future! Starved network thread?" );
////////////////////////////////////////////////////////////////////////////////////////////////
I would suggest that you figure out where exactly the mbufs are being leaked or held up in your case.
Thanks,
Saritha.
-----Original Message-----
From: ecos-discuss-owner@ecos.sourceware.org [mailto:ecos-discuss-owner@ecos.sourceware.org] On Behalf Of Avishay
Sent: Wednesday, May 27, 2009 11:23 AM
To: ecos-discuss@ecos.sourceware.org
Subject: RE: [ECOS] Mbufs leak- inputs
Hey all,
I have encountered the same "Out of MBUFs" problem myself several times in
the past couple of months and I was able also to simulate it in several
ways.
I was very happy to read that there is a patch which solves this problem,
but I was unable to find it in the ECOS repository.
Can someone please guide me to the correct source code?
Thanks in advance,
Avishay
Saritha Yellanki wrote:
>
> Hi,
>
> Thanks to all who have responded.
>
> The issue is resolved. The BSD TCP/IP timers were not working on our code
> base (so the IP reassembly freeing timer was never kicking in).
>
> We have updated the file from eCos CVS; with the patch to
> ( ecos-2.0--src\packages\net\bsd_tcpip\v2_0\src\ecos\timeout.c ) our
> problem is resolved.
>
> Thanks
> Saritha.
>
> -----Original Message-----
> From: ecos-discuss-owner@ecos.sourceware.org
> [mailto:ecos-discuss-owner@ecos.sourceware.org] On Behalf Of Laurie
> Gellatly
> Sent: Thursday, January 22, 2009 1:43 AM
> To: ecos-discuss@ecos.sourceware.org; Alok Singh
> Subject: RE: [ECOS] Mbufs leak- inputs
>
> Hi Saritha,
> I've been having the 'out of MBUFs' problem as well, and had not been
> able to determine why. When my system reports it, though, it does not
> recover. I can reproduce your problem as you describe: it only takes
> about 10 pings for me to see it appear, but at least the system does
> not lock up in this case.
>
> ...Laurie:{)
>
>> -----Original Message-----
>> From: ecos-discuss-owner@ecos.sourceware.org [mailto:ecos-discuss-
>> owner@ecos.sourceware.org] On Behalf Of Saritha Yellanki
>> Sent: Thursday, 22 January 2009 4:46 AM
>> To: ecos-discuss@ecos.sourceware.org; Alok Singh
>> Subject: [ECOS] Mbufs leak- inputs
>>
>> Hi All,
>>
>> Did anyone come across this kind of issue with the eCos stack?
>>
>> We have a system running eCos. When we do an ICMP ping to the device
>> with a size of 32000 from a Windows PC, the requests time out and
>> after some time we run out of mbufs.
>>
>>
>> Any inputs or insights into this?
>>
>> Thanks,
>> saritha
>>
>>
>>
>> ================================================================
>>
>>
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>> warning: eth_recv out of MBUFs
>>
>> =======================================================================
>> ======
>>
>>
>> Dump
>> =======================================================================
>> =
>> Network stack mbuf stats:
>> mbufs 288, clusters 127, free clusters 0
>> Failed to get 0 times
>> Waited to get 0 times
>> Drained queues to get 0 times
>> VM zone 'ripcb':
>> Total: 32, Free: 32, Allocs: 0, Frees: 0, Fails: 0
>> VM zone 'tcpcb':
>> Total: 32, Free: 24, Allocs: 24, Frees: 16, Fails: 0
>> VM zone 'udpcb':
>> Total: 32, Free: 32, Allocs: 20, Frees: 20, Fails: 0
>> VM zone 'socket':
>> Total: 32, Free: 24, Allocs: 44, Frees: 36, Fails: 0
>> Misc mpool: total 131056, free 79344, max free block 76980
>> Mbufs pool: total 130944, free 112384, blocksize 128
>> Clust pool: total 262144, free 0, blocksize 2048
>>
>>
>>
>>
>> --
>> Before posting, please read the FAQ:
>> http://ecos.sourceware.org/fom/ecos
>> and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss
--
View this message in context: http://www.nabble.com/quesry-regarding-ecos-tp20125766p23736399.html
Sent from the Sourceware - ecos-discuss mailing list archive at Nabble.com.