This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.
With the attached change, I was able to significantly reduce the memory requirements for my PPP test application: roughly 30%, from NET_MEM_USAGE = 90k down to 60k. The problem is that the compression buffer (~4k) is allocated out of a pool that is only NET_MEM_USAGE/4 bytes in size. This raises the question of whether the current pool-based allocation scheme for networking gives optimal memory utilisation when memory is tight.

--
Øyvind Harboe
http://www.zylin.com
Index: src/if_ppp.c
===================================================================
RCS file: /cvs/ecos/ecos-opt/net/net/ppp/current/src/if_ppp.c,v
retrieving revision 1.2
diff -u -w -r1.2 if_ppp.c
--- src/if_ppp.c	17 Apr 2004 03:13:06 -0000	1.2
+++ src/if_ppp.c	15 Jul 2004 13:38:44 -0000
@@ -123,7 +123,6 @@
 #include <sys/sockio.h>
 //#include <sys/kernel.h>
 #include <sys/time.h>
-#include <sys/malloc.h>

 #include <net/if.h>
 #include <net/if_types.h>
@@ -286,8 +285,7 @@
     sc->sc_relinq = NULL;
     bzero((char *)&sc->sc_stats, sizeof(sc->sc_stats));
 #ifdef VJC
-    MALLOC(sc->sc_comp, struct slcompress *, sizeof(struct slcompress),
-           M_DEVBUF, M_NOWAIT);
+    sc->sc_comp=(struct slcompress *)malloc(sizeof(struct slcompress));
     if (sc->sc_comp)
 	sl_compress_init(sc->sc_comp, -1);
 #endif
@@ -363,7 +361,7 @@
 #endif /* PPP_FILTER */
 #ifdef VJC
     if (sc->sc_comp != 0) {
-	FREE(sc->sc_comp, M_DEVBUF);
+	free(sc->sc_comp);
 	sc->sc_comp = 0;
     }
 #endif
Attachment: eb40appptest.ecm (Description: Text document)
-- Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss