
Re: Pooled memory allocation for JFFS2


On Saturday 22 November 2003 10:41, Andrew Lunn wrote:
> > I have RedBoot loading files from JFFS2 now. I did need to disable
> > zlib in my mkfs.jffs2 because I was getting 'deflateInit failed'... will
> > look at that on Monday.
>
> FYI: I committed a patch to the zlib package a couple of days ago. It
> might be worth checking out an older version just to make sure it's not
> zlib that is your problem, rather than jffs2.
>
>      Andrew

I have encountered this quite often; it is invariably a sign that you are
running out of RAM. When one of zlib's internal memory allocations fails, the
failure is not translated into a sensible error message. You can reduce zlib's
memory requirements by passing suitable defines on the compiler's command line;
see services/compress/zlib/current/include/zconf.h for a description. Of course,
the measures described there will also reduce the compression ratio achieved.
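
To illustrate, here is a minimal sketch of what shrinking zlib's deflate
footprint amounts to. The build-time route is simply passing -DMAX_WBITS=...
and -DMAX_MEM_LEVEL=... as zconf.h explains; the run-time equivalent is calling
deflateInit2() with smaller windowBits/memLevel values. The particular numbers
below are examples I chose, not values prescribed anywhere in the eCos sources:

/*
 * Sketch: reducing deflate's working memory at run time.  According to
 * zconf.h, deflate needs roughly (1 << (windowBits+2)) + (1 << (memLevel+9))
 * bytes, i.e. about 256 KB with the defaults (windowBits=15, memLevel=8)
 * but only about 10 KB with the values used here.
 */
#include <stdio.h>
#include <zlib.h>

static int init_small_deflate(z_stream *strm)
{
    int rc;

    strm->zalloc = Z_NULL;      /* use zlib's default allocator */
    strm->zfree  = Z_NULL;
    strm->opaque = Z_NULL;

    /* windowBits = 11 (2 KB window), memLevel = 2: a much smaller
     * working set, at the cost of a worse compression ratio. */
    rc = deflateInit2(strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                      11 /* windowBits */, 2 /* memLevel */,
                      Z_DEFAULT_STRATEGY);
    if (rc == Z_MEM_ERROR)
        fprintf(stderr, "deflateInit failed: out of memory\n");
    return rc;
}

Keep in mind that the inflate side must be set up with a windowBits value at
least as large as the one used for compression, so both ends have to agree on
the smaller value.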

In general, zlib's memory requirements are huge with the standard values of
MAX_WBITS and MAX_MEM_LEVEL; I doubt many embedded systems have that much RAM
to spare. That was my main motivation for looking for a way to disable zlib
compression. Also, for large file systems, where RAM usage is dominated by the
large number of in-core jffs2_raw_node_ref structs, it can be very helpful to
use the pooled allocation and to increase the 'page size'.
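
In case it helps to see the idea, here is a minimal, generic sketch of pooled
allocation for fixed-size objects. The structure fields and the pool size are
made up for illustration; this is not the actual JFFS2 code:

/* Carve one statically sized array into a free list instead of doing a
 * heap allocation per node. */
#include <stddef.h>

struct node_ref {                 /* stand-in for jffs2_raw_node_ref */
    struct node_ref *next;
    unsigned int flash_offset;
    unsigned int totlen;
};

#define POOL_SIZE 1024            /* tune to the expected node count */

static struct node_ref pool[POOL_SIZE];
static struct node_ref *free_list;

void pool_init(void)
{
    size_t i;

    /* Thread every pool entry onto the free list. */
    for (i = 0; i < POOL_SIZE - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[POOL_SIZE - 1].next = NULL;
    free_list = &pool[0];
}

struct node_ref *pool_alloc(void)
{
    struct node_ref *n = free_list;
    if (n)
        free_list = n->next;      /* NULL here means the pool is exhausted */
    return n;
}

void pool_free(struct node_ref *n)
{
    n->next = free_list;          /* push back onto the free list */
    free_list = n;
}

Besides being fast and free of fragmentation, this makes the worst-case memory
consumption explicit: it is simply POOL_SIZE times the object size.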

And finally, a few general remarks about compression from a user's perspective:
having the compression inside the file system may be convenient, but it also
prevents me from exercising much control over it. I can neither choose a
compression method that suits my kind of data, nor selectively enable
compression only for the data for which it is worthwhile. Moreover, the results
of compressing data in small chunks tend to be inferior to those obtained by
compressing the entire amount of data at once. Finally, the compression seems
to cause problems within the FS itself (why is it that we need five spare erase
blocks at any point in time? I vaguely remember the reasoning was somehow
related to the possibility of compressed data expanding under garbage
collection).
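
A quick way to see the chunking effect is to compress the same buffer once as a
whole and once in independent 4 KB pieces with zlib's one-shot compress() call.
The test data below is made up for the example, but the per-chunk overhead and
the dictionary being restarted at every chunk boundary show up immediately in
the totals:

/* Compare whole-buffer compression with compression in independent
 * 4 KB chunks, as a chunk-oriented file system would have to do it. */
#include <stdio.h>
#include <zlib.h>

#define TOTAL  (64 * 1024)
#define CHUNK  (4 * 1024)

int main(void)
{
    static unsigned char src[TOTAL];
    static unsigned char dst[2 * TOTAL];
    unsigned long whole = 0, chunked = 0;
    uLongf out, piece;
    size_t i, off;

    /* Mildly repetitive input so there is something to compress. */
    for (i = 0; i < TOTAL; i++)
        src[i] = (unsigned char)("abcdabcdefef"[i % 12]);

    /* One-shot compression of the full buffer. */
    out = sizeof(dst);
    if (compress(dst, &out, src, TOTAL) == Z_OK)
        whole = out;

    /* The same data, compressed chunk by chunk. */
    for (off = 0; off < TOTAL; off += CHUNK) {
        piece = sizeof(dst);
        if (compress(dst, &piece, src + off, CHUNK) == Z_OK)
            chunked += piece;
    }

    printf("whole: %lu bytes, chunked: %lu bytes\n", whole, chunked);
    return 0;
}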

For all those reasons, a developer writing software for small,
resource-constrained embedded targets will almost always be better off doing
any required compression themselves instead of relying on the FS's internal
compression. File systems and compressors are two unrelated functions, and I
highly value modularity. If compression had been left out of the FS, the code
would probably have been simpler and easier to maintain. Just my opinion, of
course.

tk
-- 
Thomas Koeller, Software Development

Basler Vision Technologies
An der Strusbek 60-62
22926 Ahrensburg
Germany

Tel +49 (4102) 463-162
Fax +49 (4102) 463-239

mailto:thomas.koeller@baslerweb.com
http://www.baslerweb.com

