This is the mail archive of the
mailing list for the eCos project.
Re: NAND technical review
- From: Jonathan Larmour <jifl at jifvik dot org>
- To: Jürgen Lambrecht <J dot Lambrecht at televic dot com>
- Cc: Ross Younger <wry at ecoscentric dot com>, Rutger Hofman <rutger at cs dot vu dot nl>, eCos developers <ecos-devel at ecos dot sourceware dot org>, Deroo Stijn <S dot Deroo at TELEVIC dot com>
- Date: Tue, 13 Oct 2009 03:42:54 +0100
- Subject: Re: NAND technical review
- References: <4ACB4B58.firstname.lastname@example.org> <4ACC61F0.email@example.com>
Jürgen Lambrecht wrote:
Ross Younger wrote:
- E's high-level driver interface makes it harder to add new functions
later, necessitating a change to that API (H2 above). R's does not; the
requisite logic would only need to be added to the ANC. It is not thought
that more than a handful of such changes will ever be required, and it is
possible to maintain backwards compatibility. (As a case in point, support
for hardware ECC is currently work-in-progress within eCosCentric and would
require such a change, but now is not the right time to discuss that.)
Therefore we prefer R's model.
Could it be that R's model better follows the "general" structure of
device drivers in eCos?
I mean (I am following our CVS, which may differ from Rutger's final
commit to eCos):
1. with the low-level chip-specific code in /devs
(devs/flash/arm/at91/[board] and devs/flash/arm/at91/nfc, and
2. with the "middleware" in /io (io/flash_nand/current/src and there
/anc, /chip, /controller)
3. with the high-level code in /fs
I don't see E's model as being much different in that perspective. There
is stuff in devs/flash, io/nand and (presumably) fs as well.
The difference is more the separation out of the controller functionality
into a different layer.
Is it correct that R's abstraction makes it possible to add partitioning?
(That is an interesting feature of E's implementation.)
As Rutger said, it could be done - there's nothing in his design which
prevents it. It's not there now though, so unless someone's working on it,
it's probably not something to consider in the decision process.
Especially since it would be a big user API change.
Of course we also prefer R's model, because we started with it and are
using it now.
You haven't done any profiling by any chance, have you? Or code size
analysis? Although I haven't got into the detail of R's version yet (since
I was starting with dissecting E's), both the footprint and the cumulative
function call and indirection time overhead are concerns of mine.
(b) Availability of drivers
- One chip: the ST Micro 0xG chip (large page, x8 and x16 present but
presumably only tested on the x8 chip on the BlackFin board?)
- Two: also the Micron MT29F2G08AACWP-ET:D 256MB 3V3 NAND FLASH (2kB
page size, x8)
Because of this chip, Rutger adapted the hardware ECC controller code,
since our chip uses more bits (for details, ask Stijn or Rutger).
I'd be interested in what the issue was. From admittedly a quick look I
can't find anything about this in the code.
(d) Degree of testing
We have it very well tested, amongst other things:
- an automatic (continual) NAND flash test in a climate chamber
- stress tests: filling it up over and over again via FTP (both with
a few big files and many small ones) and checking the remaining heap:
* Put 25 files with a file size of 10,000,000 bytes on the filesystem
* Put 2500 files with a file size of 100,000 bytes on the filesystem
* Put 7000 files with a file size of 10,000 bytes on the filesystem
Conclusion: storing smaller files needs more heap, but we still have
plenty left with our 16MB
* Write a bundle of files over and over again to the filesystem: each
time we put 1000 files of 100,000 bytes on the flash drive.
- used in the final mp3-player application
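Stress runs of the shape described above are easy to script; a minimal sketch using Python's ftplib, where the host address, login, and file naming are placeholders for whatever the target board actually serves:

```python
import io
from ftplib import FTP


def make_payload(size: int) -> bytes:
    """Deterministic filler pattern, so reads back can be verified."""
    pattern = bytes(range(256))
    reps, rem = divmod(size, len(pattern))
    return pattern * reps + pattern[:rem]


def stress_put(host: str, count: int, size: int,
               user: str = "anonymous", pw: str = "") -> None:
    """Upload `count` files of `size` bytes each, as in the runs above
    (e.g. 25 x 10,000,000 or 2500 x 100,000 bytes)."""
    with FTP(host) as ftp:                      # placeholder board address
        ftp.login(user, pw)
        for i in range(count):
            data = io.BytesIO(make_payload(size))
            ftp.storbinary(f"STOR stress_{i:05d}.bin", data)


if __name__ == "__main__":
    stress_put("192.0.2.1", count=25, size=10_000_000)  # example run
```

Checking the remaining heap between iterations would still have to happen on the target side, since the host only sees the FTP traffic.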
That's extremely useful to know, thanks! But a couple of further questions
on this: Did any bad blocks show up at any point? Were you using a bad
block table? Presumably there were factory-marked bad blocks on some?
--["No sense being pessimistic, it wouldn't work anyway"]-- Opinions==mine