This is the mail archive of the ecos-devel@sourceware.org mailing list for the eCos project.



Re: NAND technical review


Rutger Hofman wrote:
Jonathan Larmour wrote:
[snip]

We also prefer R's model, of course, because we started with it and use it now.


You haven't done any profiling by any chance, have you? Or any code size analysis? Although I haven't got into the detail of R's version yet (since I started by dissecting E's), both the footprint and the cumulative overhead of function calls and indirection are concerns of mine.


As a first step in mitigating the 'footprint pressure', I have added CDL options to configure support in or out for the various chip types, to wit:
- ONFI chips;
- 'regular' large-page chips;
- 'regular' small-page chips.
It is in r678 on my download page (http://www.cs.vu.nl/~rutger/software/ecos/nand-flash/). As I suggested before, this was a very small refactoring (although code has moved about in io_nand_chip.c to save on the number of #ifdefs).

I'm sure that's useful.
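
Just to check I've understood the shape of it (I haven't looked at r678 yet, so the CDL symbols and helper names below are invented rather than taken from your tree), I imagine io_nand_chip.c ends up with guards along these lines:

    /* Sketch only: CYGSEM_IO_NAND_CHIPS_* and the identify_* helpers are
     * hypothetical names, purely to illustrate compiling a chip family out. */
    struct nand_chip;   /* stand-in for the real chip descriptor type */

    #ifdef CYGSEM_IO_NAND_CHIPS_ONFI
    static int identify_onfi(struct nand_chip *chip)
    { (void)chip; return -1; /* real code: probe the ONFI parameter page */ }
    #endif
    #ifdef CYGSEM_IO_NAND_CHIPS_LARGEPAGE
    static int identify_largepage(struct nand_chip *chip)
    { (void)chip; return -1; /* real code: decode the extended ID bytes */ }
    #endif
    #ifdef CYGSEM_IO_NAND_CHIPS_SMALLPAGE
    static int identify_smallpage(struct nand_chip *chip)
    { (void)chip; return -1; /* real code: consult the small-page table */ }
    #endif

    int nand_identify(struct nand_chip *chip)
    {
    #ifdef CYGSEM_IO_NAND_CHIPS_ONFI
        if (identify_onfi(chip) == 0)      return 0;
    #endif
    #ifdef CYGSEM_IO_NAND_CHIPS_LARGEPAGE
        if (identify_largepage(chip) == 0) return 0;
    #endif
    #ifdef CYGSEM_IO_NAND_CHIPS_SMALLPAGE
        if (identify_smallpage(chip) == 0) return 0;
    #endif
        return -1;   /* no configured chip family matched */
    }

so the identify/layout code for families that are configured out is simply never compiled.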


One more candidate for reducing the code footprint: I can add a CDL option to configure out support for heterogeneous controllers/chips. The ANC layer would then become paper-thin. If this change would make any difference, I can do it within, say, a week.

I wouldn't want you to spend time on it until the decision's made. I'll make a note that it would take a week to do. Admittedly, I'm not sure the savings would be enough to make it "paper-thin".
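
Just so the shape of the idea is on record, and with the caveat that every name below is invented because I haven't mapped your ANC layer yet, I assume the saving would come from collapsing the per-chip ops indirection to a direct call when only one controller/chip pairing is configured in, something like:

    /* Sketch, hypothetical names: with heterogeneous support configured out,
     * the ops-table indirection becomes a direct call. */
    struct anc_chip;
    struct anc_ops {
        int (*read_page)(struct anc_chip *chip, unsigned page, void *buf);
    };
    struct anc_chip {
        const struct anc_ops *ops;
    };
    int onfi_read_page(struct anc_chip *chip, unsigned page, void *buf);

    #ifdef CYGSEM_IO_NAND_HETEROGENEOUS
    #define ANC_READ_PAGE(chip, page, buf) \
        ((chip)->ops->read_page((chip), (page), (buf)))
    #else
    #define ANC_READ_PAGE(chip, page, buf) \
        onfi_read_page((chip), (page), (buf))
    #endif

That removes the tables and the indirect calls, but still leaves the wrapper layer and any argument marshalling, hence my doubt about "paper-thin".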


As regards the concerns about (indirect) function call overhead: my intuition is that the NAND operations themselves (page read, page write, block erase) will dominate. It takes 200..500us just to transfer a page over the data bus to the NAND chip; one recent data sheet quotes a program time of 200us and an erase time of 1.5ms. I think only a very slow CPU would notice the overhead of fewer than 10 indirect function calls.

I think it's more the cumulative effect, primarily on reads. Especially as there's no asynchronous aspect - the control flow is synchronous, so any delays between the real underlying NAND operations just add up. Ross quoted an example of about 25us for a page read. Off the top of my head, for something like a 64MHz CPU averaging 4 clock ticks per instruction, that's 16 insns per us, so a page read is roughly equivalent to 400 insns. At that sort of level I'm not sure the overheads are lost in the noise. Maybe I've messed up those guesstimates though.
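
Spelled out, the sum I'm doing is no more than this (the CPU figures are assumptions and the 25us is Ross's number; nothing here is measured):

    /* Back-of-envelope only: assumed 64MHz CPU at 4 clocks/insn, and the
     * 25us page read Ross quoted; none of these are measurements. */
    #include <stdio.h>

    int main(void)
    {
        const double cpu_hz          = 64e6;
        const double clocks_per_insn = 4.0;
        const double page_read_us    = 25.0;

        double insns_per_us   = cpu_hz / clocks_per_insn / 1e6;  /* = 16  */
        double insns_per_read = insns_per_us * page_read_us;     /* = 400 */

        printf("~%.0f insns/us, so one page read ~= %.0f insns\n",
               insns_per_us, insns_per_read);
        return 0;
    }

So a handful of indirect calls per page read isn't obviously negligible on that kind of CPU.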


I wonder if Ross has any performance data for E he could contribute?

On a separate point, while I'm here, I think the use of printf via cyg_nand_global.pf wants tidying up a lot. Some of the calls seem to be there to mention errors to the user, but without any programmatic treatment of those errors, which would primarily mean reporting them back up to higher layers.

It should also be possible to eliminate the overheads of the printfs. Right now there are quite a lot of them, involving function calls, allocation of const string data, and occasionally calculation of arguments, even when the pf function pointer points at an empty null printf function. It should be possible to turn them off entirely and not be any worse off for it (including error reporting back up to higher layers). It might not be so bad if the strings were a lot shorter, or the printf calls less frequent, but being able to turn them off entirely would seem better.
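
To be concrete about what I mean, something along these lines would do it (the macro and CDL symbol names are purely illustrative):

    /* Sketch, hypothetical names: route diagnostics through a macro keyed on
     * a CDL option, so configuring them out removes the calls, the const
     * format strings and any argument computation entirely. */
    #ifdef CYGDBG_IO_NAND_DIAGNOSTICS
    #define NAND_CHATTER(pf, ...) do { if (pf) (*(pf))(__VA_ARGS__); } while (0)
    #else
    #define NAND_CHATTER(pf, ...) ((void)0)   /* compiles away to nothing */
    #endif

    /* A call site then costs nothing at all when the option is off, e.g.
     *   NAND_CHATTER(cyg_nand_global.pf, "bad ECC in block %d\n", blk);
     */

and errors would still be reported to callers through return codes rather than through the printfs.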

Jifl
--
--["No sense being pessimistic, it wouldn't work anyway"]-- Opinions==mine

