This is the mail archive of the mailing list for the eCos project.


Re: Re: NAND flash driver considerations: RFC

Andrew Lunn wrote:
On Thu, Sep 25, 2008 at 02:29:42AM +0200, Rutger Hofman wrote:
I have received no responses for this RFC.

Wouldn't anybody care to comment?

O.K., I'll comment. However, I've never used NAND; I've no idea how these chips work, how they differ between devices, or what quirks they have, so this is probably not the most educated reply...

Ah, thanks, I am glad.

eCos directory structure
+++ My proposal:

have packages/io/nand/ and packages/devs/nand/* *beside* packages/flash/ and packages/devs/flash/*.

I don't see there being any problem mixing up NOR and NAND drivers in
packages/devs/flash/. The CDL rules will prevent somebody from trying to
use a NAND driver with the NOR generic parts and vice versa.

I am not sure. NOR and NAND are really very different, not only internally but also in the API to higher layers.

E.g., NAND flash file systems usually use the spare area on the flash to write meta-data; they handle the ECC explicitly; they are aware of/manage bad blocks etc. NOR flash is memory-mapped, NAND cannot be. In consequence, literally nothing of the io/flash/common code can be used for NAND flash.

So, if we keep NAND under flash/, we would have io/flash/common and io/flash/nand/common: second prize in the beauty contest. Then, where should the individual controller/chip packages go? Either under devs/flash/nand/, so not on a level with the NOR parts; or directly under devs/flash/, obscuring the fact that they depend not on io/flash/common but on io/flash/nand/common/. This all seemed not very good to me, so I moved away from my initial, obvious idea of 'keep (NAND) flash with (NOR) flash'.

You talk about separate controllers and chips directories. Do we
actually need chip packages? You said that mostly any controller can
talk to any chip. This makes me think chip packages are not
needed. What are needed are target hardware specific packages which
contain all quirks and configuration information needed for a specific
controller and chip pairing on a specific board. I think we probably
have enough structure in packages/devs/flash/ to handle this.

For these paragraphs, I'll ignore the fact that there are also NAND controllers.

At the hardware level, NAND chips appear to be conformant to a common interface. They have wires like nCE (chip enable), nR/B (ready/busy), ALE (address latch enable), CLE (command latch enable), nWE (write enable), nRE (read enable) and some more, plus an 8-bit or 16-bit data bus.

NAND chips are controlled by sending a sequence of (generally speaking) a command (enable CLE, toggle the data wires), an address (enable ALE, ...), data to send/receive (enable nWE/nRE, ...), status checks, etc. These command sequences are defined in terms of the wires above. Command sequences are usually just named 'commands' (and we hope there is no confusion with the wire-level command). The NAND chip data sheets specify which commands, in what format, are supported. ONFI (a standardization effort underway since 2006 that builds upon de-facto 'standards') attempts to canonicalize the command set.

If a chip is ONFI-compliant, it is also conformant at the command layer. E.g., an ONFI page program command is: send (wire) command 0x80, send the address (ca. 5 bytes), then send the data, then send (wire) command 0x10, then wait until status bit[6] has a rising edge. If a chip is ONFI-conformant, we can use the generic ONFI encodings and no chip-specific code is needed.

Well, ONFI is recent, so although many chips support most of the ONFI command set (e.g. page program command = 0x80), they are often not *completely* compliant. E.g., the chip I am working with now has a custom command for interrogation, which is used to get essential parameters like block/page size, number of blocks, x8 or x16 bus width, etc. So I needed to write a chip-specific piece of code to handle that. We wouldn't want every platform to repeat such chip-specific code. That is why I think a NAND chip device type is required.

I fully agree that the platform target package must (be able to) configure some stuff for the NAND chips, e.g. indicate which (GPIO?) pin the nCE pin of some NAND chip is attached to. In general, the target should configure which chips are attached to which controller, their device names etc etc.

NAND eCos device type

The disk package uses the naming scheme


XXX is the type of disk, e.g. mmc, ide, etc. Y is the disk number, Z is
the partition. Maybe for NAND /dev/nandX/Y might be better, X being
the controller number and Y being the chip number.

I will do that.

What we probably need to do is think about how we would want to use
the devices. eg for a filesystem we don't really care about how many
chips there are and how they are arranged. We just want to put a
filesystem on it, or a subsection of it. The filesystem probably does
not want to address controller:chip:block, it wants to use a more
abstract interface, maybe even just a block number.

I would say some more thought is required here...

Agreed. See at the bottom.

External interface of NAND devices (controller on top of chip)

+++ My proposal:
the ONFI functions exported by the generic NAND controller code are sufficient.

So you are saying there will not be any generic code in io/nand/?

There is certainly generic code in io/nand. It will implement one or more higher-level APIs in terms of the ONFI commands.

The code for the *basic* API is something like this.

Example: program (within) one page:

cyg_nand_page_program(cyg_nand_t *nand,
                      const void *data, size_t len,
                      size_t col, size_t row,
                      const void *spare, size_t spare_len)

must implement the ONFI command sequence for page program. So, it is essentially implemented as follows:

   nand->cmd(nand, 0x80);                       /* ONFI Page Program       */
   nand->addr(nand, col, row);                  /* column + row address    */
   nand->program_data(nand, data, len);         /* the page data           */
   nand->goto_addr(nand, spare_offset, row);    /* move to the spare area  */
   nand->program_data(nand, spare, spare_len);  /* the spare (OOB) data    */
   nand->cmd(nand, 0x10);                       /* confirm: start program  */
   nand->await_status(nand, 6);                 /* wait on status bit 6    */

where the indirect calls are implemented by the specific controller.

A minimal set of other calls would be:
   cyg_nand_chip_select(nand, chip_number)

Now, it seems YAFFS only requires commands of this kind, plus some calls for initialization and bad block handling.

YAFFS is aware of page/block/spare size and the rest, it uses the spare area, it does ECC by itself, and it handles bad blocks. So, this basic API would suffice to run YAFFS on one chip.

I agree with your remark above that a higher-level API is also desirable. For starters, if the NAND chips have identical page/block size, spare size, bus width, etc., we need only a thin layer to hide the fact that there are multiple chips and possibly multiple controllers. This would still fully expose bad blocks, spare and ECC handling. It would allow YAFFS to run on one 'abstract NAND'.

If we want to offer another layer that hides the NAND-specific nasties (spare, ECC, bad blocks) from the upper layer, more thought is needed. ECC handling is not an issue; it can just be implemented. But if, for example, we want to present all the NANDs as one contiguous area, we must handle blocks that go bad. One might use an indirection table for that, and reserve blocks as backups. Or there might be different solutions. How would one handle multiple writes to one page? Usually these are limited; a typical allowed value is 4 before an erase is required. So that would require buffering, flushing, and/or relocation of pages. What if we want to hide the fact that there are pages/blocks/LUNs/chips? Then we must address the bytes in this abstract NAND, and 32-bit addresses will be insufficient.

So, I agree that lots of thought is required for the fancier higher-level APIs. But if we implement the basic API plus the extension of abstracting away uniform chips, I think we can serve the most important target: flash file systems like YAFFS.


