From: Martin Egholm Nielsen <martin@egholm-nielsen.dk>
To: linux-mtd@lists.infradead.org
Subject: Re: Writing frequently to NAND - wearing, caching?
Date: Mon, 14 Feb 2005 09:37:45 +0100 [thread overview]
Message-ID: <cupnsf$skj$1@sea.gmane.org> (raw)
In-Reply-To: <4209AAC3.2090806@magellan-technology.com>
Hi Aras,
>>>>> I have an application which may need to write states frequently to my
>>>>> nand-fs in order to have these states in case of powerdown.
>>>>> But I'm a bit concerned about wearing the NAND if I write too
>>>>> frequently.
> > <snip>
>> (because we're not using explicit wear levelling) we still have only
>> reached 24,000 - only 24% of the 100,000 cycle lifetime of the flash.
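[For anyone redoing this arithmetic for their own write pattern, a rough sketch follows. All numbers below are illustrative assumptions, not figures from this thread; it also assumes the filesystem spreads erases roughly evenly across the partition, as JFFS2/YAFFS-style log-structured filesystems tend to do.]

```python
# Back-of-envelope NAND wear estimate (all numbers are assumptions).
# Assumes erases are spread evenly over every block in the partition.

writes_per_day = 10_000      # small state updates per day (assumed)
write_size = 512             # bytes written per state update (assumed)
block_size = 16 * 1024       # NAND erase-block size in bytes (assumed)
partition_blocks = 1024      # erase blocks in the partition (assumed)
years = 10                   # required product lifetime (assumed)

bytes_written = writes_per_day * write_size * 365 * years
erases_total = bytes_written / block_size        # total block erases needed
cycles_per_block = erases_total / partition_blocks

print(f"~{cycles_per_block:,.0f} erase cycles per block over {years} years")
```

With these example numbers each block sees on the order of a thousand cycles in ten years, comfortably below a 100,000-cycle endurance rating; the estimate scales linearly with write rate and inversely with partition size.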
Thanks for the "comments" below - they are nice to have...
> ... and then there is the fact that the 100,000 write cycle limit is
> generally a conservative estimate based on testing of the device at an
> operating temperature of around 125 Celsius! Most likely the device will
> be able to withstand over 1,000,000 write cycles before failure. If
> your FS uses write verification to make sure the data is secure then you
> shouldn't have any problems even if you do reach this limit on some
> areas of the Flash.
>
> From the "Toshiba NAND Flash Applications Design Guide"
>
> "NOR Flash is typically limited to around 100,000 cycles. Since the
> electron flow-path due to the hot electron injection for programming is
> different from the one due to tunneling from the floating gate to the
> source for erasing, degradation is enhanced. However, in NAND Flash,
> both the programming and erasing is achieved by uniform Fowler-Nordheim
> tunneling between the floating gate and the substrate. This uniform
> programming and uniform erasing technology guarantees a wide cell
> threshold window even after 1,000,000 cycles."
>
> and
>
> "There is one question that often comes up: Is ECC really necessary?
> After all, the likeliest cause of a bit error is during the programming
> process. For example, if you program a block, then verify it has no
> errors, how reliable is the data? In these ROM-like applications where
> the number of write/erase cycles is very low, the actual failure rate for a block
> is about 3 ppm after 10 years (i.e. 3 blocks out of every million blocks
> will have a bit error after 10 years) in which a block failure is
> defined as a single bit error. This result was derived from testing
> 29708 pieces of 512Mb NAND (0.16um) by writing a checkerboard pattern
> into blocks and storing at 125C. Since there will be a non-zero data
> retention failure rate, you should limit the amount of code to 1 block
> to achieve a low ppm probability of failure."
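[The quoted 3 ppm per-block figure can be turned into a whole-device estimate. A sketch, where the per-block rate comes from the quoted Toshiba text but the device geometry is an assumption for illustration:]

```python
# Expected 10-year single-bit block failures at the quoted 3 ppm rate.
# The block count below is an assumed example geometry, not from the guide.

ppm_per_block = 3              # block failures per million blocks over 10 years (quoted)
blocks_per_device = 4096       # erase blocks in one device (assumed)

p_block_fail = ppm_per_block / 1_000_000
expected_failures = blocks_per_device * p_block_fail
p_any_failure = 1 - (1 - p_block_fail) ** blocks_per_device

print(f"expected failing blocks per device: {expected_failures:.4f}")
print(f"probability of >=1 failing block:   {p_any_failure:.2%}")
```

Roughly 1 in 80 such devices would show a single-bit error somewhere after ten years, which is why the guide recommends ECC, and why it suggests confining un-ECC'd code to a single block.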
Thread overview: 7+ messages
2005-02-03 9:31 Writing frequently to NAND - wearing, caching? Martin Egholm Nielsen
2005-02-06 22:14 ` Charles Manning
2005-02-07 8:33 ` Martin Egholm Nielsen
2005-02-07 12:02 ` Estelle HAMMACHE
2005-02-08 0:12 ` Charles Manning
2005-02-09 6:16 ` Aras Vaichas
2005-02-14 8:37 ` Martin Egholm Nielsen [this message]