From: Artem Bityutskiy <dedekind@infradead.org>
To: Eric Holmberg <Eric_Holmberg@Trimble.com>
Cc: Nicolas Pitre <nico@cam.org>, Urs Muff <urs_muff@Trimble.com>,
Jamie Lokier <jamie@shareable.org>,
linux-mtd@lists.infradead.org,
Adrian Hunter <adrian.hunter@nokia.com>
Subject: RE: UBIFS Corrupt during power failure
Date: Thu, 16 Apr 2009 08:51:26 +0300
Message-ID: <1239861086.3390.209.camel@localhost.localdomain>
In-Reply-To: <C77C279BA71FD14985DC8E75FB265AB70305C86F@usw-am-xch-02.am.trimblecorp.net>
On Wed, 2009-04-15 at 13:33 -0600, Eric Holmberg wrote:
> > I don't remember if it was NOR, NAND or something else, but I remember
> > reading about some flash which supports 1 concurrent write and 1
> > erase, and thinking "oh that's clever, it means you can do streaming
> > writes or rapid fsync/database commits without long pauses
> > for erasing".
>
> The evolution seems to be:
> 1. Allow erase / program suspend to do a read from a different PEB (the
> chip I'm using supports this)
> 2. Allow simultaneous read while either erasing or programming a
> different PEB
> 3. Allow parallel operations on different flash banks
> 4. Combine NOR and NAND onto the same chip
>
> My understanding is that the parallel operations are only valid on
> different flash banks, where a flash bank could be thought of
> conceptually as a separate flash chip. I'm no flash memory expert by
> any means, so I'm sure there are some other systems out there.
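
To make item 1 in the list above concrete, here is a minimal sketch of
how a driver can suspend a pending erase in order to service a read
from a different PEB. It assumes Intel/StrataFlash-style CFI command
codes (0xB0 suspend, 0x70 read status, 0xFF read array, 0xD0 resume);
the flash_write16()/flash_read16() helpers and the erase_in_progress
flag are hypothetical stand-ins for a real driver's MMIO accessors and
chip state, not the actual linux-mtd code.

/*
 * Sketch only: erase-suspend-to-read on an Intel-style CFI NOR chip.
 * The MMIO helpers and the busy flag are hypothetical.
 */
#include <stdint.h>
#include <stddef.h>

#define CMD_READ_STATUS   0x70
#define CMD_ERASE_SUSPEND 0xB0
#define CMD_ERASE_RESUME  0xD0
#define CMD_READ_ARRAY    0xFF
#define SR_READY          0x80   /* status register bit 7: chip ready */

extern void     flash_write16(uint32_t addr, uint16_t val); /* hypothetical MMIO write */
extern uint16_t flash_read16(uint32_t addr);                /* hypothetical MMIO read  */
extern int      erase_in_progress;                          /* set by the erase path   */

/* Read 'len' words from 'from', suspending a pending erase if necessary. */
static void read_during_erase(uint32_t from, uint16_t *buf, size_t len)
{
	size_t i;
	int suspended = 0;

	if (erase_in_progress) {
		/* Ask the chip to pause the erase and wait until it is ready. */
		flash_write16(from, CMD_ERASE_SUSPEND);
		flash_write16(from, CMD_READ_STATUS);
		while (!(flash_read16(from) & SR_READY))
			;
		suspended = 1;
	}

	/* Switch to read-array mode and copy the data out. */
	flash_write16(from, CMD_READ_ARRAY);
	for (i = 0; i < len; i++)
		buf[i] = flash_read16(from + 2 * i);

	/* Let the erase continue where it left off. */
	if (suspended)
		flash_write16(from, CMD_ERASE_RESUME);
}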
As Nicolas noted, the Intel guys sent an mtdstripe layer
implementation here, but for some reason it did not make it into
mainline. That layer could interleave I/O across several chips.
E.g., if you have 2 NANDs, the layer could present them as one
virtual device with twice as large an eraseblock size and twice as
large a page size, and you get 2x speed.
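
To make the interleaving concrete, below is a rough sketch of how
such a striping layer could translate an offset on the doubled-up
virtual device into a (chip, offset) pair. This is only my
illustration of the idea, not the actual mtdstripe patches;
NUM_CHIPS and PAGE_SIZE are assumed values.

/*
 * Sketch only: two identical chips presented as one virtual device
 * whose page (and eraseblock) is twice as large, with consecutive
 * physical-sized pages alternating between the chips.
 */
#include <stdint.h>

#define NUM_CHIPS  2
#define PAGE_SIZE  2048u                     /* per-chip page size (assumed)    */
#define VIRT_PAGE  (NUM_CHIPS * PAGE_SIZE)   /* page size of the virtual device */

struct chip_addr {
	unsigned chip;   /* which physical chip to talk to */
	uint32_t offset; /* byte offset within that chip   */
};

/* Map an offset on the virtual device to a (chip, offset) pair. */
static struct chip_addr stripe_map(uint64_t virt_off)
{
	uint64_t page  = virt_off / PAGE_SIZE;   /* physical-sized page index */
	uint32_t in_pg = virt_off % PAGE_SIZE;   /* offset within that page   */
	struct chip_addr a;

	a.chip   = page % NUM_CHIPS;             /* alternate between chips   */
	a.offset = (uint32_t)((page / NUM_CHIPS) * PAGE_SIZE + in_pg);
	return a;
}

A single virtual page then maps to one physical page on each chip,
and the two page programs can be issued concurrently, which is where
the 2x throughput comes from.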
--
Best regards,
Artem Bityutskiy (Битюцкий Артём)