From: srimugunthan dhandapani <srimugunthan.dhandapani@gmail.com>
To: dedekind1@gmail.com
Cc: linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org
Subject: Re: [RFC] LFTL: a FTL for large parallel IO flash cards
Date: Fri, 30 Nov 2012 16:34:24 +0530
Message-ID: <CAMjNe_cON6-=MKzMrYAEhkryyvcQApMLpVdSyLPUXPBDU_1Pzw@mail.gmail.com>
In-Reply-To: <1354268388.30168.80.camel@sauron.fi.intel.com>
On Fri, Nov 30, 2012 at 3:09 PM, Artem Bityutskiy <dedekind1@gmail.com> wrote:
> On Sat, 2012-11-17 at 01:04 +0530, srimugunthan dhandapani wrote:
>> Hi all,
>>
>> Due to fundamental limits, such as per-chip capacity and interface
>> speed, all large-capacity flash devices are made of multiple chips
>> or banks. The presence of multiple chips readily offers parallel
>> read and write support. Unlike with an SSD, on a raw flash card this
>> parallelism is visible to the software layer, and there are many
>> opportunities for exploiting it.
>>
>> The presented LFTL is meant for flash cards with multiple banks and
>> larger minimum write sizes.
>> LFTL mostly reuses code from mtd_blkdevs.c and mtdblock.c.
>> LFTL was tested on a 512GB raw flash card which has no firmware
>> for wear-levelling or garbage collection.
>>
>> The following are the important points regarding the LFTL:
>>
>> 1. multiqueued/multithreaded design (thanks to Joern Engel for a
>> mail discussion):
>> mtd_blkdevs.c dequeues block I/O requests from the block-layer-provided
>> request queue in a single kthread.
>> This design, where I/O requests are dequeued from a single queue by a
>> single thread, is a bottleneck for flash cards that support hundreds
>> of MB/sec. We use a multiqueued and multithreaded design instead.
>> We bypass the block layer's request queue by registering our own
>> make_request function; LFTL maintains several queues of its own and
>> puts incoming block I/O requests into one of them. For every queue
>> there is an associated kthread that processes requests from that
>> queue. The number of "FTL IO kthreads" is currently #defined as 64.
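
(To make the multiqueue idea above concrete, here is a very rough
sketch of the shape of that path. It is an illustration only: the
names, the way bios are spread over the queues and ftl_handle_bio()
are made up here, not taken from the actual LFTL code.)

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/kthread.h>
    #include <linux/spinlock.h>
    #include <linux/wait.h>

    #define NR_FTL_IO_THREADS 64

    struct ftl_ioq {
            spinlock_t              lock;
            struct bio_list         bios;   /* pending bios for this queue */
            wait_queue_head_t       wait;
            struct task_struct      *task;  /* kthread draining this queue */
    };

    /* Initialised elsewhere: bio_list_init(), init_waitqueue_head(),
     * spin_lock_init() and kthread_run() for each queue. */
    static struct ftl_ioq ioqs[NR_FTL_IO_THREADS];

    /* make_request_fn (3.x-era signature): bypass the block-layer
     * request queue, just pick an FTL queue and wake its thread.
     * The sector-based spreading below is only for illustration. */
    static void ftl_make_request(struct request_queue *q, struct bio *bio)
    {
            struct ftl_ioq *ioq = &ioqs[bio->bi_sector % NR_FTL_IO_THREADS];

            spin_lock(&ioq->lock);
            bio_list_add(&ioq->bios, bio);
            spin_unlock(&ioq->lock);
            wake_up(&ioq->wait);
    }

    /* One of the "FTL IO kthreads": drains its own queue. */
    static int ftl_io_thread(void *data)
    {
            struct ftl_ioq *ioq = data;

            while (!kthread_should_stop()) {
                    struct bio *bio;

                    wait_event_interruptible(ioq->wait,
                                    !bio_list_empty(&ioq->bios) ||
                                    kthread_should_stop());

                    spin_lock(&ioq->lock);
                    bio = bio_list_pop(&ioq->bios);
                    spin_unlock(&ioq->lock);

                    if (bio)
                            ftl_handle_bio(bio);    /* hypothetical: do the I/O */
            }
            return 0;
    }

The real code of course also has to deal with ordering, barriers and
error handling, which this sketch ignores.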
>
> Hmm, should this be done in the MTD layer, rather than hacked into
> LFTL, so that every MTD user could benefit?
>
> A long time ago the Intel guys implemented "striping" in MTD and sent
> it out, but it did not make it to upstream. This is probably something
> you need.
>
> With striping support in MTD, you will end up with a 'virtual' MTD
> device with a larger eraseblock and minimum I/O unit. MTD would split all
> the I/O requests and work with all the chips in parallel.
>
Thanks for replying.
Current large-capacity flash devices have several levels of
parallelism: chip-level, channel-level and package-level.
1. http://www.cse.ohio-state.edu/~fchen/paper/papers/hpca11.pdf
2. http://research.microsoft.com/pubs/63596/usenix-08-ssd.pdf
Assuming only chip-level parallelism and providing only a striping
feature may not exploit all the capabilities of the flash hardware.
In the card that I worked with, the hardware provides a DMA read/write
capability which automatically stripes the data across the chips
(hence the larger writesize of 32K), but it still exposes the other
levels of parallelism.
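
(As a purely illustrative example of what that hardware striping
amounts to: if the card striped 4KiB pages across 8 chips -- an
assumption on my part that just happens to give the 32K figure -- the
logical-to-chip mapping would look roughly like this.)

    /* Illustration of page-level striping; the real geometry of the
     * card is not known to me. */
    #define NR_CHIPS        8
    #define CHIP_PAGE       4096
    #define STRIPE_SIZE     (NR_CHIPS * CHIP_PAGE)  /* 32768 = the writesize */

    static void stripe_map(unsigned long off, int *chip,
                           unsigned long *chip_off)
    {
            unsigned long page = off / CHIP_PAGE;   /* logical page number */

            *chip = page % NR_CHIPS;                /* round-robin over chips */
            *chip_off = (page / NR_CHIPS) * CHIP_PAGE + off % CHIP_PAGE;
    }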
LFTL does not stripe the data across the parallel I/O units (called
"banks" in the code). Instead, it dynamically selects one bank to
write to and another bank to garbage collect.
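
(A simplified sketch of what "dynamically selects" means here; the
structure, the fields and the selection policy below are my own
illustration, not the actual LFTL code.)

    /* A bank is one parallel I/O unit; the fields are a simplification. */
    struct ftl_bank {
            atomic_t        inflight;       /* outstanding I/O on this bank */
            unsigned int    free_blocks;
            bool            gc_running;
    };

    /* Pick the least-loaded bank that is not being garbage collected,
     * so host writes and GC proceed in parallel on different banks. */
    static struct ftl_bank *pick_write_bank(struct ftl_bank *banks, int nr)
    {
            struct ftl_bank *best = NULL;
            int i;

            for (i = 0; i < nr; i++) {
                    if (banks[i].gc_running)
                            continue;
                    if (!best || atomic_read(&banks[i].inflight) <
                                 atomic_read(&best->inflight))
                            best = &banks[i];
            }
            return best;
    }

    /* GC independently picks some other bank, e.g. the one with the
     * fewest free blocks, and sets gc_running on it. */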
Presently, with UBI+UBIFS, block allocation is done by UBI and garbage
collection by UBIFS, so it is not possible to dynamically split the
read/write I/O and the garbage-collection I/O across the banks.
Although LFTL assumes only bank-level parallelism and is currently not
aware of the hierarchy of parallel I/O units, I think it is possible
to make LFTL aware of it in the future.
> This would be a big work, but everyone would benefit.
>
> --
> Best Regards,
> Artem Bityutskiy