From: Artem Bityutskiy <dedekind1@gmail.com>
To: srimugunthan dhandapani <srimugunthan.dhandapani@gmail.com>
Cc: linux-mtd@lists.infradead.org, linux-fsdevel@vger.kernel.org
Subject: Re: [RFC] LFTL: a FTL for large parallel IO flash cards
Date: Fri, 30 Nov 2012 11:39:48 +0200
Message-ID: <1354268388.30168.80.camel@sauron.fi.intel.com>
In-Reply-To: <CAMjNe_cJG6wkiPDvVZzqURz4gVTW8Dx4JrM-KPVbVedLwsDfcw@mail.gmail.com>
On Sat, 2012-11-17 at 01:04 +0530, srimugunthan dhandapani wrote:
> Hi all,
>
> Due to fundamental limits such as per-chip capacity and interface
> speed, all large-capacity flash devices are made of multiple chips or
> banks. The presence of multiple chips readily offers parallel read and
> write support. Unlike with an SSD, on a raw flash card this parallelism
> is visible to the software layer, and there are many opportunities for
> exploiting it.
>
> The presented LFTL is meant for flash cards with multiple banks and
> larger minimum write sizes. LFTL mostly reuses code from mtd_blkdevs.c
> and mtdblock.c. It was tested on a 512GB raw flash card that has no
> firmware for wear levelling or garbage collection.
>
> The following are the important points regarding the LFTL:
>
> 1. Multiqueued/multithreaded design (thanks to Joern Engel for a mail
> discussion):
> mtd_blkdevs.c dequeues block I/O requests from the block-layer-provided
> request queue in a single kthread. This design, where I/O requests are
> dequeued from a single queue by a single thread, is a bottleneck for
> flash cards that support hundreds of MB/sec. We use a multiqueued and
> multithreaded design instead. We bypass the block layer's request queue
> by registering our own make_request function; LFTL maintains several
> queues of its own, and each incoming block I/O request is put in one of
> these queues. For every queue there is an associated kthread that
> processes requests from that queue. The number of these "FTL IO
> kthreads" is currently #defined as 64.
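
[For illustration, a minimal sketch of the dispatch scheme described
above. All names here (lftl_dev, lftl_queue, lftl_make_request,
lftl_io_thread, NR_FTL_THREADS) are invented for this sketch and are not
the actual LFTL code; it assumes the circa-2012 void make_request_fn
signature and the bio_endio(bio, error) API of that era.]

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/kthread.h>
    #include <linux/spinlock.h>
    #include <linux/wait.h>

    #define NR_FTL_THREADS 64

    struct lftl_queue {
            spinlock_t lock;
            struct bio_list bios;           /* pending block I/O requests */
            wait_queue_head_t wait;
            struct task_struct *thread;     /* one "FTL IO kthread" per queue */
    };

    struct lftl_dev {
            struct lftl_queue queues[NR_FTL_THREADS];
            atomic_t next;                  /* round-robin queue selector */
    };

    /* Registered with blk_queue_make_request(); bypasses the single
     * block-layer request queue by spreading bios over our own queues. */
    static void lftl_make_request(struct request_queue *q, struct bio *bio)
    {
            struct lftl_dev *dev = q->queuedata;
            unsigned int i = atomic_inc_return(&dev->next) % NR_FTL_THREADS;
            struct lftl_queue *lq = &dev->queues[i];

            spin_lock(&lq->lock);
            bio_list_add(&lq->bios, bio);
            spin_unlock(&lq->lock);
            wake_up(&lq->wait);
    }

    /* Each kthread drains only its own queue, so the queues make
     * progress in parallel instead of serialising on one thread. */
    static int lftl_io_thread(void *data)
    {
            struct lftl_queue *lq = data;
            struct bio *bio;

            while (!kthread_should_stop()) {
                    spin_lock(&lq->lock);
                    bio = bio_list_pop(&lq->bios);
                    spin_unlock(&lq->lock);

                    if (!bio) {
                            wait_event_interruptible(lq->wait,
                                            !bio_list_empty(&lq->bios) ||
                                            kthread_should_stop());
                            continue;
                    }
                    /* ... translate sectors and issue to a flash bank ... */
                    bio_endio(bio, 0);
            }
            return 0;
    }
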
Hmm, should this be done in the MTD layer, not hacked into LFTL, so that
every MTD user could benefit?
A long time ago some Intel engineers implemented "striping" in MTD and
sent it out, but it did not make it upstream. This is probably something
you need.
With striping support in MTD, you would end up with a 'virtual' MTD
device with a larger eraseblock and minimum I/O unit. MTD would split
all the I/O requests and work with all the chips in parallel.
This would be a big piece of work, but everyone would benefit, as
sketched below.
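
[A hedged sketch of that geometry, not the original Intel patches:
mtd_stripe, mtd_stripe_init_geometry and mtd_stripe_map are invented
names. With N chips, the virtual device's eraseblock and minimum I/O
unit are N times the per-chip values, so one virtual-page write
decomposes into N independent chip-page writes. Real kernel code on
32-bit targets would need div_u64() for the 64-bit divisions.]

    #include <linux/mtd/mtd.h>

    struct mtd_stripe {
            struct mtd_info vmtd;           /* the 'virtual' MTD device */
            struct mtd_info **chips;        /* underlying physical chips */
            int nchips;
    };

    static void mtd_stripe_init_geometry(struct mtd_stripe *s)
    {
            struct mtd_info *chip = s->chips[0];

            /* Eraseblock, minimum I/O unit and total size all scale
             * by the number of chips. */
            s->vmtd.erasesize = chip->erasesize * s->nchips;
            s->vmtd.writesize = chip->writesize * s->nchips;
            s->vmtd.size      = chip->size * s->nchips;
    }

    /*
     * Stripe at chip-page granularity: consecutive chip-sized pages of
     * the virtual address space land on consecutive chips, so writing
     * one virtual page (nchips chip pages) becomes nchips independent
     * chip writes that MTD could issue in parallel.
     */
    static void mtd_stripe_map(struct mtd_stripe *s, u64 virt,
                               int *chipnum, u64 *chipofs)
    {
            u32 psz = s->chips[0]->writesize;
            u64 vpage = virt / psz;            /* virtual chip-page number */

            *chipnum = vpage % s->nchips;      /* which chip it lands on */
            *chipofs = (vpage / s->nchips) * psz + virt % psz;
    }

With a virtual device like this, any MTD user (UBI, JFFS2, or an FTL)
would transparently see the bigger eraseblocks, which is the "everyone
would benefit" point above.
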
--
Best Regards,
Artem Bityutskiy