From: Boris Brezillon <boris.brezillon@free-electrons.com>
To: Richard Weinberger <richard@nod.at>
Cc: Artem Bityutskiy <dedekind1@gmail.com>,
Brian Norris <computersforpeace@gmail.com>,
"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>
Subject: Re: Cached NAND reads and UBIFS
Date: Wed, 13 Jul 2016 14:43:18 +0200
Message-ID: <20160713144318.38f809f7@bbrezillon>
In-Reply-To: <57863449.4070108@nod.at>
On Wed, 13 Jul 2016 14:30:01 +0200
Richard Weinberger <richard@nod.at> wrote:
> Hi!
>
> As discussed on IRC, Boris and I figured out that UBIFS is sometimes very
> slow on our target, i.e. deleting a 1GiB file right after a reboot takes
> more than 30 seconds.
>
> When deleting a file with a cold TNC, UBIFS has to look up a lot of znodes
> on the flash.
> For every single znode lookup UBIFS requests only a few bytes from the flash.
> This is slow.
>
> After some investigation we found out that the NAND read cache is disabled
> when the NAND driver supports reading subpages.
> So we removed the NAND_SUBPAGE_READ flag from the driver and suddenly
> lookups were fast. Really fast. Deleting a 1GiB file took less than 5 seconds.
> Since on our MLC NAND a page is 16KiB, many znodes can be read very fast
> directly out of the NAND read cache.
> The read cache helps a lot here because, in the regular case, UBIFS' index
> nodes are stored linearly in a LEB.
>
> The TNC seems to assume that it can do a lot of short reads since the NAND
> read cache will help.
> But as soon as subpage reads are possible, this assumption no longer holds.
>
> Now we're not sure what to do: should we implement bulk reading in the TNC
> code or improve NAND read caching?
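For context, the behaviour you are describing comes from
nand_do_read_ops() in nand_base.c: the last-page cache (chip->pagebuf)
is only populated when a partial read is bounced through
chip->buffers->databuf, and that only happens when the driver does not
advertise NAND_SUBPAGE_READ. Heavily simplified (not the exact code,
details differ between kernel versions, and no_new_ecc_failures stands
in for the real check on mtd->ecc_stats):

	/* nand_do_read_ops(), heavily simplified */
	aligned = (bytes == mtd->writesize);
	use_bufpoi = !aligned;	/* partial reads are bounced through databuf */

	if (realpage != chip->pagebuf || oob) {
		bufpoi = use_bufpoi ? chip->buffers->databuf : buf;

		if (!aligned && NAND_HAS_SUBPAGE_READ(chip) && !oob)
			/* subpage path: only 'bytes' at 'col' are read, nothing is cached */
			ret = chip->ecc.read_subpage(mtd, chip, col, bytes, bufpoi, page);
		else
			ret = chip->ecc.read_page(mtd, chip, bufpoi, oob_required, page);

		if (use_bufpoi) {
			if (!NAND_HAS_SUBPAGE_READ(chip) && !oob && no_new_ecc_failures)
				chip->pagebuf = realpage;	/* remember which page is cached */
			else
				chip->pagebuf = -1;		/* invalidate the cache */
			memcpy(buf, chip->buffers->databuf + col, bytes);
		}
	} else {
		/* cache hit: data comes from chip->buffers->databuf, no flash access */
		memcpy(buf, chip->buffers->databuf + col, bytes);
	}

So with NAND_SUBPAGE_READ set every short TNC read goes to the flash,
while without it consecutive znode lookups hitting the same 16KiB page
are served from the buffer.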
Hm, NAND page caching is something I'd like to get rid of at some
point, for several reasons:
1/ it brings some confusion to NAND controller drivers, which don't
always know when they are allowed to use chip->buffer, or what to do
with ->pagebuf in this case
2/ caching is already implemented at the FS level, so I'm not sure we
really need another level of caching at the MTD/NAND level (except for
those specific use cases where the MTD user relies on this caching to
improve accesses to small contiguous chunks)
3/ it hides the real number of bitflips in a given page: say someone is
reading the same page over and over, the MTD user will never be able to
detect when the number of bitflips exceeds the threshold. This should
not be a problem in the real world, because MTD users are unlikely to
always read the same page without reading other pages in the meantime,
but still, I think it adds some confusion, especially if one wants to
write a test that reads the same page over and over to see the impact
of read-disturb (see the sketch below).
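To make 3/ more concrete, a naive read-disturb probe could look like
the sketch below (userspace, against the mtd char device; the device
node and read size are assumptions, and error handling is stripped).
It reads a small chunk of the same page over and over and watches the
cumulative ECC "corrected" counter via ECCGETSTATS. Without subpage
reads, everything after the first iteration is served from
chip->buffers->databuf, so the counter stops moving no matter how
disturbed the page really gets:

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <mtd/mtd-user.h>

	int main(void)
	{
		const size_t chunk = 2048;		/* assumption: well below the page size */
		unsigned char *buf = malloc(chunk);
		struct mtd_ecc_stats stats;
		int fd = open("/dev/mtd0", O_RDONLY);	/* assumption: test partition */

		for (int i = 0; i < 100; i++) {
			/* partial read of page 0: this is what the kernel may cache */
			if (pread(fd, buf, chunk, 0) != (ssize_t)chunk)
				break;
			/* cumulative corrected-bitflip counter for the whole device */
			if (ioctl(fd, ECCGETSTATS, &stats) == 0)
				printf("read %3d: corrected so far: %u\n", i, stats.corrected);
		}

		close(fd);
		free(buf);
		return 0;
	}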
Regards,
Boris