public inbox for linux-mtd@lists.infradead.org
From: Boris Brezillon <boris.brezillon@free-electrons.com>
To: Brian Norris <computersforpeace@gmail.com>
Cc: Richard Weinberger <richard@nod.at>,
	Artem Bityutskiy <dedekind1@gmail.com>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>
Subject: Re: Cached NAND reads and UBIFS
Date: Thu, 14 Jul 2016 10:33:01 +0200	[thread overview]
Message-ID: <20160714103301.4b01b0ce@bbrezillon> (raw)
In-Reply-To: <20160713165422.GA130613@google.com>

On Wed, 13 Jul 2016 09:54:22 -0700
Brian Norris <computersforpeace@gmail.com> wrote:

> On Wed, Jul 13, 2016 at 02:30:01PM +0200, Richard Weinberger wrote:
> > The TNC seems to assume that it can do a lot of short reads since the NAND
> > read cache will help.  
> 
> For my info: by "short reads" are you talking page-size/aligned?
> Subpage-sized/aligned? Or none of the above?

None of them. IIRC, the size was around 200 bytes, and not necessarily
page- or ECC-chunk-aligned.

> 
> > But as soon subpage reads are possible this assumption is no longer true.  
> 
> This sounds similar to the problem we were just discussing, about a
> bug/oversight/suboptimality in nand_do_read_ops()'s use of an extra
> buffer + memcpy(). I guess the difference here is that a sub-page read
> can't fill the cache as-is (we have no way to mark the cache as
> partially valid), so we just skip it. To "fixup" the cache
> for subpages, I guess we'd have to split it into subpages...

Well, it's not exactly the same problem. For the EC/VID headers we know
that we'll only need to read 64 bytes from the whole page. Here, we
don't know: the code might well read the whole page in small chunks,
and using subpage reads in that case is inefficient: each time you have
to issue a READ command to load the page into the internal NAND cache,
and then do the I/O + ECC calculation for each ECC chunk (note that
modern NAND controllers are able to pipeline I/O and ECC calculation,
thus making full-page reads more efficient than partial ones,
especially when you end up reading the whole page).

This is why the decision to use full-page vs ECC-chunk (AKA subpage)
reads can only be made at the MTD user level (in this particular case,
only UBIFS can predict whether it is likely to need the whole page
content or only a sub-part of it).

> 
> > Now we're not sure what to do, should we implement bulk reading in the TNC
> > code or improve NAND read caching?  
> 
> Seems like we should improve the caching, either in a layer above, or
> just in the NAND layer.

I think we all agree on that one :).

> 
> But, we haven't really figured out how to advertise the minimum
> (optimal) read size to the MTD user, right? We have at best
> 
>   writesize >> subpage_sft
> 
> but that's really the minimum write size, not read size, right?

Correct: the minimum read size (when the controller supports
individual ECC chunk reads) is an ECC chunk (usually 512 or 1024 bytes).

But again, the minimum read size is not necessarily the most optimal
one (see my explanation above), which means we'll have to differentiate
the minimum read size from the optimal read size (the one providing the
best throughput).

> So even
> if we're trying to shore up the read cacheability, we should better
> define the constraints first, so that we know what we can guarantee an
> MTD user, and what we can't. e.g., we will cache on size/alignment FOO,
> and anything not aligned to that is not guaranteed.

The read request will be ECC-chunk (or page) aligned anyway, and the
core will use an intermediate buffer when the user request does not
match this constraint, so we'll always end up with aligned content that
we can store in the cache.

> 
> On a potentially conflicting note, I think nand_do_read_ops() is too
> complicated. I think Boris expressed a similar sentiment. So if we can
> find a good way to factor the caching code to be less entangled with the
> mechanics of reading, that'd be a win for me.

I fully agree, and I'm not even sure this caching mechanism should be
implemented in the NAND layer.

Regards,

Boris


Thread overview: 8+ messages
2016-07-13 12:30 Cached NAND reads and UBIFS Richard Weinberger
2016-07-13 12:43 ` Boris Brezillon
2016-07-13 13:13   ` Artem Bityutskiy
2016-07-14  7:58     ` Boris Brezillon
2016-07-13 16:54 ` Brian Norris
2016-07-14  8:33   ` Boris Brezillon [this message]
2016-07-14 13:06     ` Richard Weinberger
2016-07-15  8:30       ` Boris Brezillon
