From: Boris Brezillon <boris.brezillon@free-electrons.com>
To: Richard Weinberger <richard@nod.at>
Cc: Brian Norris <computersforpeace@gmail.com>,
Artem Bityutskiy <dedekind1@gmail.com>,
"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>
Subject: Re: Cached NAND reads and UBIFS
Date: Fri, 15 Jul 2016 10:30:57 +0200 [thread overview]
Message-ID: <20160715103057.691c320f@bbrezillon> (raw)
In-Reply-To: <57878E68.30901@nod.at>
Hi Richard,
On Thu, 14 Jul 2016 15:06:48 +0200
Richard Weinberger <richard@nod.at> wrote:
> Am 14.07.2016 um 10:33 schrieb Boris Brezillon:
> >>> Now we're not sure what to do: should we implement bulk reading in the TNC
> >>> code or improve NAND read caching?
> >>
> >> Seems like we should improve the caching, either in a layer above, or
> >> just in the NAND layer.
> >
> > I think we all agree on that one :).
>
> Today I found some time to implement a PoC of a single page cache directly
> in UBI.
Oh, that was fast. I'm looking forward to seeing this implementation.
> IMHO UBI is the right layer for that. NAND is too low and UBIFS too high
> level.
> With a few lines of code I was able to speed up UBIFS TNC lookups a lot;
> in fact it gave me the same speed-up as caching in the NAND layer does.
As explained in a previous answer, assuming the user will always need
the content of a full page is not necessarily the right option, even
if, as the UBIFS example shows, it is usually what we want.
In our use case, only UBIFS can know/guess how much data will be needed
in each page.
The good thing is that caching at the UBI level allows you to use
sub-page reads for EC/VID headers.
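To make the idea concrete, here is a rough userspace model of what a
single-page read cache at the UBI level might look like. All names and
the simulated MTD read below are made up for illustration; this is a
sketch of the technique, not Richard's actual patch:

```c
/* Toy model of a single-page read cache sitting between the reader and
 * the flash. Hypothetical names; the simulated backing read stands in
 * for the real MTD page read. */
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 2048

struct page_cache {
	int pnum;		/* physical eraseblock number, -1 = empty */
	int page;		/* page offset within the eraseblock */
	unsigned char data[PAGE_SIZE];
	unsigned long hits, misses;
};

/* Simulated backing-store read (stands in for the MTD page read). */
static void mtd_read_page(int pnum, int page, unsigned char *buf)
{
	memset(buf, (pnum * 64 + page) & 0xff, PAGE_SIZE);
}

/* Serve the read from the cache when the last page matches, else refill
 * the cache with the whole page and copy out the requested range. */
static void cached_read(struct page_cache *c, int pnum, int page,
			int off, int len, unsigned char *dst)
{
	if (c->pnum != pnum || c->page != page) {
		c->misses++;
		mtd_read_page(pnum, page, c->data);
		c->pnum = pnum;
		c->page = page;
	} else {
		c->hits++;
	}
	memcpy(dst, c->data + off, len);
}
```

The UBIFS TNC lookup pattern (many small reads landing in the same
page) is exactly the workload such a one-entry cache helps; UBI's
LEB-to-PEB mapping and all locking are left out of the sketch.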
>
> As next step I'll tidy the code, add a debugfs interface to expose cache hit/miss
> counters so that we can experiment with different workloads and users on top
> of UBIFS. /me thinks of squashfs+ubiblock which is also rather common these days.
Yep, it might help, especially if the squashfs 'block size' is less
than the NAND page size.
>
> Maybe caching more than one page helps too.
That would also help, though IMO it requires using the 'page cache'
mechanism so that the system can reclaim cached pages when it is under
memory pressure.
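For the multi-page variant, a toy LRU model could look like the
following. The names are again purely illustrative; a real in-kernel
version would want shrinker or page-cache integration so cached pages
can be reclaimed under memory pressure, which this sketch omits:

```c
/* Toy fixed-size LRU cache of NAND pages. Hypothetical names; the
 * fake backing read stands in for the real MTD page read. */
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 2048
#define NPAGES    4

struct lru_entry {
	int pnum;		/* -1 = unused slot */
	int page;
	unsigned long stamp;	/* last-use tick, lowest gets evicted */
	unsigned char data[PAGE_SIZE];
};

struct lru_cache {
	struct lru_entry e[NPAGES];
	unsigned long tick, hits, misses;
};

static void lru_init(struct lru_cache *c)
{
	memset(c, 0, sizeof(*c));
	for (int i = 0; i < NPAGES; i++)
		c->e[i].pnum = -1;
}

/* Simulated backing-store read. */
static void fake_mtd_read(int pnum, int page, unsigned char *buf)
{
	memset(buf, (pnum ^ page) & 0xff, PAGE_SIZE);
}

/* Return a pointer to the cached page, filling the least recently
 * used slot on a miss. */
static const unsigned char *lru_get(struct lru_cache *c, int pnum, int page)
{
	int victim = 0;

	c->tick++;
	for (int i = 0; i < NPAGES; i++) {
		if (c->e[i].pnum == pnum && c->e[i].page == page) {
			c->hits++;
			c->e[i].stamp = c->tick;
			return c->e[i].data;
		}
		if (c->e[i].stamp < c->e[victim].stamp)
			victim = i;
	}
	c->misses++;
	fake_mtd_read(pnum, page, c->e[victim].data);
	c->e[victim].pnum = pnum;
	c->e[victim].page = page;
	c->e[victim].stamp = c->tick;
	return c->e[victim].data;
}
```

The hit/miss counters map naturally onto the debugfs counters mentioned
above, which would let different NPAGES values be compared per workload.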
Anyway, thanks for investigating this option, let's see what Artem and
Brian think about this approach.
Regards,
Boris
Thread overview: 8+ messages
2016-07-13 12:30 Cached NAND reads and UBIFS Richard Weinberger
2016-07-13 12:43 ` Boris Brezillon
2016-07-13 13:13 ` Artem Bityutskiy
2016-07-14 7:58 ` Boris Brezillon
2016-07-13 16:54 ` Brian Norris
2016-07-14 8:33 ` Boris Brezillon
2016-07-14 13:06 ` Richard Weinberger
2016-07-15 8:30 ` Boris Brezillon [this message]