From: Matthew Wilcox <willy@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
Alexander Viro <viro@zeniv.linux.org.uk>,
Christian Brauner <brauner@kernel.org>,
Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
Frederic Weisbecker <frederic@kernel.org>,
Valentin Schneider <vschneid@redhat.com>,
Leonardo Bras <leobras@redhat.com>,
Yair Podemsky <ypodemsk@redhat.com>, P J P <ppandit@redhat.com>
Subject: Re: [PATCH] fs/buffer.c: remove per-CPU buffer_head lookup cache
Date: Tue, 27 Jun 2023 01:13:25 +0100
Message-ID: <ZJoppezn+EiLQvUm@casper.infradead.org>
In-Reply-To: <ZJofgZ/EHR8kFtth@dread.disaster.area>
On Tue, Jun 27, 2023 at 09:30:09AM +1000, Dave Chinner wrote:
> On Mon, Jun 26, 2023 at 07:47:42PM +0100, Matthew Wilcox wrote:
> > On Mon, Jun 26, 2023 at 03:04:53PM -0300, Marcelo Tosatti wrote:
> > > Upon closer investigation, it was found that in current codebase, lookup_bh_lru
> > > is slower than __find_get_block_slow:
> > >
> > > 114 ns per __find_get_block
> > > 68 ns per __find_get_block_slow
> > >
> > > So remove the per-CPU buffer_head caching.
> >
> > LOL. That's amazing. I can't even see why it's so expensive. The
> > local_irq_disable(), perhaps? Your test case is the best possible
> > one for lookup_bh_lru() where you're not even doing the copy.
>
> I think it's even simpler than that.
>
> i.e. the lookaside cache is being missed, so it's a pure cost and
> the code is always having to call __find_get_block_slow() anyway.
How does that happen?
__find_get_block(struct block_device *bdev, sector_t block, unsigned size)
{
        struct buffer_head *bh = lookup_bh_lru(bdev, block, size);

        if (bh == NULL) {
                /* __find_get_block_slow will mark the page accessed */
                bh = __find_get_block_slow(bdev, block);
                if (bh)
                        bh_lru_install(bh);
The second (and all subsequent) calls to __find_get_block() should find
the BH in the LRU.
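
For context, lookup_bh_lru() only has to scan a small per-CPU array of
recently used buffer_heads. A simplified sketch, from memory, of what the
fs/buffer.c code of that era does (not a verbatim copy):

/*
 * Simplified sketch of the per-CPU buffer_head lookaside cache -- not the
 * exact upstream code.  bh_lru_lock()/bh_lru_unlock() stand in for the
 * fs/buffer.c helpers (interrupt disable/enable, or a local lock on
 * PREEMPT_RT).
 */
#include <linux/buffer_head.h>
#include <linux/percpu.h>

#define BH_LRU_SIZE	16

struct bh_lru {
	struct buffer_head *bhs[BH_LRU_SIZE];
};

static DEFINE_PER_CPU(struct bh_lru, bh_lrus) = {{ NULL }};

static inline void bh_lru_lock(void)   { local_irq_disable(); }
static inline void bh_lru_unlock(void) { local_irq_enable(); }

static struct buffer_head *
lookup_bh_lru(struct block_device *bdev, sector_t block, unsigned size)
{
	struct buffer_head *ret = NULL;
	unsigned int i;

	bh_lru_lock();
	for (i = 0; i < BH_LRU_SIZE; i++) {
		struct buffer_head *bh = __this_cpu_read(bh_lrus.bhs[i]);

		if (bh && bh->b_blocknr == block && bh->b_bdev == bdev &&
		    bh->b_size == size) {
			/* promote the hit to the front of the array */
			while (i) {
				__this_cpu_write(bh_lrus.bhs[i],
					__this_cpu_read(bh_lrus.bhs[i - 1]));
				i--;
			}
			__this_cpu_write(bh_lrus.bhs[0], bh);
			get_bh(bh);	/* take a reference for the caller */
			ret = bh;
			break;
		}
	}
	bh_lru_unlock();
	return ret;
}

On a hit in slot 0, the cost should be little more than the lock/unlock
pair and a couple of per-CPU reads, which is why the 114 ns figure quoted
above is so hard to explain.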
> IMO, this is an example of how lookaside caches are only a benefit
> if the working set of items largely fits in the lookaside cache and
> the cache lookup itself is much, much slower than a lookaside cache
> miss.
But the test code he posted always asks for the same buffer each time.
So it should find it in the lookaside cache?
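
The test code itself isn't quoted in this thread, but the measurement
being described is a tight loop looking up the same (bdev, block, size)
over and over. Purely as an illustration of that shape (hypothetical
function name and parameters, not Marcelo's actual test):

/*
 * Hypothetical reconstruction of a repeated-lookup timing loop -- the
 * name bench_find_get_block() and the reporting format are made up.
 */
#include <linux/buffer_head.h>
#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/printk.h>

static void bench_find_get_block(struct block_device *bdev, sector_t block,
				 unsigned size, unsigned long iters)
{
	struct buffer_head *bh;
	u64 start, end;
	unsigned long i;

	start = ktime_get_ns();
	for (i = 0; i < iters; i++) {
		/* same (bdev, block, size) every iteration */
		bh = __find_get_block(bdev, block, size);
		if (bh)
			put_bh(bh);	/* drop the reference the lookup took */
	}
	end = ktime_get_ns();

	pr_info("%llu ns per __find_get_block\n",
		div64_u64(end - start, iters));
}

After the first iteration the buffer_head should be sitting in
bh_lrus.bhs[0] on that CPU, so every subsequent call ought to be a
lookaside hit, never a miss.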
Thread overview: 8+ messages
2023-06-26 18:04 [PATCH] fs/buffer.c: remove per-CPU buffer_head lookup cache Marcelo Tosatti
2023-06-26 18:47 ` Matthew Wilcox
2023-06-26 23:30 ` Dave Chinner
2023-06-27 0:13 ` Matthew Wilcox [this message]
2023-06-27 0:52 ` Dave Chinner
2023-06-27 17:53 ` Marcelo Tosatti
2023-06-26 22:23 ` Matthew Wilcox
2023-06-27 6:39 ` Christoph Hellwig