From: Dave Chinner <david@fromorbit.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: "Ritesh Harjani (IBM)" <ritesh.list@gmail.com>,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
Christoph Hellwig <hch@infradead.org>,
"Darrick J . Wong" <djwong@kernel.org>,
Aravinda Herle <araherle@in.ibm.com>,
David Howells <dhowells@redhat.com>
Subject: Re: [RFC 2/2] iomap: Support subpage size dirty tracking to improve write performance
Date: Mon, 31 Oct 2022 18:08:53 +1100
Message-ID: <20221031070853.GL3600936@dread.disaster.area>
In-Reply-To: <Y19EXLfn8APg3adO@casper.infradead.org>
On Mon, Oct 31, 2022 at 03:43:24AM +0000, Matthew Wilcox wrote:
> On Sat, Oct 29, 2022 at 08:04:22AM +1100, Dave Chinner wrote:
> > As it is, we already have the capability for the mapping tree to
> > have multiple indexes pointing to the same folio - perhaps it's time
> > to start thinking about using filesystem blocks as the mapping tree
> > index rather than PAGE_SIZE chunks, so that the page cache can then
> > track dirty state on filesystem block boundaries natively and
> > this whole problem goes away. We have to solve this sub-folio dirty
> > tracking problem for multi-page folios anyway, so it seems to me
> > that we should solve the sub-page block size dirty tracking problem
> > the same way....
>
> That's an interesting proposal. From the page cache's point of
> view right now, there is only one dirty bit per folio, not per page.
Per folio, yes, but I thought we also had a dirty bit per index
entry in the mapping tree. Writeback code uses the
PAGECACHE_TAG_DIRTY mark to find the dirty folios efficiently (i.e.
the write_cache_pages() iterator), so it's not like this is
something new. That is, we already have a coherent, external dirty
bit tracking mechanism outside the folio itself that filesystems
use.
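i.e. the tag walk is the primitive here. Stripped of the batching,
refcounting and locking detail, a minimal sketch of the mechanism
(not the actual write_cache_pages() loop) looks like this:

#include <linux/pagemap.h>

static void walk_dirty_folios(struct address_space *mapping,
			      pgoff_t start, pgoff_t end)
{
	XA_STATE(xas, &mapping->i_pages, start);
	struct folio *folio;

	rcu_read_lock();
	/*
	 * The dirty mark lives in the xarray entry, not the folio,
	 * so clean ranges are skipped without touching any folios.
	 */
	xas_for_each_marked(&xas, folio, end, PAGECACHE_TAG_DIRTY) {
		if (xas_retry(&xas, folio))
			continue;
		/* ... take a reference, queue folio for writeback ... */
	}
	rcu_read_unlock();
}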
That's kinda what I'm getting at here - we already have coherent
dirty state tracking outside of the individual folios themselves.
Hence if we have to track sub-folio up-to-date state, sub-folio
dirty state and, potentially, sub-folio writeback state outside the
folio itself, why not do it by extending the existing coherent dirty
state tracking that is built into the mapping tree itself?
Folios + Xarray have given us the ability to disconnect the size of
the cached item at any given index from the index granularity - why
not extend that down to sub-page folio granularity in addition to
the scaling up we've been doing for large (multipage) folio
mappings?
Then we don't need any sort of filesystem-specific "add-on" that
sits alongside the mapping tree trying to keep track of dirty state
in addition to the folio and mapping tree tracking that already
exists...
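To make that concrete, a purely hypothetical sketch - none of these
semantics exist today, and it assumes the tree is indexed in
fs-block units with per-index marks still usable underneath a
multi-order folio entry:

static void block_range_mark_dirty(struct address_space *mapping,
				   loff_t pos, size_t len)
{
	unsigned int blkbits = mapping->host->i_blkbits;
	pgoff_t blk = pos >> blkbits;
	pgoff_t last = (pos + len - 1) >> blkbits;

	/* one dirty mark per fs-block index covered by the write */
	for (; blk <= last; blk++)
		xa_set_mark(&mapping->i_pages, blk, PAGECACHE_TAG_DIRTY);
}

Writeback would then walk PAGECACHE_TAG_DIRTY at block granularity
and write back only the marked blocks of each folio it finds.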
> We have a number of people looking at the analogous problem for network
> filesystems right now. Dave Howells' netfs infrastructure is trying
> to solve the problem for everyone (and he's been looking at iomap as
> inspiration for what he's doing). I'm kind of hoping we end up with one
> unified solution that can be used for all filesystems that want sub-folio
> dirty tracking. His solution is a bit more complex than I really want
> to see, at least partially because he's trying to track dirtiness at
> byte granularity, no matter how much pain that causes to the server.
Byte range granularity is probably overkill for block-based
filesystems - all we need is a couple of extra bits per block
stored in the mapping tree alongside the folio....
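For a rough idea of the space cost, assuming two bits per block
(uptodate + dirty) - these helpers are illustrative only:

static inline unsigned int blocks_per_folio(struct folio *folio,
					    unsigned int blkbits)
{
	return folio_size(folio) >> blkbits;
}

static inline unsigned int state_bits_per_folio(struct folio *folio,
						unsigned int blkbits)
{
	/* dirty + uptodate: a 64k folio of 1k blocks needs 128 bits */
	return 2 * blocks_per_folio(folio, blkbits);
}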
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com