From: "Darrick J. Wong" <djwong@kernel.org>
To: Joanne Koong <joannelkoong@gmail.com>
Cc: brauner@kernel.org, miklos@szeredi.hu, hch@infradead.org,
linux-block@vger.kernel.org, gfs2@lists.linux.dev,
linux-fsdevel@vger.kernel.org, linux-xfs@vger.kernel.org,
linux-doc@vger.kernel.org, hsiangkao@linux.alibaba.com,
kernel-team@meta.com
Subject: Re: [PATCH v4 07/15] iomap: track read/readahead folio ownership internally
Date: Tue, 23 Sep 2025 17:13:35 -0700 [thread overview]
Message-ID: <20250924001335.GL1587915@frogsfrogsfrogs> (raw)
In-Reply-To: <20250923002353.2961514-8-joannelkoong@gmail.com>
On Mon, Sep 22, 2025 at 05:23:45PM -0700, Joanne Koong wrote:
> The purpose of "struct iomap_read_folio_ctx->cur_folio_in_bio" is to
> track folio ownership to know who is responsible for unlocking it.
> Rename "cur_folio_in_bio" to "cur_folio_owned" to better reflect this
> purpose and so that this can be generically used later on by filesystems
> that are not block-based.
>
> Since "struct iomap_read_folio_ctx" will be made a public interface
> later on when read/readahead takes in caller-provided callbacks, track
> the folio ownership state internally instead of exposing it in "struct
> iomap_read_folio_ctx" to make the interface simpler for end users.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Looks good to me now,
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
--D
> ---
> fs/iomap/buffered-io.c | 34 +++++++++++++++++++++++-----------
> 1 file changed, 23 insertions(+), 11 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 09e65771a947..34df1cddf65c 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -362,7 +362,6 @@ static void iomap_read_end_io(struct bio *bio)
>
> struct iomap_read_folio_ctx {
> struct folio *cur_folio;
> - bool cur_folio_in_bio;
> void *read_ctx;
> struct readahead_control *rac;
> };
> @@ -386,7 +385,6 @@ static void iomap_bio_read_folio_range(const struct iomap_iter *iter,
> sector_t sector;
> struct bio *bio = ctx->read_ctx;
>
> - ctx->cur_folio_in_bio = true;
> if (ifs) {
> spin_lock_irq(&ifs->state_lock);
> ifs->read_bytes_pending += plen;
> @@ -423,7 +421,7 @@ static void iomap_bio_read_folio_range(const struct iomap_iter *iter,
> }
>
> static int iomap_read_folio_iter(struct iomap_iter *iter,
> - struct iomap_read_folio_ctx *ctx)
> + struct iomap_read_folio_ctx *ctx, bool *folio_owned)
> {
> const struct iomap *iomap = &iter->iomap;
> loff_t pos = iter->pos;
> @@ -460,6 +458,7 @@ static int iomap_read_folio_iter(struct iomap_iter *iter,
> folio_zero_range(folio, poff, plen);
> iomap_set_range_uptodate(folio, poff, plen);
> } else {
> + *folio_owned = true;
> iomap_bio_read_folio_range(iter, ctx, pos, plen);
> }
>
> @@ -482,16 +481,22 @@ int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops)
> struct iomap_read_folio_ctx ctx = {
> .cur_folio = folio,
> };
> + /*
> + * If an IO helper takes ownership of the folio, it is responsible for
> + * unlocking it when the read completes.
> + */
> + bool folio_owned = false;
> int ret;
>
> trace_iomap_readpage(iter.inode, 1);
>
> while ((ret = iomap_iter(&iter, ops)) > 0)
> - iter.status = iomap_read_folio_iter(&iter, &ctx);
> + iter.status = iomap_read_folio_iter(&iter, &ctx,
> + &folio_owned);
>
> iomap_bio_submit_read(&ctx);
>
> - if (!ctx.cur_folio_in_bio)
> + if (!folio_owned)
> folio_unlock(folio);
>
> /*
> @@ -504,14 +509,15 @@ int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops)
> EXPORT_SYMBOL_GPL(iomap_read_folio);
>
> static int iomap_readahead_iter(struct iomap_iter *iter,
> - struct iomap_read_folio_ctx *ctx)
> + struct iomap_read_folio_ctx *ctx,
> + bool *cur_folio_owned)
> {
> int ret;
>
> while (iomap_length(iter)) {
> if (ctx->cur_folio &&
> offset_in_folio(ctx->cur_folio, iter->pos) == 0) {
> - if (!ctx->cur_folio_in_bio)
> + if (!*cur_folio_owned)
> folio_unlock(ctx->cur_folio);
> ctx->cur_folio = NULL;
> }
> @@ -519,9 +525,9 @@ static int iomap_readahead_iter(struct iomap_iter *iter,
> ctx->cur_folio = readahead_folio(ctx->rac);
> if (WARN_ON_ONCE(!ctx->cur_folio))
> return -EINVAL;
> - ctx->cur_folio_in_bio = false;
> + *cur_folio_owned = false;
> }
> - ret = iomap_read_folio_iter(iter, ctx);
> + ret = iomap_read_folio_iter(iter, ctx, cur_folio_owned);
> if (ret)
> return ret;
> }
> @@ -554,15 +560,21 @@ void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops)
> struct iomap_read_folio_ctx ctx = {
> .rac = rac,
> };
> + /*
> + * If an IO helper takes ownership of the folio, it is responsible for
> + * unlocking it when the read completes.
> + */
> + bool cur_folio_owned = false;
>
> trace_iomap_readahead(rac->mapping->host, readahead_count(rac));
>
> while (iomap_iter(&iter, ops) > 0)
> - iter.status = iomap_readahead_iter(&iter, &ctx);
> + iter.status = iomap_readahead_iter(&iter, &ctx,
> + &cur_folio_owned);
>
> iomap_bio_submit_read(&ctx);
>
> - if (ctx.cur_folio && !ctx.cur_folio_in_bio)
> + if (ctx.cur_folio && !cur_folio_owned)
> folio_unlock(ctx.cur_folio);
> }
> EXPORT_SYMBOL_GPL(iomap_readahead);
> --
> 2.47.3
>
>
Thread overview: 23+ messages
2025-09-23 0:23 [PATCH v4 00/15] fuse: use iomap for buffered reads + readahead Joanne Koong
2025-09-23 0:23 ` [PATCH v4 01/15] iomap: move bio read logic into helper function Joanne Koong
2025-09-23 0:23 ` [PATCH v4 02/15] iomap: move read/readahead bio submission " Joanne Koong
2025-09-23 0:23 ` [PATCH v4 03/15] iomap: store read/readahead bio generically Joanne Koong
2025-09-23 0:23 ` [PATCH v4 04/15] iomap: iterate over folio mapping in iomap_readpage_iter() Joanne Koong
2025-09-23 0:23 ` [PATCH v4 05/15] iomap: rename iomap_readpage_iter() to iomap_read_folio_iter() Joanne Koong
2025-09-23 0:23 ` [PATCH v4 06/15] iomap: rename iomap_readpage_ctx struct to iomap_read_folio_ctx Joanne Koong
2025-09-23 0:23 ` [PATCH v4 07/15] iomap: track read/readahead folio ownership internally Joanne Koong
2025-09-24 0:13 ` Darrick J. Wong [this message]
2025-09-23 0:23 ` [PATCH v4 08/15] iomap: add public start/finish folio read helpers Joanne Koong
2025-09-23 0:23 ` [PATCH v4 09/15] iomap: add caller-provided callbacks for read and readahead Joanne Koong
2025-09-24 0:26 ` Darrick J. Wong
2025-09-24 18:18 ` Joanne Koong
2025-09-23 0:23 ` [PATCH v4 10/15] iomap: add bias for async read requests Joanne Koong
2025-09-24 0:28 ` Darrick J. Wong
2025-09-24 18:23 ` Joanne Koong
2025-09-23 0:23 ` [PATCH v4 11/15] iomap: move buffered io bio logic into new file Joanne Koong
2025-09-23 0:23 ` [PATCH v4 12/15] iomap: make iomap_read_folio() a void return Joanne Koong
2025-09-23 0:23 ` [PATCH v4 13/15] fuse: use iomap for read_folio Joanne Koong
2025-09-23 15:39 ` Miklos Szeredi
2025-09-23 17:21 ` Darrick J. Wong
2025-09-23 0:23 ` [PATCH v4 14/15] fuse: use iomap for readahead Joanne Koong
2025-09-23 0:23 ` [PATCH v4 15/15] fuse: remove fc->blkbits workaround for partial writes Joanne Koong