From: "Darrick J. Wong" <djwong@kernel.org>
To: Joanne Koong <joannelkoong@gmail.com>
Cc: brauner@kernel.org, miklos@szeredi.hu, hch@infradead.org,
linux-fsdevel@vger.kernel.org, kernel-team@meta.com,
linux-xfs@vger.kernel.org, linux-doc@vger.kernel.org
Subject: Re: [PATCH v1 14/16] fuse: use iomap for read_folio
Date: Wed, 3 Sep 2025 14:13:27 -0700
Message-ID: <20250903211327.GV1587915@frogsfrogsfrogs>
In-Reply-To: <20250829235627.4053234-15-joannelkoong@gmail.com>
On Fri, Aug 29, 2025 at 04:56:25PM -0700, Joanne Koong wrote:
> Read folio data into the page cache using iomap. This gives us granular
> uptodate tracking for large folios, which reduces how much data needs
> to be read in. If some portions of the folio are already uptodate (e.g.
> through a prior write), only the non-uptodate portions need to be read
> in.
>
> Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Looks fine to me,
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
--D
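
(Aside for anyone following along: the payoff is that iomap tracks
per-block uptodate state in struct iomap_folio_state, so a read_folio
after a partial write only has to fetch the stale ranges.  A grossly
simplified sketch of the loop -- next_non_uptodate_range() is a made-up
helper for illustration, not the real code in fs/iomap/buffered-io.c:

	/*
	 * Walk the folio, skip ranges that are already uptodate, and
	 * hand each contiguous stale range to ->read_folio_range().
	 */
	while ((len = next_non_uptodate_range(folio, &pos)) > 0)
		ops->read_folio_range(iter, folio, pos, len);

so fuse only pays for the bytes it actually needs.)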
> ---
> fs/fuse/file.c | 72 ++++++++++++++++++++++++++++++++++----------------
> 1 file changed, 49 insertions(+), 23 deletions(-)
>
> diff --git a/fs/fuse/file.c b/fs/fuse/file.c
> index 5525a4520b0f..bdfb13cdee4b 100644
> --- a/fs/fuse/file.c
> +++ b/fs/fuse/file.c
> @@ -828,22 +828,62 @@ static int fuse_do_readfolio(struct file *file, struct folio *folio,
> return 0;
> }
>
> +static int fuse_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
> + unsigned int flags, struct iomap *iomap,
> + struct iomap *srcmap)
> +{
> + iomap->type = IOMAP_MAPPED;
> + iomap->length = length;
> + iomap->offset = offset;
> + return 0;
> +}
> +
> +static const struct iomap_ops fuse_iomap_ops = {
> + .iomap_begin = fuse_iomap_begin,
> +};
> +
> +struct fuse_fill_read_data {
> + struct file *file;
> +};
> +
> +static int fuse_iomap_read_folio_range_async(const struct iomap_iter *iter,
> + struct folio *folio, loff_t pos,
> + size_t len)
> +{
> + struct fuse_fill_read_data *data = iter->private;
> + struct file *file = data->file;
> + size_t off = offset_in_folio(folio, pos);
> + int ret;
> +
> +	/*
> +	 * For non-readahead read requests, do the read synchronously,
> +	 * since it's not guaranteed that the server can handle
> +	 * out-of-order reads.
> +	 */
> + iomap_start_folio_read(folio, len);
> + ret = fuse_do_readfolio(file, folio, off, len);
> + iomap_finish_folio_read(folio, off, len, ret);
> + return ret;
> +}
> +
> +static const struct iomap_read_ops fuse_iomap_read_ops = {
> + .read_folio_range = fuse_iomap_read_folio_range_async,
> +};
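
The iomap_start_folio_read/iomap_finish_folio_read bracketing looks
redundant around a synchronous fuse_do_readfolio() call, but it keeps
the completion model uniform: the folio isn't unlocked until the last
pending range finishes, so an async implementation could drop in later
without touching iomap.  Purely hypothetical sketch -- neither fuse
helper below exists -- of what that could look like if the server
handled out-of-order reads:

	/* made-up completion callback, for illustration only */
	static void fuse_read_range_done(struct folio *folio, size_t off,
					 size_t len, int err)
	{
		/* last completion marks the folio uptodate and unlocks it */
		iomap_finish_folio_read(folio, off, len, err);
	}

	iomap_start_folio_read(folio, len);
	return fuse_send_read_async(file, folio, off, len,	/* made up */
				    fuse_read_range_done);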
> +
> static int fuse_read_folio(struct file *file, struct folio *folio)
> {
> struct inode *inode = folio->mapping->host;
> + struct fuse_fill_read_data data = {
> + .file = file,
> + };
> int err;
>
> - err = -EIO;
> - if (fuse_is_bad(inode))
> - goto out;
> -
> - err = fuse_do_readfolio(file, folio, 0, folio_size(folio));
> - if (!err)
> - folio_mark_uptodate(folio);
> + if (fuse_is_bad(inode)) {
> + folio_unlock(folio);
> + return -EIO;
> + }
>
> + err = iomap_read_folio(folio, &fuse_iomap_ops, &fuse_iomap_read_ops, &data);
> fuse_invalidate_atime(inode);
> - out:
> - folio_unlock(folio);
> return err;
> }
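
Concretely: take a 16k folio with 4k blocks where a prior buffered
write already marked the first block uptodate.  The old code always
issued a full 16k read; with this patch (and assuming the single-extent
mapping from fuse_iomap_begin() above) the read_folio path should boil
down to one partial read:

	old:  fuse_do_readfolio(file, folio, 0, 16384)
	new:  ->read_folio_range(iter, folio, folio_pos(folio) + 4096, 12288)

which is exactly the granular-uptodate win the commit message
advertises.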
>
> @@ -1394,20 +1434,6 @@ static const struct iomap_write_ops fuse_iomap_write_ops = {
> .read_folio_range = fuse_iomap_read_folio_range,
> };
>
> -static int fuse_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
> - unsigned int flags, struct iomap *iomap,
> - struct iomap *srcmap)
> -{
> - iomap->type = IOMAP_MAPPED;
> - iomap->length = length;
> - iomap->offset = offset;
> - return 0;
> -}
> -
> -static const struct iomap_ops fuse_iomap_ops = {
> - .iomap_begin = fuse_iomap_begin,
> -};
> -
> static ssize_t fuse_cache_write_iter(struct kiocb *iocb, struct iov_iter *from)
> {
> struct file *file = iocb->ki_filp;
> --
> 2.47.3
>
>
Thread overview: 56+ messages
2025-08-29 23:56 [PATCH v1 00/16] fuse: use iomap for buffered reads + readahead Joanne Koong
2025-08-29 23:56 ` [PATCH v1 01/16] iomap: move async bio read logic into helper function Joanne Koong
2025-09-03 20:16 ` Darrick J. Wong
2025-09-04 6:00 ` Christoph Hellwig
2025-09-04 21:44 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 02/16] iomap: rename cur_folio_in_bio to folio_unlocked Joanne Koong
2025-09-03 20:26 ` Darrick J. Wong
2025-09-04 6:03 ` Christoph Hellwig
2025-09-04 22:06 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 03/16] iomap: refactor read/readahead completion Joanne Koong
2025-09-04 6:05 ` Christoph Hellwig
2025-09-04 23:16 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 04/16] iomap: use iomap_iter->private for stashing read/readahead bio Joanne Koong
2025-09-03 20:30 ` Darrick J. Wong
2025-09-04 6:07 ` Christoph Hellwig
2025-09-04 22:20 ` Joanne Koong
2025-09-04 23:15 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 05/16] iomap: propagate iomap_read_folio() error to caller Joanne Koong
2025-09-03 20:32 ` Darrick J. Wong
2025-09-04 6:09 ` Christoph Hellwig
2025-09-04 21:13 ` Matthew Wilcox
2025-08-29 23:56 ` [PATCH v1 06/16] iomap: move read/readahead logic out of CONFIG_BLOCK guard Joanne Koong
2025-08-29 23:56 ` [PATCH v1 07/16] iomap: iterate through entire folio in iomap_readpage_iter() Joanne Koong
2025-09-03 20:43 ` Darrick J. Wong
2025-09-04 22:37 ` Joanne Koong
2025-09-04 6:14 ` Christoph Hellwig
2025-09-04 22:45 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 08/16] iomap: rename iomap_readpage_iter() to iomap_readfolio_iter() Joanne Koong
2025-09-04 6:15 ` Christoph Hellwig
2025-09-04 22:47 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 09/16] iomap: rename iomap_readpage_ctx struct to iomap_readfolio_ctx Joanne Koong
2025-09-03 20:44 ` Darrick J. Wong
2025-08-29 23:56 ` [PATCH v1 10/16] iomap: add iomap_start_folio_read() helper Joanne Koong
2025-09-03 20:52 ` Darrick J. Wong
2025-08-29 23:56 ` [PATCH v1 11/16] iomap: make start folio read and finish folio read public APIs Joanne Koong
2025-09-03 20:53 ` Darrick J. Wong
2025-09-04 6:15 ` Christoph Hellwig
2025-08-29 23:56 ` [PATCH v1 12/16] iomap: add iomap_read_ops for read and readahead Joanne Koong
2025-09-03 21:08 ` Darrick J. Wong
2025-09-04 20:58 ` Joanne Koong
2025-09-04 6:21 ` Christoph Hellwig
2025-09-04 21:38 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 13/16] iomap: add a private arg " Joanne Koong
2025-08-30 1:54 ` Gao Xiang
2025-09-02 21:24 ` Joanne Koong
2025-09-03 1:55 ` Gao Xiang
2025-09-03 21:11 ` Darrick J. Wong
2025-08-29 23:56 ` [PATCH v1 14/16] fuse: use iomap for read_folio Joanne Koong
2025-09-03 21:13 ` Darrick J. Wong [this message]
2025-08-29 23:56 ` [PATCH v1 15/16] fuse: use iomap for readahead Joanne Koong
2025-09-03 21:17 ` Darrick J. Wong
2025-09-04 19:40 ` Joanne Koong
2025-09-04 19:46 ` Joanne Koong
2025-08-29 23:56 ` [PATCH v1 16/16] fuse: remove fuse_readpages_end() null mapping check Joanne Koong
2025-09-02 9:21 ` Miklos Szeredi
2025-09-02 21:19 ` Joanne Koong