From: David Howells <dhowells@redhat.com>
To: Christian Brauner <christian@brauner.io>
Cc: David Howells <dhowells@redhat.com>,
Paulo Alcantara <pc@manguebit.org>,
netfs@lists.linux.dev, linux-afs@lists.infradead.org,
linux-cifs@vger.kernel.org, ceph-devel@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
Matthew Wilcox <willy@infradead.org>
Subject: [PATCH v4 19/22] netfs: Fix folio->private handling in netfs_perform_write()
Date: Mon, 27 Apr 2026 16:46:34 +0100 [thread overview]
Message-ID: <20260427154639.180684-20-dhowells@redhat.com> (raw)
In-Reply-To: <20260427154639.180684-1-dhowells@redhat.com>
Under some circumstances, netfs_perform_write() doesn't correctly
transition folio->private between its possible states (NULL,
NETFS_FOLIO_COPY_TO_CACHE, a pointer to a netfs_group and a pointer to a
netfs_folio struct).  This can attach private data multiple times,
leaking folio refs, and can also leak netfs_folio structs or netfs_group
refs.
Fix this by consolidating the marking of a folio as uptodate into a
single place that examines what is attached to folio->private, decides
how to clean it up and then sets the new group.  Also, the content
shouldn't be flushed if the folio's group pointer is NULL, even if a
group is supplied in the netfs_group parameter, as that is the case for
a new folio.  A filesystem should either always specify netfs_group or
never specify it.
The Sashiko auto-review tool noted that the fpos >= ctx->zero_point
section could theoretically leak if it modified a streaming-write folio.
This is unlikely, but with a network filesystem, third-party changes can
happen.  It also pointed out that __netfs_set_group() would leak if
called multiple times on the same folio from the whole-folio-modify
section.
Fixes: 8f52de0077ba ("netfs: Reduce number of conditional branches in netfs_perform_write()")
Closes: https://sashiko.dev/#/patchset/20260414082004.3756080-1-dhowells%40redhat.com
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/buffered_write.c | 109 ++++++++++++++++++++---------------
include/trace/events/netfs.h | 1 +
2 files changed, 63 insertions(+), 47 deletions(-)
diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index b782cf83fb57..20ce8d04d874 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -12,24 +12,6 @@
#include <linux/slab.h>
#include "internal.h"
-static void __netfs_set_group(struct folio *folio, struct netfs_group *netfs_group)
-{
- if (netfs_group)
- folio_attach_private(folio, netfs_get_group(netfs_group));
-}
-
-static void netfs_set_group(struct folio *folio, struct netfs_group *netfs_group)
-{
- void *priv = folio_get_private(folio);
-
- if (unlikely(priv != netfs_group)) {
- if (netfs_group && (!priv || priv == NETFS_FOLIO_COPY_TO_CACHE))
- folio_attach_private(folio, netfs_get_group(netfs_group));
- else if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE)
- folio_detach_private(folio);
- }
-}
-
/*
* Grab a folio for writing and lock it. Attempt to allocate as large a folio
* as possible to hold as much of the remaining length as possible in one go.
@@ -157,6 +139,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
size_t offset; /* Offset into pagecache folio */
size_t part; /* Bytes to write to folio */
size_t copied; /* Bytes copied from user */
+ void *priv;
offset = pos & (max_chunk - 1);
part = min(max_chunk - offset, iov_iter_count(iter));
@@ -212,8 +195,9 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
finfo = netfs_folio_info(folio);
group = netfs_folio_group(folio);
- if (unlikely(group != netfs_group) &&
- group != NETFS_FOLIO_COPY_TO_CACHE)
+ if (unlikely(group) &&
+ group != NETFS_FOLIO_COPY_TO_CACHE &&
+ group != netfs_group)
goto flush_content;
if (folio_test_uptodate(folio)) {
@@ -222,9 +206,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
if (unlikely(copied == 0))
goto copy_failed;
- netfs_set_group(folio, netfs_group);
trace = netfs_folio_is_uptodate;
- goto copied;
+ goto copied_uptodate;
}
/* If the page is above the zero-point then we assume that the
@@ -237,24 +220,22 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
if (unlikely(copied == 0))
goto copy_failed;
folio_zero_segment(folio, offset + copied, flen);
- __netfs_set_group(folio, netfs_group);
- folio_mark_uptodate(folio);
- trace = netfs_modify_and_clear;
- goto copied;
+ if (finfo)
+ trace = netfs_modify_and_clear_rm_finfo;
+ else
+ trace = netfs_modify_and_clear;
+ goto mark_uptodate;
}
/* See if we can write a whole folio in one go. */
if (!maybe_trouble && offset == 0 && part >= flen) {
copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
if (likely(copied == part)) {
- if (finfo) {
+ if (finfo)
trace = netfs_whole_folio_modify_filled;
- goto folio_now_filled;
- }
- __netfs_set_group(folio, netfs_group);
- folio_mark_uptodate(folio);
- trace = netfs_whole_folio_modify;
- goto copied;
+ else
+ trace = netfs_whole_folio_modify;
+ goto mark_uptodate;
}
if (copied == 0)
goto copy_failed;
@@ -272,7 +253,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
finfo->dirty_len += finfo->dirty_offset;
if (finfo->dirty_len == flen) {
trace = netfs_whole_folio_modify_filled_efault;
- goto folio_now_filled;
+ goto mark_uptodate;
}
if (copied > finfo->dirty_len)
finfo->dirty_len = copied;
@@ -300,11 +281,11 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
if (unlikely(copied == 0))
goto copy_failed;
- netfs_set_group(folio, netfs_group);
trace = netfs_just_prefetch;
- goto copied;
+ goto copied_uptodate;
}
+ /* Do a streaming write on a folio that has nothing in it yet. */
if (!finfo) {
ret = -EIO;
if (WARN_ON(folio_get_private(folio)))
@@ -313,10 +294,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
if (unlikely(copied == 0))
goto copy_failed;
if (offset == 0 && copied == flen) {
- __netfs_set_group(folio, netfs_group);
- folio_mark_uptodate(folio);
trace = netfs_streaming_filled_page;
- goto copied;
+ goto mark_uptodate;
}
finfo = kzalloc_obj(*finfo);
@@ -345,7 +324,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
finfo->dirty_len += copied;
if (finfo->dirty_offset == 0 && finfo->dirty_len == flen) {
trace = netfs_streaming_cont_filled_page;
- goto folio_now_filled;
+ goto mark_uptodate;
}
trace = netfs_streaming_write_cont;
goto copied;
@@ -361,13 +340,36 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
goto out;
continue;
- folio_now_filled:
- if (finfo->netfs_group)
- folio_change_private(folio, finfo->netfs_group);
- else
- folio_detach_private(folio);
+ /* Mark a folio as being up to date when we've filled it
+ * completely. If the folio has a group attached, then it must
+ * be the same group, otherwise we should have flushed it out
+ * above. We have to get rid of the netfs_folio struct if
+ * there was one.
+ */
+ mark_uptodate:
folio_mark_uptodate(folio);
- kfree(finfo);
+
+ copied_uptodate:
+ priv = folio_get_private(folio);
+ if (likely(priv == netfs_group)) {
+ /* Already set correctly; no change required. */
+ } else if (priv == NETFS_FOLIO_COPY_TO_CACHE) {
+ if (!netfs_group)
+ folio_detach_private(folio);
+ else
+ folio_change_private(folio, netfs_get_group(netfs_group));
+ } else if (!priv) {
+ folio_attach_private(folio, netfs_get_group(netfs_group));
+ } else {
+ WARN_ON_ONCE(!finfo);
+ if (netfs_group)
+ /* finfo->netfs_group has a ref */
+ folio_change_private(folio, netfs_group);
+ else
+ folio_detach_private(folio);
+ kfree(finfo);
+ }
+
copied:
trace_netfs_folio(folio, trace);
flush_dcache_folio(folio);
@@ -530,6 +532,7 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
struct inode *inode = file_inode(file);
struct netfs_inode *ictx = netfs_inode(inode);
vm_fault_t ret = VM_FAULT_NOPAGE;
+ void *priv;
int err;
_enter("%lx", folio->index);
@@ -572,7 +575,19 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
trace_netfs_folio(folio, netfs_folio_trace_mkwrite_plus);
else
trace_netfs_folio(folio, netfs_folio_trace_mkwrite);
- netfs_set_group(folio, netfs_group);
+
+ priv = folio_get_private(folio);
+ if (priv != netfs_group) {
+ if (!netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE)
+ folio_detach_private(folio);
+ else if (netfs_group && priv == NETFS_FOLIO_COPY_TO_CACHE)
+ folio_change_private(folio, netfs_get_group(netfs_group));
+ else if (netfs_group && !priv)
+ folio_attach_private(folio, netfs_get_group(netfs_group));
+ else
+ WARN_ON_ONCE(1);
+ }
+
file_update_time(file);
set_bit(NETFS_ICTX_MODIFIED_ATTR, &ictx->flags);
if (ictx->ops->post_modify)
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index aa9940ba307b..082cb03c6131 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -181,6 +181,7 @@
EM(netfs_whole_folio_modify_filled, "mod-whole-f+") \
EM(netfs_whole_folio_modify_filled_efault, "mod-whole-f+!") \
EM(netfs_modify_and_clear, "mod-n-clear") \
+ EM(netfs_modify_and_clear_rm_finfo, "mod-n-clear+") \
EM(netfs_streaming_write, "mod-streamw") \
EM(netfs_streaming_write_cont, "mod-streamw+") \
EM(netfs_flush_content, "flush") \