* [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel
Hi Christian,
Could you add these patches to the VFS tree for next?
The patches get rid of folio_queue, rolling_buffer and ITER_FOLIOQ,
replacing the folio queue construct used to manage buffers in netfslib with
one based around a segmented chain of bio_vec arrays. There are three main
aims here:
(1) The kernel file I/O subsystem seems to be moving towards consolidating
on the use of bio_vec arrays, so embrace this by moving netfslib to
keep track of its buffers for buffered I/O in bio_vec[] form.
(2) Netfslib already uses a bio_vec[] to handle unbuffered/DIO, so the
number of different buffering schemes used can be reduced to just a
single one.
(3) Always send an entire filesystem RPC request message to a TCP socket
with a single kernel_sendmsg() call, as this is faster and more efficient
and doesn't require the use of corking since it puts the entire
transmission loop inside a single tcp_sendmsg().
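As a rough illustration of aim (3) - generic socket API usage, not code
lifted from this series - once a whole message is described by one
bvec-backed iterator, a single sendmsg replaces a cork/send/uncork
sequence:

	/* Minimal sketch, under the above assumptions: send an entire RPC
	 * message, already assembled as a bio_vec array of nr segments
	 * totalling len bytes, in one call.  MSG_SPLICE_PAGES lets TCP
	 * splice the pages rather than copy them where possible. */
	static int send_whole_message(struct socket *sock, struct bio_vec *bv,
				      unsigned int nr, size_t len)
	{
		struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };

		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, bv, nr, len);
		return sock_sendmsg(sock, &msg);
	}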
For the replacement of folio_queue, a segmented chain of bio_vec arrays
rather than a single monolithic array is provided:
	struct bvecq {
		struct bvecq		*next;
		struct bvecq		*prev;
		unsigned long long	fpos;
		refcount_t		ref;
		u32			priv;
		u16			nr_segs;
		u16			max_segs;
		bool			inline_bv:1;
		bool			free:1;
		bool			unpin:1;
		bool			discontig:1;
		struct bio_vec		*bv;
		struct bio_vec		__bv[];
	};
The fields are:
(1) next, prev - Link segments together in a list. I want this to be
NULL-terminated linear rather than circular to make it possible to
arbitrarily glue bits on the front.
(2) fpos, discontig - Note the current file position of the first byte of
the segment; all the bio_vecs in ->bv[] must be contiguous in the file
space. The fpos can be used to find the folio by file position rather
than from the info in the bio_vec.
If there's a discontiguity, this should break over into a new bvecq
segment with the discontig flag set (though this is redundant if you
keep track of the file position). Note that the beginning and end
file positions in a segment need not be aligned to any filesystem
block size.
(3) ref - Refcount. Each bvecq keeps a ref on the next. I'm not sure
this is entirely necessary, but it makes sharing slices easier.
(4) priv - Private data for the owner. Dispensable; currently only used
for storing a debug ID for tracing in a patch not included here.
(5) max_segs, nr_segs. The size of bv[] and the number of elements used.
I've assumed a maximum of 65535 bio_vecs in the array (which would
represent a ~1MiB allocation).
(6) bv, __bv, inline_bv. bv points to the bio_vec[] array handled by
this segment. This may begin at __bv and if it does inline_bv should
be set (otherwise it's impossible to distinguish a separately
allocated bio_vec[] that follows immediately by coincidence).
(7) free, unpin. free is set if the memory pointed to by the bio_vecs
needs freeing in some way upon I/O completion. unpin is set if this
means using GUP unpinning rather than put_page().
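By way of illustration, allocating a segment with an inline array might
look something like the following sketch (the helper name and exact checks
are mine, not taken from the patches):

	static struct bvecq *bvecq_alloc_inline(unsigned int max_segs, gfp_t gfp)
	{
		struct bvecq *bq;

		if (max_segs > U16_MAX)
			return NULL;	/* nr_segs/max_segs are u16 */
		bq = kzalloc(struct_size(bq, __bv, max_segs), gfp);
		if (!bq)
			return NULL;
		refcount_set(&bq->ref, 1);
		bq->max_segs	= max_segs;
		bq->bv		= bq->__bv;	/* The array is inline... */
		bq->inline_bv	= true;		/* ...so mark it as such */
		return bq;
	}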
I've also defined an iov_iter iterator type ITER_BVECQ to walk this sort of
construct so that it can be passed directly to sendmsg() or block-based DIO
(as cachefiles does).
This series makes the following changes to netfslib:
(1) The folio_queue chain used to hold folios for buffered I/O is replaced
with a bvecq chain. Each bio_vec then holds (a portion of) one folio.
Each bvecq holds a contiguous sequence of folios, but adjacent bvecqs
in a chain may be discontiguous.
(2) For unbuffered/DIO, the source iov_iter is extracted into a bvecq
chain.
(3) An abstract position representation ('bvecq_pos') is created that can
be used to hold a position in a bvecq chain (a possible shape is sketched
after this list). For the moment, this takes a ref on the bvecq it points
to, but that may be excessive.
(4) Buffer tracking is managed with three cursors: The load_cursor, at
which new folios are added as we go; the dispatch_cursor, at which new
subrequests' buffers start when they're created; and the
collect_cursor, the point at which folios are being unlocked.
Not all cursors are necessarily needed in all situations and during
buffered writeback, we need a dispatch cursor per stream (one for the
network filesystem and one for the cache).
(5) ->prepare_read(), buffer setting up and ->issue_read() are merged, as
are the write variants, with the filesystem calling back up to
netfslib to prepare its buffer. This simplifies the process of
setting up a subrequest. It may even make sense to have the
filesystem allocate the subrequest.
(6) Retry dispatch tracking is added to netfs_io_request so that the
buffer preparation functions can find it. Retry requires an
additional buffer cursor.
(7) Netfslib dispatches I/O by accumulating enough bufferage to dispatch
at least one subrequest, then looping to generate as many as the
filesystem wants to (they may be limited by other constraints,
e.g. max RDMA segment count or negotiated max size). This loop could
be moved down into the filesystem. A new method is provided by which
netfslib can ask the filesystem to provide an estimate of the data
that should be accumulated before dispatch begins.
(8) Reading from the cache is now managed by querying the cache to provide
a list of the next two data extents within the cache.
(9) AFS directories are switched to using a bvecq rather than a
folio_queue to hold their contents.
(10) CIFS is switched to using a bvecq rather than a folio_queue for
holding a temporary encryption buffer.
(11) CIFS RDMA is given the ability to extract ITER_BVECQ and support for
extracting ITER_FOLIOQ, ITER_BVEC and ITER_KVEC is removed.
(12) All the folio_queue and rolling_buffer code is removed.
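For orientation, the position representation in (3) presumably pairs a
segment pointer with an offset into it, something like this sketch (the
field names are guesses, not the patches'):

	struct bvecq_pos {
		struct bvecq	*bq;	/* Segment pointed to (a ref is held on it) */
		unsigned int	seg;	/* Index into bq->bv[] */
		size_t		off;	/* Byte offset within that bio_vec */
	};

Each of the cursors in (4) and the retry cursor in (6) would then be an
instance of such a position.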
Cachefiles is also modified:
(1) The object type in the cachefiles file xattr is now correctly set to
CACHEFILES_CONTENT_{SINGLE,ALL,BACKFS_MAP} rather than just being 0,
to indicate whether we have a single monolithic blob, all the data up
to the cache i_size with no holes, or a sparse file with the data mapped
by the backing filesystem (as currently upstream). The enum is sketched
after this list.
(2) For "ALL" type files, the cache's i_size is used to track how much
data is saved in the cache and no longer bears any relation to the
netfs i_size. The actual object size is stored in the xattr.
(3) For most typical files, which are contiguous and written progressively,
the object type is now set to "ALL". For anything else, cachefiles
uses SEEK_DATA/SEEK_HOLE to find extent outlines as before (this is the
current behaviour and needs to be fixed, but in a separate set of
patches as it's not trivial).
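For reference, the content types mentioned in (1) are defined in
fs/cachefiles/internal.h along these lines (quoted from memory; check the
tree for the authoritative version):

	enum cachefiles_content {
		/* These values are recorded in the xattr */
		CACHEFILES_CONTENT_NO_DATA	= 0, /* No content stored */
		CACHEFILES_CONTENT_SINGLE	= 1, /* Content is monolithic, all is present */
		CACHEFILES_CONTENT_ALL		= 2, /* Content is all present, no map */
		CACHEFILES_CONTENT_BACKFS_MAP	= 3, /* Content is piecemeal, mapped through backing fs */
	};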
Two further things that I'm working on (but not in this branch) are:
(1) Make it so that a filesystem can be given a copy of a subchain which
it can then tack header and trailer protocol elements upon to form a
single message (I have this working for cifs) and even join copies
together with intervening protocol elements to form compounds.
(2) Make it so that a filesystem can 'splice' out the contents of the TCP
receive queue into a bvecq chain. This allows the socket lock to be
dropped much more quickly and the copying of data read to the
destination buffers to happen without the lock. I have this working
for cifs too. Kernel recvmsg() doesn't then block kernel sendmsg()
for anywhere near as long.
There are also some things I want to consider for the future:
(1) Create one or more batched iteration functions to 'unlock' all the
folios in a bio_vec[], where 'unlock' is the appropriate action for
ending a read or a write. Batching should hopefully also improve the
efficiency of wrangling the marks on the xarray. Very often these
marks are going to be represented by contiguous bits, so there may be
a way to change them in bulk.
(2) Rather than walking the bvecq chain to get each individual folio out
via bv_page, use the file position stored on the bvecq and the sum of
bv_len to iterate over the appropriate range in i_pages.
(3) Change iov_iter to store the initial starting point and for
iov_iter_revert() to reset to that and advance. This would (a) help
prevent over-reversion and (b) dispense with the need for a prev
pointer.
(4) Use bvecq to replace scatterlist. One problem with replacing
scatterlist is that crypto drivers like to glue bits onto the front of
the scatterlists they're given (something that's trivial with that API) -
and bvecq chaining is one way to achieve the same thing.
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=netfs-next
Thanks,
David
David Howells (22):
netfs: Fix read abandonment during retry
netfs: Fix the handling of stream->front by removing it
cachefiles: Fix excess dput() after end_removing()
cachefiles: Don't rely on backing fs storage map for most use cases
mm: Make readahead store folio count in readahead_control
netfs: Bulk load the readahead-provided folios up front
Add a function to kmap one page of a multipage bio_vec
iov_iter: Add a segmented queue of bio_vec[]
netfs: Add some tools for managing bvecq chains
netfs: Add a function to extract from an iter into a bvecq
afs: Use a bvecq to hold dir content rather than folioq
cifs: Use a bvecq for buffering instead of a folioq
cifs: Support ITER_BVECQ in smb_extract_iter_to_rdma()
netfs: Switch to using bvecq rather than folio_queue and
rolling_buffer
cifs: Remove support for ITER_KVEC/BVEC/FOLIOQ from
smb_extract_iter_to_rdma()
netfs: Remove netfs_alloc/free_folioq_buffer()
netfs: Remove netfs_extract_user_iter()
iov_iter: Remove ITER_FOLIOQ
netfs: Remove folio_queue and rolling_buffer
netfs: Check for too much data being read
netfs: Limit the minimum trigger for progress reporting
netfs: Combine prepare and issue ops and grab the buffers on request
Deepanshu Kartikey (2):
netfs: Fix NULL pointer dereference in netfs_unbuffered_write() on
retry
netfs: Fix kernel BUG in netfs_limit_iter() for ITER_KVEC iterators
Paulo Alcantara (1):
netfs: fix error handling in netfs_extract_user_iter()
Viacheslav Dubeyko (1):
netfs: fix VM_BUG_ON_FOLIO() issue in netfs_write_begin() call
Documentation/core-api/folio_queue.rst | 209 ------
Documentation/core-api/index.rst | 1 -
fs/9p/vfs_addr.c | 49 +-
fs/afs/dir.c | 43 +-
fs/afs/dir_edit.c | 43 +-
fs/afs/dir_search.c | 33 +-
fs/afs/file.c | 27 +-
fs/afs/fsclient.c | 8 +-
fs/afs/inode.c | 20 +-
fs/afs/internal.h | 14 +-
fs/afs/write.c | 35 +-
fs/afs/yfsclient.c | 6 +-
fs/cachefiles/interface.c | 82 +-
fs/cachefiles/internal.h | 10 +-
fs/cachefiles/io.c | 486 ++++++++----
fs/cachefiles/namei.c | 55 +-
fs/cachefiles/xattr.c | 20 +-
fs/ceph/Kconfig | 1 +
fs/ceph/addr.c | 127 ++--
fs/netfs/Kconfig | 3 +
fs/netfs/Makefile | 4 +-
fs/netfs/buffered_read.c | 524 ++++++++-----
fs/netfs/buffered_write.c | 30 +-
fs/netfs/bvecq.c | 706 ++++++++++++++++++
fs/netfs/direct_read.c | 119 ++-
fs/netfs/direct_write.c | 174 +++--
fs/netfs/fscache_io.c | 6 -
fs/netfs/internal.h | 89 ++-
fs/netfs/iterator.c | 305 +++-----
fs/netfs/misc.c | 147 +---
fs/netfs/objects.c | 21 +-
fs/netfs/read_collect.c | 134 ++--
fs/netfs/read_pgpriv2.c | 168 +++--
fs/netfs/read_retry.c | 227 +++---
fs/netfs/read_single.c | 170 +++--
fs/netfs/rolling_buffer.c | 222 ------
fs/netfs/stats.c | 6 +-
fs/netfs/write_collect.c | 126 +++-
fs/netfs/write_issue.c | 987 ++++++++++++++-----------
fs/netfs/write_retry.c | 135 ++--
fs/nfs/Kconfig | 1 +
fs/nfs/fscache.c | 24 +-
fs/smb/client/cifsglob.h | 2 +-
fs/smb/client/cifssmb.c | 13 +-
fs/smb/client/file.c | 146 ++--
fs/smb/client/smb2ops.c | 78 +-
fs/smb/client/smb2pdu.c | 28 +-
fs/smb/client/smbdirect.c | 152 +---
fs/smb/client/transport.c | 15 +-
include/linux/bvec.h | 21 +
include/linux/bvecq.h | 205 +++++
include/linux/folio_queue.h | 282 -------
include/linux/fscache.h | 17 +
include/linux/iov_iter.h | 68 +-
include/linux/netfs.h | 145 ++--
include/linux/pagemap.h | 1 +
include/linux/rolling_buffer.h | 61 --
include/linux/uio.h | 17 +-
include/trace/events/cachefiles.h | 17 +-
include/trace/events/netfs.h | 123 ++-
lib/iov_iter.c | 395 +++++-----
lib/scatterlist.c | 57 +-
lib/tests/kunit_iov_iter.c | 185 ++---
mm/readahead.c | 4 +
net/9p/client.c | 8 +-
65 files changed, 4144 insertions(+), 3493 deletions(-)
delete mode 100644 Documentation/core-api/folio_queue.rst
create mode 100644 fs/netfs/bvecq.c
delete mode 100644 fs/netfs/rolling_buffer.c
create mode 100644 include/linux/bvecq.h
delete mode 100644 include/linux/folio_queue.h
delete mode 100644 include/linux/rolling_buffer.h
* [PATCH 01/26] netfs: Fix NULL pointer dereference in netfs_unbuffered_write() on retry
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Deepanshu Kartikey,
syzbot+7227db0fbac9f348dba0, Deepanshu Kartikey
From: Deepanshu Kartikey <kartikey406@gmail.com>
When a write subrequest is marked NETFS_SREQ_NEED_RETRY, the retry path
in netfs_unbuffered_write() unconditionally calls stream->prepare_write()
without checking if it is NULL.
Filesystems such as 9P do not set the prepare_write operation, so
stream->prepare_write remains NULL. When get_user_pages() fails with
-EFAULT and the subrequest is flagged for retry, this results in a NULL
pointer dereference at fs/netfs/direct_write.c:189.
Fix this by mirroring the pattern already used in write_retry.c: if
stream->prepare_write is NULL, skip renegotiation and directly reissue
the subrequest via netfs_reissue_write(), which handles iterator reset,
IN_PROGRESS flag, stats update and reissue internally.
Fixes: a0b4c7a49137 ("netfs: Fix unbuffered/DIO writes to dispatch subrequests in strict sequence")
Reported-by: syzbot+7227db0fbac9f348dba0@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=7227db0fbac9f348dba0
Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
---
fs/netfs/direct_write.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index dd1451bf7543..4d9760e36c11 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -186,10 +186,18 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
stream->sreq_max_segs = INT_MAX;
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- stream->prepare_write(subreq);
- __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
- netfs_stat(&netfs_n_wh_retry_write_subreq);
+ if (stream->prepare_write) {
+ stream->prepare_write(subreq);
+ __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+ netfs_stat(&netfs_n_wh_retry_write_subreq);
+ } else {
+ struct iov_iter source;
+
+ netfs_reset_iter(subreq);
+ source = subreq->io_iter;
+ netfs_reissue_write(stream, subreq, &source);
+ }
}
netfs_unbuffered_write_done(wreq);
* [PATCH 02/26] netfs: Fix kernel BUG in netfs_limit_iter() for ITER_KVEC iterators
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Deepanshu Kartikey,
syzbot+9c058f0d63475adc97fd, Deepanshu Kartikey
From: Deepanshu Kartikey <kartikey406@gmail.com>
When a process crashes and the kernel writes a core dump to a 9P
filesystem, __kernel_write() creates an ITER_KVEC iterator. This
iterator reaches netfs_limit_iter() via netfs_unbuffered_write(), which
only handles ITER_FOLIOQ, ITER_BVEC and ITER_XARRAY iterator types,
hitting the BUG() for any other type.
Fix this by adding netfs_limit_kvec() following the same pattern as
netfs_limit_bvec(), since both kvec and bvec are simple segment arrays
with pointer and length fields. Dispatch it from netfs_limit_iter() when
the iterator type is ITER_KVEC.
Fixes: cae932d3aee5 ("netfs: Add func to calculate pagecount/size-limited span of an iterator")
Reported-by: syzbot+9c058f0d63475adc97fd@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=9c058f0d63475adc97fd
Tested-by: syzbot+9c058f0d63475adc97fd@syzkaller.appspotmail.com
Signed-off-by: Deepanshu Kartikey <Kartikey406@gmail.com>
Signed-off-by: David Howells <dhowells@redhat.com>
---
fs/netfs/iterator.c | 43 +++++++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 72a435e5fc6d..154a14bb2d7f 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -142,6 +142,47 @@ static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
return min(span, max_size);
}
+/*
+ * Select the span of a kvec iterator we're going to use. Limit it by both
+ * maximum size and maximum number of segments. Returns the size of the span
+ * in bytes.
+ */
+static size_t netfs_limit_kvec(const struct iov_iter *iter, size_t start_offset,
+ size_t max_size, size_t max_segs)
+{
+ const struct kvec *kvecs = iter->kvec;
+ unsigned int nkv = iter->nr_segs, ix = 0, nsegs = 0;
+ size_t len, span = 0, n = iter->count;
+ size_t skip = iter->iov_offset + start_offset;
+
+ if (WARN_ON(!iov_iter_is_kvec(iter)) ||
+ WARN_ON(start_offset > n) ||
+ n == 0)
+ return 0;
+
+ while (n && ix < nkv && skip) {
+ len = kvecs[ix].iov_len;
+ if (skip < len)
+ break;
+ skip -= len;
+ n -= len;
+ ix++;
+ }
+
+ while (n && ix < nkv) {
+ len = min3(n, kvecs[ix].iov_len - skip, max_size);
+ span += len;
+ nsegs++;
+ ix++;
+ if (span >= max_size || nsegs >= max_segs)
+ break;
+ skip = 0;
+ n -= len;
+ }
+
+ return min(span, max_size);
+}
+
/*
* Select the span of an xarray iterator we're going to use. Limit it by both
* maximum size and maximum number of segments. It is assumed that segments
@@ -245,6 +286,8 @@ size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
if (iov_iter_is_xarray(iter))
return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
+ if (iov_iter_is_kvec(iter))
+ return netfs_limit_kvec(iter, start_offset, max_size, max_segs);
BUG();
}
EXPORT_SYMBOL(netfs_limit_iter);
* [PATCH 03/26] netfs: fix VM_BUG_ON_FOLIO() issue in netfs_write_begin() call
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Viacheslav Dubeyko, Paulo Alcantara
From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
Multiple runs of the generic/013 test case can reproduce a kernel BUG
at mm/filemap.c:1504 with a probability of about 30%:

	while true; do
		sudo ./check generic/013
	done
[ 9849.452376] page: refcount:3 mapcount:0 mapping:00000000e58ff252 index:0x10781 pfn:0x1c322
[ 9849.452412] memcg:ffff8881a1915800
[ 9849.452417] aops:ceph_aops ino:1000058db9e dentry name(?):"f9XXXXXX"
[ 9849.452432] flags: 0x17ffffc0000000(node=0|zone=2|lastcpupid=0x1fffff)
[ 9849.452441] raw: 0017ffffc0000000 0000000000000000 dead000000000122 ffff88816110d248
[ 9849.452445] raw: 0000000000010781 0000000000000000 00000003ffffffff ffff8881a1915800
[ 9849.452447] page dumped because: VM_BUG_ON_FOLIO(!folio_test_locked(folio))
[ 9849.452474] ------------[ cut here ]------------
[ 9849.452476] kernel BUG at mm/filemap.c:1504!
[ 9849.478635] Oops: invalid opcode: 0000 [#1] SMP KASAN NOPTI
[ 9849.481772] CPU: 2 UID: 0 PID: 84223 Comm: fsstress Not tainted 7.0.0-rc1+ #18 PREEMPT(full)
[ 9849.482881] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-9.fc43 06/1
0/2025
[ 9849.484539] RIP: 0010:folio_unlock+0x85/0xa0
[ 9849.485076] Code: 89 df 31 f6 e8 1c f3 ff ff 48 8b 5d f8 c9 31 c0 31 d2 31 f6 31 ff c3 cc
cc cc cc 48 c7 c6 80 6c d9 a7 48 89 df e8 4b b3 10 00 <0f> 0b 48 89 df e8 21 e6 2c 00 eb 9d 0f 1f 40 00 66 66 2e 0f 1f 84
[ 9849.493818] RSP: 0018:ffff8881bb8076b0 EFLAGS: 00010246
[ 9849.495740] RAX: 0000000000000000 RBX: ffffea00070c8980 RCX: 0000000000000000
[ 9849.498678] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ 9849.500559] RBP: ffff8881bb8076b8 R08: 0000000000000000 R09: 0000000000000000
[ 9849.501097] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000010782000
[ 9849.502108] R13: ffff8881935de738 R14: ffff88816110d010 R15: 0000000000001000
[ 9849.502516] FS: 00007e36cbe94740(0000) GS:ffff88824a899000(0000) knlGS:0000000000000000
[ 9849.502996] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9849.503810] CR2: 000000c0002b0000 CR3: 000000011bbf6004 CR4: 0000000000772ef0
[ 9849.504459] PKRU: 55555554
[ 9849.504626] Call Trace:
[ 9849.505242] <TASK>
[ 9849.505379] netfs_write_begin+0x7c8/0x10a0
[ 9849.505877] ? __kasan_check_read+0x11/0x20
[ 9849.506384] ? __pfx_netfs_write_begin+0x10/0x10
[ 9849.507178] ceph_write_begin+0x8c/0x1c0
[ 9849.507934] generic_perform_write+0x391/0x8f0
[ 9849.508503] ? __pfx_generic_perform_write+0x10/0x10
[ 9849.509062] ? file_update_time_flags+0x19a/0x4b0
[ 9849.509581] ? ceph_get_caps+0x63/0xf0
[ 9849.510259] ? ceph_get_caps+0x63/0xf0
[ 9849.510530] ceph_write_iter+0xe79/0x1ae0
[ 9849.511282] ? __pfx_ceph_write_iter+0x10/0x10
[ 9849.511839] ? lock_acquire+0x1ad/0x310
[ 9849.512334] ? ksys_write+0xf9/0x230
[ 9849.512582] ? lock_is_held_type+0xaa/0x140
[ 9849.513128] vfs_write+0x512/0x1110
[ 9849.513634] ? __fget_files+0x33/0x350
[ 9849.513893] ? __pfx_vfs_write+0x10/0x10
[ 9849.514143] ? mutex_lock_nested+0x1b/0x30
[ 9849.514394] ksys_write+0xf9/0x230
[ 9849.514621] ? __pfx_ksys_write+0x10/0x10
[ 9849.514887] ? do_syscall_64+0x25e/0x1520
[ 9849.515122] ? __kasan_check_read+0x11/0x20
[ 9849.515366] ? trace_hardirqs_on_prepare+0x178/0x1c0
[ 9849.515655] __x64_sys_write+0x72/0xd0
[ 9849.515885] ? trace_hardirqs_on+0x24/0x1c0
[ 9849.516130] x64_sys_call+0x22f/0x2390
[ 9849.516341] do_syscall_64+0x12b/0x1520
[ 9849.516545] ? do_syscall_64+0x27c/0x1520
[ 9849.516783] ? do_syscall_64+0x27c/0x1520
[ 9849.517003] ? lock_release+0x318/0x480
[ 9849.517220] ? __x64_sys_io_getevents+0x143/0x2d0
[ 9849.517479] ? percpu_ref_put_many.constprop.0+0x8f/0x210
[ 9849.517779] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 9849.518073] ? do_syscall_64+0x25e/0x1520
[ 9849.518291] ? __kasan_check_read+0x11/0x20
[ 9849.518519] ? trace_hardirqs_on_prepare+0x178/0x1c0
[ 9849.518799] ? do_syscall_64+0x27c/0x1520
[ 9849.519024] ? local_clock_noinstr+0xf/0x120
[ 9849.519262] ? entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 9849.519544] ? do_syscall_64+0x25e/0x1520
[ 9849.519781] ? __kasan_check_read+0x11/0x20
[ 9849.520008] ? trace_hardirqs_on_prepare+0x178/0x1c0
[ 9849.520273] ? do_syscall_64+0x27c/0x1520
[ 9849.520491] ? trace_hardirqs_on_prepare+0x178/0x1c0
[ 9849.520767] ? irqentry_exit+0x10c/0x6c0
[ 9849.520984] ? trace_hardirqs_off+0x86/0x1b0
[ 9849.521224] ? exc_page_fault+0xab/0x130
[ 9849.521472] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 9849.521766] RIP: 0033:0x7e36cbd14907
[ 9849.521989] Code: 10 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 51 c3 48 83 ec 28 48 89 54 24 18 48 89 74 24
[ 9849.523057] RSP: 002b:00007ffff2d2a968 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[ 9849.523484] RAX: ffffffffffffffda RBX: 000000000000e549 RCX: 00007e36cbd14907
[ 9849.523885] RDX: 000000000000e549 RSI: 00005bd797ec6370 RDI: 0000000000000004
[ 9849.524277] RBP: 0000000000000004 R08: 0000000000000047 R09: 00005bd797ec6370
[ 9849.524652] R10: 0000000000000078 R11: 0000000000000246 R12: 0000000000000049
[ 9849.525062] R13: 0000000010781a37 R14: 00005bd797ec6370 R15: 0000000000000000
[ 9849.525447] </TASK>
[ 9849.525574] Modules linked in: intel_rapl_msr intel_rapl_common intel_uncore_frequency_common intel_pmc_core pmt_telemetry pmt_discovery pmt_class intel_pmc_ssram_telemetry intel_vsec kvm_intel joydev kvm irqbypass ghash_clmulni_intel aesni_intel input_leds rapl mac_hid psmouse vga16fb serio_raw vgastate floppy i2c_piix4 bochs qemu_fw_cfg i2c_smbus pata_acpi sch_fq_codel rbd msr parport_pc ppdev lp parport efi_pstore
[ 9849.529150] ---[ end trace 0000000000000000 ]---
[ 9849.529502] RIP: 0010:folio_unlock+0x85/0xa0
[ 9849.530813] Code: 89 df 31 f6 e8 1c f3 ff ff 48 8b 5d f8 c9 31 c0 31 d2 31 f6 31 ff c3 cc cc cc cc 48 c7 c6 80 6c d9 a7 48 89 df e8 4b b3 10 00 <0f> 0b 48 89 df e8 21 e6 2c 00 eb 9d 0f 1f 40 00 66 66 2e 0f 1f 84
[ 9849.534986] RSP: 0018:ffff8881bb8076b0 EFLAGS: 00010246
[ 9849.536198] RAX: 0000000000000000 RBX: ffffea00070c8980 RCX: 0000000000000000
[ 9849.537718] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ 9849.539321] RBP: ffff8881bb8076b8 R08: 0000000000000000 R09: 0000000000000000
[ 9849.540862] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000010782000
[ 9849.542438] R13: ffff8881935de738 R14: ffff88816110d010 R15: 0000000000001000
[ 9849.543996] FS: 00007e36cbe94740(0000) GS:ffff88824b899000(0000) knlGS:0000000000000000
[ 9849.545854] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9849.547092] CR2: 00007e36cb3ff000 CR3: 000000011bbf6006 CR4: 0000000000772ef0
[ 9849.548679] PKRU: 55555554
The race sequence:
1. Read completes -> netfs_read_collection() runs
2. netfs_wake_rreq_flag(rreq, NETFS_RREQ_IN_PROGRESS, ...)
3. netfs_wait_for_read() returns -EFAULT to netfs_write_begin()
4. The netfs_unlock_abandoned_read_pages() unlocks the folio
5. netfs_write_begin() calls folio_unlock(folio) -> VM_BUG_ON_FOLIO()
The key reason for the issue is that netfs_unlock_abandoned_read_pages()
doesn't check the NETFS_RREQ_NO_UNLOCK_FOLIO flag and executes
folio_unlock() unconditionally. This patch implements logic in
netfs_unlock_abandoned_read_pages() similar to that in
netfs_unlock_read_folio().
Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
cc: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
cc: Ceph Development <ceph-devel@vger.kernel.org>
Signed-off-by: David Howells <dhowells@redhat.com>
---
fs/netfs/read_retry.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index 7793ba5e3e8f..71a0c7ed163a 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -285,8 +285,15 @@ void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq)
struct folio *folio = folioq_folio(p, slot);
if (folio && !folioq_is_marked2(p, slot)) {
- trace_netfs_folio(folio, netfs_folio_trace_abandon);
- folio_unlock(folio);
+ if (folio->index == rreq->no_unlock_folio &&
+ test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO,
+ &rreq->flags)) {
+ _debug("no unlock");
+ } else {
+ trace_netfs_folio(folio,
+ netfs_folio_trace_abandon);
+ folio_unlock(folio);
+ }
}
}
}
* [PATCH 04/26] netfs: fix error handling in netfs_extract_user_iter()
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, Xiaoli Feng, stable
From: Paulo Alcantara <pc@manguebit.org>
In netfs_extract_user_iter(), if iov_iter_extract_pages() failed to
extract user pages, bail out on -ENOMEM, otherwise return the error
code only if @npages == 0, allowing short DIO reads and writes to be
issued.
This fixes mmapstress02 from LTP tests against CIFS.
Reported-by: Xiaoli Feng <xifeng@redhat.com>
Fixes: 85dd2c8ff368 ("netfs: Add a function to extract a UBUF or IOVEC into a BVEC iterator")
Signed-off-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
Reviewed-by: David Howells <dhowells@redhat.com>
Cc: netfs@lists.linux.dev
Cc: stable@vger.kernel.org
Cc: linux-cifs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: David Howells <dhowells@redhat.com>
---
fs/netfs/iterator.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 154a14bb2d7f..adca78747f23 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -22,7 +22,7 @@
*
* Extract the page fragments from the given amount of the source iterator and
* build up a second iterator that refers to all of those bits. This allows
- * the original iterator to disposed of.
+ * the original iterator to be disposed of.
*
* @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA be
* allowed on the pages extracted.
@@ -67,8 +67,8 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
ret = iov_iter_extract_pages(orig, &pages, count,
max_pages - npages, extraction_flags,
&offset);
- if (ret < 0) {
- pr_err("Couldn't get user pages (rc=%zd)\n", ret);
+ if (unlikely(ret <= 0)) {
+ ret = ret ?: -EIO;
break;
}
@@ -97,6 +97,13 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
npages += cur_npages;
}
+ if (ret < 0 && (ret == -ENOMEM || npages == 0)) {
+ for (i = 0; i < npages; i++)
+ unpin_user_page(bv[i].bv_page);
+ kvfree(bv);
+ return ret;
+ }
+
iov_iter_bvec(new, orig->data_source, bv, npages, orig_len - count);
return npages;
}
* [PATCH 05/26] netfs: Fix read abandonment during retry
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Under certain circumstances, all the remaining subrequests from a read
request will get abandoned during retry. The abandonment process expects
the 'subreq' variable to be set to the place to start abandonment from, but
it doesn't always have a useful value (it will be uninitialised on the
first pass through the loop and it may point to a deleted subrequest on
later passes).
Fix the first jump to "abandon:" to point subreq at the first subrequest
expected to need retry (which, in this abandonment case, unexpectedly
turned out to no longer have NEED_RETRY set).
Also clear the subreq pointer after discarding superfluous retryable
subrequests to cause an oops if we do try to access it.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading")
---
fs/netfs/read_retry.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index 71a0c7ed163a..68fc869513ef 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -93,8 +93,10 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
from->start, from->transferred, from->len);
if (test_bit(NETFS_SREQ_FAILED, &from->flags) ||
- !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))
+ !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) {
+ subreq = from;
goto abandon;
+ }
list_for_each_continue(next, &stream->subrequests) {
subreq = list_entry(next, struct netfs_io_subrequest, rreq_link);
@@ -178,6 +180,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
if (subreq == to)
break;
}
+ subreq = NULL;
continue;
}
* [PATCH 06/26] netfs: Fix the handling of stream->front by removing it
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
The netfs_io_stream::front member is meant to point to the subrequest
currently being collected on a stream, but it isn't actually used this way
by direct write (which mostly ignores it). However, there's a tracepoint
which looks at it. Further, stream->front is actually redundant with
stream->subrequests.next.
Fix the potential problem in the direct write code by just removing the
member and using stream->subrequests.next instead, thereby also
simplifying the code.
Fixes: a0b4c7a49137 ("netfs: Fix unbuffered/DIO writes to dispatch subrequests in strict sequence")
Reported-by: Paulo Alcantara <pc@manguebit.org>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/buffered_read.c | 3 +--
fs/netfs/direct_read.c | 3 +--
fs/netfs/direct_write.c | 1 -
fs/netfs/read_collect.c | 4 ++--
fs/netfs/read_single.c | 1 -
fs/netfs/write_collect.c | 4 ++--
fs/netfs/write_issue.c | 3 +--
include/linux/netfs.h | 1 -
include/trace/events/netfs.h | 8 ++++----
9 files changed, 11 insertions(+), 17 deletions(-)
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 88a0d801525f..a8c0d86118c5 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -171,9 +171,8 @@ static void netfs_queue_read(struct netfs_io_request *rreq,
spin_lock(&rreq->lock);
list_add_tail(&subreq->rreq_link, &stream->subrequests);
if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
- stream->front = subreq;
if (!stream->active) {
- stream->collected_to = stream->front->start;
+ stream->collected_to = subreq->start;
/* Store list pointers before active flag */
smp_store_release(&stream->active, true);
}
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index a498ee8d6674..f72e6da88cca 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -71,9 +71,8 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
spin_lock(&rreq->lock);
list_add_tail(&subreq->rreq_link, &stream->subrequests);
if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
- stream->front = subreq;
if (!stream->active) {
- stream->collected_to = stream->front->start;
+ stream->collected_to = subreq->start;
/* Store list pointers before active flag */
smp_store_release(&stream->active, true);
}
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index 4d9760e36c11..f9ab69de3e29 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -111,7 +111,6 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred);
subreq = stream->construct;
stream->construct = NULL;
- stream->front = NULL;
}
/* Check if (re-)preparation failed. */
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 137f0e28a44c..e5f6665b3341 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -205,7 +205,8 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
* in progress. The issuer thread may be adding stuff to the tail
* whilst we're doing this.
*/
- front = READ_ONCE(stream->front);
+ front = list_first_entry_or_null(&stream->subrequests,
+ struct netfs_io_subrequest, rreq_link);
while (front) {
size_t transferred;
@@ -301,7 +302,6 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
list_del_init(&front->rreq_link);
front = list_first_entry_or_null(&stream->subrequests,
struct netfs_io_subrequest, rreq_link);
- stream->front = front;
spin_unlock(&rreq->lock);
netfs_put_subrequest(remove,
notes & ABANDON_SREQ ?
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
index 8e6264f62a8f..d0e23bc42445 100644
--- a/fs/netfs/read_single.c
+++ b/fs/netfs/read_single.c
@@ -107,7 +107,6 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
spin_lock(&rreq->lock);
list_add_tail(&subreq->rreq_link, &stream->subrequests);
trace_netfs_sreq(subreq, netfs_sreq_trace_added);
- stream->front = subreq;
/* Store list pointers before active flag */
smp_store_release(&stream->active, true);
spin_unlock(&rreq->lock);
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index 83eb3dc1adf8..b194447f4b11 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -228,7 +228,8 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
if (!smp_load_acquire(&stream->active))
continue;
- front = stream->front;
+ front = list_first_entry_or_null(&stream->subrequests,
+ struct netfs_io_subrequest, rreq_link);
while (front) {
trace_netfs_collect_sreq(wreq, front);
//_debug("sreq [%x] %llx %zx/%zx",
@@ -279,7 +280,6 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
list_del_init(&front->rreq_link);
front = list_first_entry_or_null(&stream->subrequests,
struct netfs_io_subrequest, rreq_link);
- stream->front = front;
spin_unlock(&wreq->lock);
netfs_put_subrequest(remove,
notes & SAW_FAILURE ?
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 437268f65640..2db688f94125 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -206,9 +206,8 @@ void netfs_prepare_write(struct netfs_io_request *wreq,
spin_lock(&wreq->lock);
list_add_tail(&subreq->rreq_link, &stream->subrequests);
if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
- stream->front = subreq;
if (!stream->active) {
- stream->collected_to = stream->front->start;
+ stream->collected_to = subreq->start;
/* Write list pointers before active flag */
smp_store_release(&stream->active, true);
}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 72ee7d210a74..ba17ac5bf356 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -140,7 +140,6 @@ struct netfs_io_stream {
void (*issue_write)(struct netfs_io_subrequest *subreq);
/* Collection tracking */
struct list_head subrequests; /* Contributory I/O operations */
- struct netfs_io_subrequest *front; /* Op being collected */
unsigned long long collected_to; /* Position we've collected results to */
size_t transferred; /* The amount transferred from this stream */
unsigned short error; /* Aggregate error for the stream */
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 2d366be46a1c..cbe28211106c 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -740,19 +740,19 @@ TRACE_EVENT(netfs_collect_stream,
__field(unsigned int, wreq)
__field(unsigned char, stream)
__field(unsigned long long, collected_to)
- __field(unsigned long long, front)
+ __field(unsigned long long, issued_to)
),
TP_fast_assign(
__entry->wreq = wreq->debug_id;
__entry->stream = stream->stream_nr;
__entry->collected_to = stream->collected_to;
- __entry->front = stream->front ? stream->front->start : UINT_MAX;
+ __entry->issued_to = atomic64_read(&wreq->issued_to);
),
- TP_printk("R=%08x[%x:] cto=%llx frn=%llx",
+ TP_printk("R=%08x[%x:] cto=%llx ito=%llx",
__entry->wreq, __entry->stream,
- __entry->collected_to, __entry->front)
+ __entry->collected_to, __entry->issued_to)
);
TRACE_EVENT(netfs_folioq,
* [PATCH 07/26] cachefiles: Fix excess dput() after end_removing()
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, NeilBrown, Paulo Alcantara
When cachefiles_cull() calls cachefiles_bury_object(), the latter eats the
former's ref on the victim dentry that it obtained from
cachefiles_lookup_for_cull(). However, commit 7bb1eb45e43c left the dput
of the victim in place, resulting in occasional:
WARNING: fs/dcache.c:829 at dput.part.0+0xf5/0x110, CPU#7: cachefilesd/11831
cachefiles_cull+0x8c/0xe0 [cachefiles]
cachefiles_daemon_cull+0xcd/0x120 [cachefiles]
cachefiles_daemon_write+0x14e/0x1d0 [cachefiles]
vfs_write+0xc3/0x480
...
reports.
Actually, it's worse than that: cachefiles_bury_object() eats the ref it was
given - and then may continue to use the now-unref'd dentry if it turns out
to be a directory. So simply removing the aberrant dput() is not sufficient.
Fix this by making cachefiles_bury_object() retain the ref itself around
end_removing() if it needs to keep it and then drop the ref before returning.
Fixes: bd6ede8a06e8 ("VFS/nfsd/cachefiles/ovl: introduce start_removing() and end_removing()")
Reported-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: NeilBrown <neil@brown.name>
cc: Paulo Alcantara <pc@manguebit.org>
cc: netfs@lists.linux.dev
cc: linux-afs@lists.infradead.org
cc: linux-fsdevel@vger.kernel.org
---
fs/cachefiles/namei.c | 36 +++++++++++++++++++++---------------
1 file changed, 21 insertions(+), 15 deletions(-)
diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index e5ec90dccc27..20138309733f 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -287,14 +287,14 @@ int cachefiles_bury_object(struct cachefiles_cache *cache,
if (!d_is_dir(rep)) {
ret = cachefiles_unlink(cache, object, dir, rep, why);
end_removing(rep);
-
_leave(" = %d", ret);
return ret;
}
/* directories have to be moved to the graveyard */
_debug("move stale object to graveyard");
- end_removing(rep);
+ dget(rep);
+ end_removing(rep); /* Drops ref on rep */
try_again:
/* first step is to make up a grave dentry in the graveyard */
@@ -304,8 +304,10 @@ int cachefiles_bury_object(struct cachefiles_cache *cache,
/* do the multiway lock magic */
trap = lock_rename(cache->graveyard, dir);
- if (IS_ERR(trap))
- return PTR_ERR(trap);
+ if (IS_ERR(trap)) {
+ ret = PTR_ERR(trap);
+ goto out;
+ }
/* do some checks before getting the grave dentry */
if (rep->d_parent != dir || IS_DEADDIR(d_inode(rep))) {
@@ -313,25 +315,27 @@ int cachefiles_bury_object(struct cachefiles_cache *cache,
* lock */
unlock_rename(cache->graveyard, dir);
_leave(" = 0 [culled?]");
- return 0;
+ ret = 0;
+ goto out;
}
+ ret = -EIO;
if (!d_can_lookup(cache->graveyard)) {
unlock_rename(cache->graveyard, dir);
cachefiles_io_error(cache, "Graveyard no longer a directory");
- return -EIO;
+ goto out;
}
if (trap == rep) {
unlock_rename(cache->graveyard, dir);
cachefiles_io_error(cache, "May not make directory loop");
- return -EIO;
+ goto out;
}
if (d_mountpoint(rep)) {
unlock_rename(cache->graveyard, dir);
cachefiles_io_error(cache, "Mountpoint in cache");
- return -EIO;
+ goto out;
}
grave = lookup_one(&nop_mnt_idmap, &QSTR(nbuffer), cache->graveyard);
@@ -343,11 +347,12 @@ int cachefiles_bury_object(struct cachefiles_cache *cache,
if (PTR_ERR(grave) == -ENOMEM) {
_leave(" = -ENOMEM");
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out;
}
cachefiles_io_error(cache, "Lookup error %ld", PTR_ERR(grave));
- return -EIO;
+ goto out;
}
if (d_is_positive(grave)) {
@@ -362,7 +367,7 @@ int cachefiles_bury_object(struct cachefiles_cache *cache,
unlock_rename(cache->graveyard, dir);
dput(grave);
cachefiles_io_error(cache, "Mountpoint in graveyard");
- return -EIO;
+ goto out;
}
/* target should not be an ancestor of source */
@@ -370,7 +375,7 @@ int cachefiles_bury_object(struct cachefiles_cache *cache,
unlock_rename(cache->graveyard, dir);
dput(grave);
cachefiles_io_error(cache, "May not make directory loop");
- return -EIO;
+ goto out;
}
/* attempt the rename */
@@ -404,8 +409,10 @@ int cachefiles_bury_object(struct cachefiles_cache *cache,
__cachefiles_unmark_inode_in_use(object, d_inode(rep));
unlock_rename(cache->graveyard, dir);
dput(grave);
- _leave(" = 0");
- return 0;
+ _leave(" = %d", ret);
+out:
+ dput(rep);
+ return ret;
}
/*
@@ -812,7 +819,6 @@ int cachefiles_cull(struct cachefiles_cache *cache, struct dentry *dir,
ret = cachefiles_bury_object(cache, NULL, dir, victim,
FSCACHE_OBJECT_WAS_CULLED);
- dput(victim);
if (ret < 0)
goto error;
* [PATCH 08/26] cachefiles: Don't rely on backing fs storage map for most use cases
From: David Howells @ 2026-03-26 10:45 UTC
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Cachefiles currently uses the backing filesystem's idea of what data is
held in a backing file and queries this by means of SEEK_DATA and
SEEK_HOLE. However, this means it does two seek operations on the backing
file for each individual read call it wants to prepare (unless the first
returns -ENXIO). Worse, the backing filesystem is at liberty to insert or
remove blocks of zeros in order to optimise its layout which may cause
false positives and false negatives.
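(For orientation, that per-read probe amounts to something like the
following fragment - illustrative only, not the driver's actual code:)

	/* Two seeks on the backing file per prepared read: find where data
	 * begins at or after pos, then where that data stops. */
	loff_t data = vfs_llseek(file, pos, SEEK_DATA);
	loff_t hole = data < 0 ? data : vfs_llseek(file, data, SEEK_HOLE);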
The problem is that keeping track of what is dirty is tricky (if storing
info in xattrs, which may have limited capacity and must be read and
written as one piece) and expensive (in terms of disk space at least), and
basically duplicates what a filesystem does.
However, the most common write case, in which the application does {
open(O_TRUNC); write(); write(); ... write(); close(); }, where each write
follows directly on from the previous and leaves no gaps in the file, is
reasonably easy to detect and can be noted in the primary xattr as
CACHEFILES_CONTENT_ALL, indicating that we have everything up to the object
size stored.
In this specific case, given that it is known that there are no holes in
the file, there's no need to call SEEK_DATA/HOLE or use any other mechanism
to track the contents. That speeds things up enormously.
Even when it is necessary to use SEEK_DATA/HOLE, it may not be necessary to
call it for each cache read subrequest generated.
Implement this by adding support for the CACHEFILES_CONTENT_ALL content
type (which is defined, but currently unused), which requires a slight
adjustment in how backing files are managed. Specifically, the driver
needs to know how much of the tail block is data and whether storing more
data will create a hole.
To this end, the way that the size of a backing file is managed is changed.
Currently, the backing file is expanded to strictly match the size of the
network file, but this can be changed to carry more useful information.
This makes two pieces of metadata available: xattr.object_size and the
backing file's i_size. Apply the following schema:
(a) i_size is always a multiple of the DIO block size.
(b) i_size is only updated to the end of the highest write stored. This
is used to work out if we are following on without leaving a hole.
(c) xattr.object_size is the size of the network filesystem file cached
in this backing file.
(d) xattr.object_size must point after the start of the last block
(unless both are 0).
(e) If xattr.object_size is at or after the block at the current end of
the backing file (ie. i_size), then we have all the contents of the
block (if xattr.content == CACHEFILES_CONTENT_ALL).
(f) If xattr.object_size is somewhere in the middle of the last block,
then the data following it is invalid and must be ignored.
(g) If data is added to the last block, then that block must be fetched,
modified and rewritten (it must be a buffered write through the
pagecache and not DIO).
(h) Writes to cache are rounded out to blocks on both sides and the
folios used as sources must contain data for any lower gap and must
have been cleared for any upper gap, and so will rewrite any
non-data area in the tail block.
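To illustrate rules (b) and (h), deciding whether a write follows straight
on from what's already stored can then be as simple as this sketch (the
helper name and exact rounding are mine, not the patch's):

	static bool cache_write_follows_on(loff_t cache_i_size, loff_t wpos,
					   unsigned int dio_block_size)
	{
		/* i_size only covers what has actually been written (b) and
		 * writes are rounded out to DIO blocks on both sides (h), so
		 * a write starting at or before the rounded-up end leaves no
		 * hole. */
		return wpos <= round_up(cache_i_size, dio_block_size);
	}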
To implement this, the following changes are made:
(1) cookie->object_size is no longer updated when writes are copied into
the pagecache, but rather only updated when a write request completes.
This prevents an object size miscomparison when checking the xattr
from causing the backing file to be invalidated (opening and marking the
backing file and modifying the pagecache run in parallel).
(2) The cache's current idea of the amount of data that should be stored
in the backing file is kept track of in object->object_size.
Possibly this is redundant with cookie->object_size, but the latter
gets updated in some additional circumstances.
(3) The size of the backing file at the start of a request is now tracked
in struct netfs_cache_resources so that the partial EOF block can be
located and cleaned.
(4) The cache block size is now used consistently rather than using
CACHEFILES_DIO_BLOCK_SIZE (4096).
(5) The backing file size is no longer adjusted when looking up an object.
(6) When shortening a file, if the new size is not block aligned, the part
beyond the new size is cleared. If the file is truncated to zero, the
content_info gets reset to CACHEFILES_CONTENT_NO_DATA.
(7) A new struct, fscache_occupancy, is instituted to track the region
being read. Netfslib allocates it and fills in the start and end of
the region to be read, then calls the ->query_occupancy() method to
find and fill in the extents. It also indicates whether a recorded
extent contains data or just covers a region that's all zeros
(FSCACHE_EXTENT_DATA or FSCACHE_EXTENT_ZERO). A guessed-at shape for
this struct is sketched after this list.
(8) The ->prepare_read() cache method is changed such that, if given, it
just limits the amount that can be read from the cache in one go. It
no longer indicates what source of read should be done; that
information is now obtained from ->query_occupancy().
(9) A new cache method, ->collect_write(), is added that is called when a
contiguous series of writes have completed and a discontiguity or the
end of the request has been hit. It is supplied with the start and
length of the write made to the backing file and can use this
information to update the cache metadata.
(10) cachefiles_query_occupancy() is altered to find the next two "extents"
of data stored in the backing file by doing SEEK_DATA/HOLE between the
bounds set - unless it is known that there are no holes, in which case
a whole-file first extent can be set.
(11) cachefiles_collect_write() is implemented to take the collated write
completion information and use this to update the cache metadata, in
particular working out whether there's now a hole in the backing file
requiring future use of SEEK_DATA/HOLE instead of just assuming the
data is all present.
It also uses fallocate(FALLOC_FL_ZERO_RANGE) to clean the part of a
partial block that extended beyond the old object size. It might be
better to perform a synchronous DIO write for this purpose, but that
would mandate an RMW cycle. Ideally, it should be all zeros anyway,
but, unfortunately, shared-writable mmap can interfere.
(12) cachefiles_begin_operation() is updated to note the current backing
file size and the cache DIO size.
(13) cachefiles_create_tmpfile() no longer expands the backing file when it
creates it.
(14) cachefiles_set_object_xattr() is changed to use object->object_size
rather than cookie->object_size.
(15) cachefiles_check_auxdata() is altered to actually store the content
type and to also set object->object_size. The cachefiles_coherency
tracepoint is also modified to display xattr.object_size.
(16) netfs_read_to_pagecache() is reworked. The cache ->prepare_read()
method is replaced with ->query_occupancy() as the arbiter of what
region of the file is read from where, and that retrieves up to two
occupied extents of the backing file at once.
The cache ->prepare_read() method is now repurposed to be the same as
the equivalent network filesystem method and allows the cache to limit
the size of the read before the iterator is prepared.
netfs_single_dispatch_read() is similarly modified.
(17) netfs_update_i_size() and afs_update_i_size() no longer call
fscache_update_cookie() to update cookie->object_size.
(18) Write collection now collates contiguous sequences of writes to the
cache and calls the cache ->collect_write() method.
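As a guess at the shape of the occupancy record from (7) - the field names
are illustrative, though FSCACHE_EXTENT_DATA/ZERO come from the description
above:

	enum fscache_extent_type {
		FSCACHE_EXTENT_DATA,	/* Extent holds real data */
		FSCACHE_EXTENT_ZERO,	/* Extent is known to be all zeros */
	};

	struct fscache_occupancy {
		unsigned long long	start;		/* Start of region to be read */
		unsigned long long	end;		/* End of region to be read */
		unsigned int		nr_extents;	/* Extents filled in (up to 2) */
		struct {
			unsigned long long		start, end;
			enum fscache_extent_type	type;
		} extent[2];
	};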
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/afs/file.c | 1 -
fs/cachefiles/interface.c | 82 ++-------
fs/cachefiles/internal.h | 10 +-
fs/cachefiles/io.c | 265 +++++++++++++++++++++++-------
fs/cachefiles/namei.c | 19 +--
fs/cachefiles/xattr.c | 20 ++-
fs/netfs/buffered_read.c | 185 +++++++++++++--------
fs/netfs/buffered_write.c | 3 -
fs/netfs/internal.h | 2 +
fs/netfs/read_single.c | 39 +++--
fs/netfs/write_collect.c | 49 +++++-
fs/netfs/write_issue.c | 3 +
include/linux/fscache.h | 17 ++
include/linux/netfs.h | 16 +-
include/trace/events/cachefiles.h | 15 +-
15 files changed, 466 insertions(+), 260 deletions(-)
diff --git a/fs/afs/file.c b/fs/afs/file.c
index f609366fd2ac..424e0c98d67f 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -436,7 +436,6 @@ static void afs_update_i_size(struct inode *inode, loff_t new_i_size)
inode_set_bytes(&vnode->netfs.inode, new_i_size);
}
write_sequnlock(&vnode->cb_lock);
- fscache_update_cookie(afs_vnode_cache(vnode), NULL, &new_i_size);
}
static void afs_netfs_invalidate_cache(struct netfs_io_request *wreq)
diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c
index a08250d244ea..736bfcaa4e1d 100644
--- a/fs/cachefiles/interface.c
+++ b/fs/cachefiles/interface.c
@@ -105,73 +105,6 @@ void cachefiles_put_object(struct cachefiles_object *object,
_leave("");
}
-/*
- * Adjust the size of a cache file if necessary to match the DIO size. We keep
- * the EOF marker a multiple of DIO blocks so that we don't fall back to doing
- * non-DIO for a partial block straddling the EOF, but we also have to be
- * careful of someone expanding the file and accidentally accreting the
- * padding.
- */
-static int cachefiles_adjust_size(struct cachefiles_object *object)
-{
- struct iattr newattrs;
- struct file *file = object->file;
- uint64_t ni_size;
- loff_t oi_size;
- int ret;
-
- ni_size = object->cookie->object_size;
- ni_size = round_up(ni_size, CACHEFILES_DIO_BLOCK_SIZE);
-
- _enter("{OBJ%x},[%llu]",
- object->debug_id, (unsigned long long) ni_size);
-
- if (!file)
- return -ENOBUFS;
-
- oi_size = i_size_read(file_inode(file));
- if (oi_size == ni_size)
- return 0;
-
- inode_lock(file_inode(file));
-
- /* if there's an extension to a partial page at the end of the backing
- * file, we need to discard the partial page so that we pick up new
- * data after it */
- if (oi_size & ~PAGE_MASK && ni_size > oi_size) {
- _debug("discard tail %llx", oi_size);
- newattrs.ia_valid = ATTR_SIZE;
- newattrs.ia_size = oi_size & PAGE_MASK;
- ret = cachefiles_inject_remove_error();
- if (ret == 0)
- ret = notify_change(&nop_mnt_idmap, file->f_path.dentry,
- &newattrs, NULL);
- if (ret < 0)
- goto truncate_failed;
- }
-
- newattrs.ia_valid = ATTR_SIZE;
- newattrs.ia_size = ni_size;
- ret = cachefiles_inject_write_error();
- if (ret == 0)
- ret = notify_change(&nop_mnt_idmap, file->f_path.dentry,
- &newattrs, NULL);
-
-truncate_failed:
- inode_unlock(file_inode(file));
-
- if (ret < 0)
- trace_cachefiles_io_error(NULL, file_inode(file), ret,
- cachefiles_trace_notify_change_error);
- if (ret == -EIO) {
- cachefiles_io_error_obj(object, "Size set failed");
- ret = -ENOBUFS;
- }
-
- _leave(" = %d", ret);
- return ret;
-}
-
/*
* Attempt to look up the nominated node in this cache
*/
@@ -204,7 +137,6 @@ static bool cachefiles_lookup_cookie(struct fscache_cookie *cookie)
spin_lock(&cache->object_list_lock);
list_add(&object->cache_link, &cache->object_list);
spin_unlock(&cache->object_list_lock);
- cachefiles_adjust_size(object);
cachefiles_end_secure(cache, saved_cred);
_leave(" = t");
@@ -238,7 +170,7 @@ static bool cachefiles_shorten_object(struct cachefiles_object *object,
loff_t i_size, dio_size;
int ret;
- dio_size = round_up(new_size, CACHEFILES_DIO_BLOCK_SIZE);
+ dio_size = round_up(new_size, cache->bsize);
i_size = i_size_read(inode);
trace_cachefiles_trunc(object, inode, i_size, dio_size,
@@ -270,6 +202,7 @@ static bool cachefiles_shorten_object(struct cachefiles_object *object,
}
}
+ object->object_size = new_size;
return true;
}
@@ -284,15 +217,20 @@ static void cachefiles_resize_cookie(struct netfs_cache_resources *cres,
struct fscache_cookie *cookie = object->cookie;
const struct cred *saved_cred;
struct file *file = cachefiles_cres_file(cres);
- loff_t old_size = cookie->object_size;
+ unsigned long long i_size = i_size_read(file_inode(file));
- _enter("%llu->%llu", old_size, new_size);
+ _enter("%llu->%llu", i_size, new_size);
- if (new_size < old_size) {
+ if (new_size < i_size) {
+ /* The file is being shrunk - we need to downsize the backing
+ * file and clear the end of the final block.
+ */
cachefiles_begin_secure(cache, &saved_cred);
cachefiles_shorten_object(object, file, new_size);
cachefiles_end_secure(cache, saved_cred);
object->cookie->object_size = new_size;
+ if (new_size == 0)
+ object->content_info = CACHEFILES_CONTENT_NO_DATA;
return;
}
diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index b62cd3e9a18e..00482a13fc48 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -18,8 +18,6 @@
#include <linux/xarray.h>
#include <linux/cachefiles.h>
-#define CACHEFILES_DIO_BLOCK_SIZE 4096
-
struct cachefiles_cache;
struct cachefiles_object;
@@ -68,12 +66,16 @@ struct cachefiles_object {
struct list_head cache_link; /* Link in cache->*_list */
struct file *file; /* The file representing this object */
char *d_name; /* Backing file name */
+ unsigned long flags;
+#define CACHEFILES_OBJECT_USING_TMPFILE 0 /* Have an unlinked tmpfile */
+ unsigned long long object_size; /* Size of the object stored
+ * (independent of cookie->object_size for
+ * coherency reasons)
+ */
int debug_id;
spinlock_t lock;
refcount_t ref;
enum cachefiles_content content_info:8; /* Info about content presence */
- unsigned long flags;
-#define CACHEFILES_OBJECT_USING_TMPFILE 0 /* Have an unlinked tmpfile */
#ifdef CONFIG_CACHEFILES_ONDEMAND
struct cachefiles_ondemand_info *ondemand;
#endif
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index eaf47851c65f..b5ff75697b3e 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -32,6 +32,8 @@ struct cachefiles_kiocb {
u64 b_writing;
};
+#define IS_ERR_VALUE_LL(x) unlikely((x) >= (unsigned long long)-MAX_ERRNO)
+
static inline void cachefiles_put_kiocb(struct cachefiles_kiocb *ki)
{
if (refcount_dec_and_test(&ki->ki_refcnt)) {
@@ -193,60 +195,81 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
}
/*
- * Query the occupancy of the cache in a region, returning where the next chunk
- * of data starts and how long it is.
+ * Query the occupancy of the cache in a region, returning the extent of the
+ * next two chunks of cached data and the next hole.
*/
static int cachefiles_query_occupancy(struct netfs_cache_resources *cres,
- loff_t start, size_t len, size_t granularity,
- loff_t *_data_start, size_t *_data_len)
+ struct fscache_occupancy *occ)
{
struct cachefiles_object *object;
+ struct inode *inode;
struct file *file;
- loff_t off, off2;
-
- *_data_start = -1;
- *_data_len = 0;
+ unsigned long long i_size;
+ loff_t ret;
+ int i;
if (!fscache_wait_for_operation(cres, FSCACHE_WANT_READ))
return -ENOBUFS;
object = cachefiles_cres_object(cres);
file = cachefiles_cres_file(cres);
- granularity = max_t(size_t, object->volume->cache->bsize, granularity);
+ inode = file_inode(file);
+ occ->granularity = object->volume->cache->bsize;
+
+ _enter("%pD,%li,%llx-%llx/%llx",
+ file, inode->i_ino, occ->query_from, occ->query_to,
+ i_size_read(inode));
+
+ if (i_size_read(inode) == 0)
+ goto done;
+
+ switch (object->content_info) {
+ case CACHEFILES_CONTENT_ALL:
+ case CACHEFILES_CONTENT_SINGLE:
+ i_size = i_size_read(inode);
+ if (i_size > occ->query_from) {
+ occ->cached_from[0] = 0;
+ occ->cached_to[0] = i_size;
+ occ->cached_type[0] = FSCACHE_EXTENT_DATA;
+ occ->query_from = ULLONG_MAX;
+ }
+ goto done;
+ default:
+ break;
+ }
- _enter("%pD,%li,%llx,%zx/%llx",
- file, file_inode(file)->i_ino, start, len,
- i_size_read(file_inode(file)));
+ for (i = 0; i < ARRAY_SIZE(occ->cached_from); i++) {
+ ret = cachefiles_inject_read_error();
+ if (ret == 0)
+ ret = file->f_op->llseek(file, occ->query_from, SEEK_DATA);
+ if (IS_ERR_VALUE_LL(ret)) {
+ if (ret != -ENXIO)
+ return ret;
+ occ->query_from = ULLONG_MAX;
+ goto done;
+ }
+ occ->cached_type[i] = FSCACHE_EXTENT_DATA;
+ occ->cached_from[i] = ret;
+ occ->query_from = ret;
+
+ ret = cachefiles_inject_read_error();
+ if (ret == 0)
+ ret = file->f_op->llseek(file, occ->query_from, SEEK_HOLE);
+ if (IS_ERR_VALUE_LL(ret)) {
+ if (ret != -ENXIO)
+ return ret;
+ occ->query_from = ULLONG_MAX;
+ goto done;
+ }
+ occ->cached_to[i] = ret;
+ occ->query_from = ret;
+ if (occ->query_from >= occ->query_to)
+ break;
+ }
- off = cachefiles_inject_read_error();
- if (off == 0)
- off = vfs_llseek(file, start, SEEK_DATA);
- if (off == -ENXIO)
- return -ENODATA; /* Beyond EOF */
- if (off < 0 && off >= (loff_t)-MAX_ERRNO)
- return -ENOBUFS; /* Error. */
- if (round_up(off, granularity) >= start + len)
- return -ENODATA; /* No data in range */
-
- off2 = cachefiles_inject_read_error();
- if (off2 == 0)
- off2 = vfs_llseek(file, off, SEEK_HOLE);
- if (off2 == -ENXIO)
- return -ENODATA; /* Beyond EOF */
- if (off2 < 0 && off2 >= (loff_t)-MAX_ERRNO)
- return -ENOBUFS; /* Error. */
-
- /* Round away partial blocks */
- off = round_up(off, granularity);
- off2 = round_down(off2, granularity);
- if (off2 <= off)
- return -ENODATA;
-
- *_data_start = off;
- if (off2 > start + len)
- *_data_len = len;
- else
- *_data_len = off2 - off;
+done:
+ _debug("query[0] %llx-%llx", occ->cached_from[0], occ->cached_to[0]);
+ _debug("query[1] %llx-%llx", occ->cached_from[1], occ->cached_to[1]);
return 0;
}
@@ -489,18 +512,6 @@ cachefiles_do_prepare_read(struct netfs_cache_resources *cres,
return ret;
}
-/*
- * Prepare a read operation, shortening it to a cached/uncached
- * boundary as appropriate.
- */
-static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *subreq,
- unsigned long long i_size)
-{
- return cachefiles_do_prepare_read(&subreq->rreq->cache_resources,
- subreq->start, &subreq->len, i_size,
- &subreq->flags, subreq->rreq->inode->i_ino);
-}
-
/*
* Prepare an on-demand read operation, shortening it to a cached/uncached
* boundary as appropriate.
@@ -658,9 +669,9 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
wreq->debug_id, subreq->debug_index, start, start + len - 1);
/* We need to start on the cache granularity boundary */
- off = start & (CACHEFILES_DIO_BLOCK_SIZE - 1);
+ off = start & (cache->bsize - 1);
if (off) {
- pre = CACHEFILES_DIO_BLOCK_SIZE - off;
+ pre = cache->bsize - off;
if (pre >= len) {
fscache_count_dio_misfit();
netfs_write_subrequest_terminated(subreq, len);
@@ -674,8 +685,8 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
/* We also need to end on the cache granularity boundary */
if (start + len == wreq->i_size) {
- size_t part = len % CACHEFILES_DIO_BLOCK_SIZE;
- size_t need = CACHEFILES_DIO_BLOCK_SIZE - part;
+ size_t part = len & (cache->bsize - 1);
+ size_t need = cache->bsize - part;
if (part && stream->submit_extendable_to >= need) {
len += need;
@@ -684,7 +695,7 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
}
}
- post = len & (CACHEFILES_DIO_BLOCK_SIZE - 1);
+ post = len & (cache->bsize - 1);
if (post) {
len -= post;
if (len == 0) {
@@ -711,6 +722,134 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
netfs_write_subrequest_terminated, subreq);
}
+/*
+ * Collect the result of buffered writeback to the cache. This includes
+ * copying a read to the cache. Netfslib collates the results, which might
+ * occur out of order, and delivers them to the cache so that it can update its
+ * content record.
+ *
+ * The writes we made are all rounded out at both sides to the nearest DIO
+ * block boundary, so if the final block contains the EOF in the middle of it
+ * (rather than at the end), padding will have been written to the file. The
+ * backing file's filesize will have been updated if the write extended the
+ * file; the filesize may still change due to outstanding subreqs.
+ *
+ * The metadata in the cache file xattr records the size of the object we have
+ * stored, but the cache file EOF only goes up to where we've cached data to
+ * and, furthermore, is rounded up to the nearest DIO block boundary.
+ */
+static void cachefiles_collect_write(struct netfs_io_request *wreq,
+ unsigned long long start, size_t len)
+{
+ struct netfs_cache_resources *cres = &wreq->cache_resources;
+ struct cachefiles_object *object = cachefiles_cres_object(cres);
+ struct cachefiles_cache *cache = object->volume->cache;
+ struct fscache_cookie *cookie = fscache_cres_cookie(cres);
+ struct file *file = cachefiles_cres_file(cres);
+ unsigned long long old_size = cres->cache_i_size;
+ unsigned long long new_size = i_size_read(file_inode(file));
+ unsigned long long data_to = cookie->object_size;
+ unsigned long long end = start + len;
+ int ret;
+
+ _enter("%llx,%zx,%x", start, len, cache->bsize);
+
+ if (WARN_ON(old_size & (cache->bsize - 1)) ||
+ WARN_ON(new_size & (cache->bsize - 1)) ||
+ WARN_ON(start & (cache->bsize - 1)) ||
+ WARN_ON(len & (cache->bsize - 1))) {
+ trace_cachefiles_io_error(object, file_inode(file), -EIO,
+ cachefiles_trace_alignment_error);
+ cachefiles_remove_object_xattr(cache, object, file->f_path.dentry);
+ return;
+ }
+
+ /* Zeroth case: Single monolithic files are handled specially.
+ */
+ if (wreq->origin == NETFS_WRITEBACK_SINGLE) {
+ object->content_info = CACHEFILES_CONTENT_SINGLE;
+ goto update_sizes;
+ }
+
+ /* First case: The backing file was empty. */
+ if (old_size == 0) {
+ if (start == 0)
+ object->content_info = CACHEFILES_CONTENT_ALL;
+ else
+ object->content_info = CACHEFILES_CONTENT_BACKFS_MAP;
+ goto update_sizes;
+ }
+
+ /* Second case: The backing file is entirely within the old object size
+ * and thus there can be no partial tail block to deal with in the
+ * cache file.
+ */
+ if (old_size <= data_to) {
+ if (start > old_size)
+ goto discontiguous;
+ goto update_sizes;
+ }
+
+ /* Third case: The write happened entirely within the bounds of the
+ * current cache file's size.
+ */
+ if (end <= old_size)
+ goto update_sizes;
+
+ /* Fourth case: The write overwrote the partial tail block and extended
+ * the file. We only need to update the object size because netfslib
+ * rounds out/pads cache writes to whole disk blocks.
+ */
+ if (start < old_size)
+ goto update_sizes;
+
+ /* Fifth case: The write started from the end of the whole tail block
+ * and extended the file. Just extend our notion of the filesize.
+ */
+ if (start == old_size && old_size == data_to)
+ goto update_sizes;
+
+ /* Sixth case: The write continued on from the partial tail block and
+ * extended the file. Need to clear the gap.
+ */
+ if (start == old_size && old_size > data_to)
+ goto clear_gap;
+
+discontiguous:
+ /* Seventh case: The write was beyond the EOF on the cache file, so now
+ * there's a hole in the file and we can no longer say in the metadata
+ * that we can assume we have it all. We may also need to clear the
+ * end of the partial tail block.
+ */
+ /* TODO: For the moment, we will have to use SEEK_HOLE/SEEK_DATA. */
+ object->content_info = CACHEFILES_CONTENT_BACKFS_MAP;
+
+clear_gap:
+ /* We need to clear any partial padding that got jumped over. It
+ * *should* be all zeros, but shared-writable mmap exists...
+ */
+ if (old_size > data_to) {
+ trace_cachefiles_trunc(object, file_inode(file), data_to, old_size,
+ cachefiles_trunc_clear_padding);
+ ret = cachefiles_inject_write_error();
+ if (ret == 0)
+ ret = vfs_fallocate(file, FALLOC_FL_ZERO_RANGE,
+ data_to, old_size - data_to);
+ if (ret < 0) {
+ trace_cachefiles_io_error(object, file_inode(file), ret,
+ cachefiles_trace_fallocate_error);
+ cachefiles_io_error_obj(object, "fallocate zero pad failed %d", ret);
+ cachefiles_remove_object_xattr(cache, object, file->f_path.dentry);
+ return;
+ }
+ }
+
+update_sizes:
+ cres->cache_i_size = umax(old_size, end);
+ object->object_size = cookie->object_size;
+ return;
+}
+
/*
* Clean up an operation.
*/
@@ -728,11 +867,11 @@ static const struct netfs_cache_ops cachefiles_netfs_cache_ops = {
.read = cachefiles_read,
.write = cachefiles_write,
.issue_write = cachefiles_issue_write,
- .prepare_read = cachefiles_prepare_read,
.prepare_write = cachefiles_prepare_write,
.prepare_write_subreq = cachefiles_prepare_write_subreq,
.prepare_ondemand_read = cachefiles_prepare_ondemand_read,
.query_occupancy = cachefiles_query_occupancy,
+ .collect_write = cachefiles_collect_write,
};
/*
@@ -742,14 +881,18 @@ bool cachefiles_begin_operation(struct netfs_cache_resources *cres,
enum fscache_want_state want_state)
{
struct cachefiles_object *object = cachefiles_cres_object(cres);
+ struct file *file;
if (!cachefiles_cres_file(cres)) {
cres->ops = &cachefiles_netfs_cache_ops;
if (object->file) {
spin_lock(&object->lock);
- if (!cres->cache_priv2 && object->file)
- cres->cache_priv2 = get_file(object->file);
+ file = object->file;
+ if (!cres->cache_priv2 && file)
+ cres->cache_priv2 = get_file(file);
spin_unlock(&object->lock);
+ cres->cache_i_size = i_size_read(file_inode(file));
+ cres->dio_size = object->volume->cache->bsize;
}
}
diff --git a/fs/cachefiles/namei.c b/fs/cachefiles/namei.c
index 20138309733f..38d730233658 100644
--- a/fs/cachefiles/namei.c
+++ b/fs/cachefiles/namei.c
@@ -449,7 +449,6 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
struct dentry *fan = volume->fanout[(u8)object->cookie->key_hash];
struct file *file;
const struct path parentpath = { .mnt = cache->mnt, .dentry = fan };
- uint64_t ni_size;
long ret;
@@ -481,23 +480,6 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
if (ret < 0)
goto err_unuse;
- ni_size = object->cookie->object_size;
- ni_size = round_up(ni_size, CACHEFILES_DIO_BLOCK_SIZE);
-
- if (ni_size > 0) {
- trace_cachefiles_trunc(object, file_inode(file), 0, ni_size,
- cachefiles_trunc_expand_tmpfile);
- ret = cachefiles_inject_write_error();
- if (ret == 0)
- ret = vfs_truncate(&file->f_path, ni_size);
- if (ret < 0) {
- trace_cachefiles_vfs_error(
- object, file_inode(file), ret,
- cachefiles_trace_trunc_error);
- goto err_unuse;
- }
- }
-
ret = -EINVAL;
if (unlikely(!file->f_op->read_iter) ||
unlikely(!file->f_op->write_iter)) {
@@ -507,6 +489,7 @@ struct file *cachefiles_create_tmpfile(struct cachefiles_object *object)
}
out:
cachefiles_end_secure(cache, saved_cred);
+ object->content_info = CACHEFILES_CONTENT_ALL;
return file;
err_unuse:
diff --git a/fs/cachefiles/xattr.c b/fs/cachefiles/xattr.c
index 52383b1d0ba6..27f969c41eef 100644
--- a/fs/cachefiles/xattr.c
+++ b/fs/cachefiles/xattr.c
@@ -54,7 +54,7 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
if (!buf)
return -ENOMEM;
- buf->object_size = cpu_to_be64(object->cookie->object_size);
+ buf->object_size = cpu_to_be64(object->object_size);
buf->zero_point = 0;
buf->type = CACHEFILES_COOKIE_TYPE_DATA;
buf->content = object->content_info;
@@ -77,6 +77,7 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
trace_cachefiles_vfs_error(object, file_inode(file), ret,
cachefiles_trace_setxattr_error);
trace_cachefiles_coherency(object, file_inode(file)->i_ino,
+ object->object_size,
be64_to_cpup((__be64 *)buf->data),
buf->content,
cachefiles_coherency_set_fail);
@@ -86,6 +87,7 @@ int cachefiles_set_object_xattr(struct cachefiles_object *object)
"Failed to set xattr with error %d", ret);
} else {
trace_cachefiles_coherency(object, file_inode(file)->i_ino,
+ object->object_size,
be64_to_cpup((__be64 *)buf->data),
buf->content,
cachefiles_coherency_set_ok);
@@ -106,6 +108,7 @@ int cachefiles_check_auxdata(struct cachefiles_object *object, struct file *file
unsigned int len = object->cookie->aux_len, tlen;
const void *p = fscache_get_aux(object->cookie);
enum cachefiles_coherency_trace why;
+ unsigned long long obj_size;
ssize_t xlen;
int ret = -ESTALE;
@@ -127,29 +130,33 @@ int cachefiles_check_auxdata(struct cachefiles_object *object, struct file *file
cachefiles_io_error_obj(
object,
"Failed to read aux with error %zd", xlen);
- why = cachefiles_coherency_check_xattr;
+ trace_cachefiles_coherency(object, file_inode(file)->i_ino, 0, 0, 0,
+ cachefiles_coherency_check_xattr);
goto out;
}
+ obj_size = be64_to_cpu(buf->object_size);
if (buf->type != CACHEFILES_COOKIE_TYPE_DATA) {
why = cachefiles_coherency_check_type;
} else if (memcmp(buf->data, p, len) != 0) {
why = cachefiles_coherency_check_aux;
- } else if (be64_to_cpu(buf->object_size) != object->cookie->object_size) {
+ } else if (obj_size != object->cookie->object_size) {
why = cachefiles_coherency_check_objsize;
} else if (buf->content == CACHEFILES_CONTENT_DIRTY) {
// TODO: Begin conflict resolution
pr_warn("Dirty object in cache\n");
why = cachefiles_coherency_check_dirty;
} else {
+ object->content_info = buf->content;
+ object->object_size = obj_size;
why = cachefiles_coherency_check_ok;
ret = 0;
}
-out:
- trace_cachefiles_coherency(object, file_inode(file)->i_ino,
+ trace_cachefiles_coherency(object, file_inode(file)->i_ino, obj_size,
be64_to_cpup((__be64 *)buf->data),
buf->content, why);
+out:
kfree(buf);
return ret;
}
@@ -163,6 +170,9 @@ int cachefiles_remove_object_xattr(struct cachefiles_cache *cache,
{
int ret;
+ trace_cachefiles_coherency(object, d_inode(dentry)->i_ino, 0, 0, 0,
+ cachefiles_coherency_remove);
+
ret = cachefiles_inject_remove_error();
if (ret == 0) {
ret = mnt_want_write(cache->mnt);
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index a8c0d86118c5..aee59ccea257 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -127,21 +127,6 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
return subreq->len;
}
-static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_request *rreq,
- struct netfs_io_subrequest *subreq,
- loff_t i_size)
-{
- struct netfs_cache_resources *cres = &rreq->cache_resources;
- enum netfs_io_source source;
-
- if (!cres->ops)
- return NETFS_DOWNLOAD_FROM_SERVER;
- source = cres->ops->prepare_read(subreq, i_size);
- trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
- return source;
-
-}
-
/*
* Issue a read against the cache.
* - Eats the caller's ref on subreq.
@@ -156,6 +141,19 @@ static void netfs_read_cache_to_pagecache(struct netfs_io_request *rreq,
netfs_cache_read_terminated, subreq);
}
+int netfs_read_query_cache(struct netfs_io_request *rreq, struct fscache_occupancy *occ)
+{
+ struct netfs_cache_resources *cres = &rreq->cache_resources;
+
+ occ->granularity = PAGE_SIZE;
+ if (occ->query_from >= occ->query_to)
+ return 0;
+ if (!cres->ops)
+ return 0;
+ occ->query_from = round_up(occ->query_from, occ->granularity);
+ return cres->ops->query_occupancy(cres, occ);
+}
+
static void netfs_queue_read(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq,
bool last_subreq)
@@ -214,16 +212,55 @@ static void netfs_issue_read(struct netfs_io_request *rreq,
static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
struct readahead_control *ractl)
{
+ struct fscache_occupancy _occ = {
+ .query_from = rreq->start,
+ .query_to = rreq->start + rreq->len,
+ .cached_from[0] = 0,
+ .cached_to[0] = 0,
+ .cached_from[1] = ULLONG_MAX,
+ .cached_to[1] = ULLONG_MAX,
+ };
+ struct fscache_occupancy *occ = &_occ;
struct netfs_inode *ictx = netfs_inode(rreq->inode);
unsigned long long start = rreq->start;
ssize_t size = rreq->len;
int ret = 0;
do {
+ int (*prepare_read)(struct netfs_io_subrequest *subreq) = NULL;
struct netfs_io_subrequest *subreq;
- enum netfs_io_source source = NETFS_SOURCE_UNKNOWN;
+ unsigned long long hole_to, cache_to;
ssize_t slice;
+ /* If we don't have any, find out the next couple of data
+ * extents from the cache, containing or following the
+ * specified start offset. Holes have to be fetched from the
+ * server; data regions from the cache.
+ */
+ hole_to = occ->cached_from[0];
+ cache_to = occ->cached_to[0];
+ if (start >= cache_to) {
+ /* Extent exhausted; shuffle down. */
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(occ->cached_from) - 1; i++) {
+ occ->cached_from[i] = occ->cached_from[i + 1];
+ occ->cached_to[i] = occ->cached_to[i + 1];
+ occ->cached_type[i] = occ->cached_type[i + 1];
+ }
+ occ->cached_from[i] = ULLONG_MAX;
+ occ->cached_to[i] = ULLONG_MAX;
+
+ if (occ->cached_from[0] != ULLONG_MAX)
+ continue;
+
+ /* Get new extents */
+ ret = netfs_read_query_cache(rreq, occ);
+ if (ret < 0)
+ break;
+ continue;
+ }
+
subreq = netfs_alloc_subrequest(rreq);
if (!subreq) {
ret = -ENOMEM;
@@ -233,65 +270,81 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
subreq->start = start;
subreq->len = size;
- source = netfs_cache_prepare_read(rreq, subreq, rreq->i_size);
- subreq->source = source;
- if (source == NETFS_DOWNLOAD_FROM_SERVER) {
- unsigned long long zp = umin(ictx->zero_point, rreq->i_size);
- size_t len = subreq->len;
-
- if (unlikely(rreq->origin == NETFS_READ_SINGLE))
- zp = rreq->i_size;
- if (subreq->start >= zp) {
- subreq->source = source = NETFS_FILL_WITH_ZEROES;
- goto fill_with_zeroes;
+ _debug("rsub %llx %llx-%llx", subreq->start, hole_to, cache_to);
+
+ if (start >= hole_to && start < cache_to) {
+ /* Overlap with a cached region, where the cache may
+ * record a block of zeroes.
+ */
+ _debug("cached s=%llx c=%llx l=%zx", start, cache_to, size);
+ subreq->len = umin(cache_to - start, size);
+ subreq->len = round_up(subreq->len, occ->granularity);
+ if (occ->cached_type[0] == FSCACHE_EXTENT_ZERO) {
+ subreq->source = NETFS_FILL_WITH_ZEROES;
+ netfs_stat(&netfs_n_rh_zero);
+ } else {
+ subreq->source = NETFS_READ_FROM_CACHE;
+ prepare_read = rreq->cache_resources.ops->prepare_read;
}
- if (len > zp - subreq->start)
- len = zp - subreq->start;
- if (len == 0) {
- pr_err("ZERO-LEN READ: R=%08x[%x] l=%zx/%zx s=%llx z=%llx i=%llx",
- rreq->debug_id, subreq->debug_index,
- subreq->len, size,
- subreq->start, ictx->zero_point, rreq->i_size);
- break;
- }
- subreq->len = len;
-
- netfs_stat(&netfs_n_rh_download);
- if (rreq->netfs_ops->prepare_read) {
- ret = rreq->netfs_ops->prepare_read(subreq);
- if (ret < 0) {
- subreq->error = ret;
- /* Not queued - release both refs. */
- netfs_put_subrequest(subreq,
- netfs_sreq_trace_put_cancel);
- netfs_put_subrequest(subreq,
- netfs_sreq_trace_put_cancel);
- break;
- }
- trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
- }
- goto issue;
- }
+ trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
- fill_with_zeroes:
- if (source == NETFS_FILL_WITH_ZEROES) {
+ } else if ((subreq->start >= ictx->zero_point ||
+ subreq->start >= rreq->i_size) &&
+ size > 0) {
+ /* If this range lies beyond the zero-point, that part
+ * can just be cleared locally.
+ */
+ _debug("zero %llx-%llx", start, start + size);
+ subreq->len = size;
subreq->source = NETFS_FILL_WITH_ZEROES;
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ if (rreq->cache_resources.ops)
+ __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
netfs_stat(&netfs_n_rh_zero);
- goto issue;
+ } else {
+ /* Read a cache hole from the server. If any part of
+ * this range lies beyond the zero-point or the EOF,
+ * that part can just be cleared locally.
+ */
+ unsigned long long zlimit = umin(rreq->i_size, ictx->zero_point);
+ unsigned long long limit = min3(zlimit, start + size, hole_to);
+
+ _debug("limit %llx %llx", rreq->i_size, ictx->zero_point);
+ _debug("download %llx-%llx", start, start + size);
+ subreq->len = umin(limit - subreq->start, ULONG_MAX);
+ subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
+ if (rreq->cache_resources.ops)
+ __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
+ netfs_stat(&netfs_n_rh_download);
}
- if (source == NETFS_READ_FROM_CACHE) {
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
- goto issue;
+ if (size == 0) {
+ pr_err("ZERO-LEN READ: R=%08x[%x] l=%zx/%zx s=%llx z=%llx i=%llx",
+ rreq->debug_id, subreq->debug_index,
+ subreq->len, size,
+ subreq->start, ictx->zero_point, rreq->i_size);
+ trace_netfs_sreq(subreq, netfs_sreq_trace_cancel);
+ /* Not queued - release both refs. */
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ break;
}
- pr_err("Unexpected read source %u\n", source);
- WARN_ON_ONCE(1);
- break;
+ rreq->io_streams[0].sreq_max_len = MAX_RW_COUNT;
+ rreq->io_streams[0].sreq_max_segs = INT_MAX;
+
+ if (prepare_read) {
+ ret = prepare_read(subreq);
+ if (ret < 0) {
+ subreq->error = ret;
+ /* Not queued - release both refs. */
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ break;
+ }
+ trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
+ }
- issue:
slice = netfs_prepare_read_iterator(subreq, ractl);
if (slice < 0) {
ret = slice;
@@ -305,6 +358,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
size -= slice;
start += slice;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+
netfs_queue_read(rreq, subreq, size <= 0);
netfs_issue_read(rreq, subreq);
cond_resched();
diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 22a4d61631c9..bce3e7109ec1 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -73,9 +73,6 @@ void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
i_size = i_size_read(inode);
if (end > i_size) {
i_size_write(inode, end);
-#if IS_ENABLED(CONFIG_FSCACHE)
- fscache_update_cookie(ctx->cache, NULL, &end);
-#endif
gap = SECTOR_SIZE - (i_size & (SECTOR_SIZE - 1));
if (copied > gap) {
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index d436e20d3418..2fcf31de5f2c 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -23,6 +23,8 @@
/*
* buffered_read.c
*/
+int netfs_read_query_cache(struct netfs_io_request *rreq,
+ struct fscache_occupancy *occ);
void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error);
int netfs_prefetch_for_write(struct file *file, struct folio *folio,
size_t offset, size_t len);
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
index d0e23bc42445..d87a03859ebd 100644
--- a/fs/netfs/read_single.c
+++ b/fs/netfs/read_single.c
@@ -58,20 +58,6 @@ static int netfs_single_begin_cache_read(struct netfs_io_request *rreq, struct n
return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx));
}
-static void netfs_single_cache_prepare_read(struct netfs_io_request *rreq,
- struct netfs_io_subrequest *subreq)
-{
- struct netfs_cache_resources *cres = &rreq->cache_resources;
-
- if (!cres->ops) {
- subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
- return;
- }
- subreq->source = cres->ops->prepare_read(subreq, rreq->i_size);
- trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
-
-}
-
static void netfs_single_read_cache(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq)
{
@@ -90,6 +76,14 @@ static void netfs_single_read_cache(struct netfs_io_request *rreq,
static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
{
struct netfs_io_stream *stream = &rreq->io_streams[0];
+ struct fscache_occupancy occ = {
+ .query_from = 0,
+ .query_to = rreq->len,
+ .cached_from[0] = ULLONG_MAX,
+ .cached_to[0] = ULLONG_MAX,
+ .cached_from[1] = ULLONG_MAX,
+ .cached_to[1] = ULLONG_MAX,
+ };
struct netfs_io_subrequest *subreq;
int ret = 0;
@@ -97,11 +91,19 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
if (!subreq)
return -ENOMEM;
- subreq->source = NETFS_SOURCE_UNKNOWN;
+ subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
subreq->start = 0;
subreq->len = rreq->len;
subreq->io_iter = rreq->buffer.iter;
+ /* Try to use the cache if the cache content matches the size of the
+ * remote file.
+ */
+ netfs_read_query_cache(rreq, &occ);
+ if (occ.cached_from[0] == 0 &&
+ occ.cached_to[0] == rreq->len)
+ subreq->source = NETFS_READ_FROM_CACHE;
+
__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
spin_lock(&rreq->lock);
@@ -111,7 +113,6 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
smp_store_release(&stream->active, true);
spin_unlock(&rreq->lock);
- netfs_single_cache_prepare_read(rreq, subreq);
switch (subreq->source) {
case NETFS_DOWNLOAD_FROM_SERVER:
netfs_stat(&netfs_n_rh_download);
@@ -125,6 +126,12 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
rreq->submitted += subreq->len;
break;
case NETFS_READ_FROM_CACHE:
+ if (rreq->cache_resources.ops->prepare_read) {
+ ret = rreq->cache_resources.ops->prepare_read(subreq);
+ if (ret < 0)
+ goto cancel;
+ }
+
trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
netfs_single_read_cache(rreq, subreq);
rreq->submitted += subreq->len;
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index b194447f4b11..a839735d5675 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -185,6 +185,16 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
wreq->buffer.first_tail_slot = slot;
}
+static void netfs_cache_collect(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream)
+{
+ struct netfs_cache_resources *cres = &wreq->cache_resources;
+
+ if (cres->ops && cres->ops->collect_write)
+ cres->ops->collect_write(wreq, wreq->cache_coll_to,
+ stream->collected_to - wreq->cache_coll_to);
+}
+
/*
* Collect and assess the results of various write subrequests. We may need to
* retry some of the results - or even do an RMW cycle for content crypto.
@@ -238,6 +248,11 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
if (stream->collected_to < front->start) {
trace_netfs_collect_gap(wreq, stream, issued_to, 'F');
stream->collected_to = front->start;
+ if (stream->source == NETFS_WRITE_TO_CACHE) {
+ if (wreq->cache_coll_to < stream->collected_to)
+ netfs_cache_collect(wreq, stream);
+ wreq->cache_coll_to = stream->collected_to;
+ }
}
/* Stall if the front is still undergoing I/O. */
@@ -261,8 +276,19 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
if (test_bit(NETFS_SREQ_FAILED, &front->flags)) {
stream->failed = true;
stream->error = front->error;
- if (stream->source == NETFS_UPLOAD_TO_SERVER)
+ switch (stream->source) {
+ case NETFS_UPLOAD_TO_SERVER:
mapping_set_error(wreq->mapping, front->error);
+ break;
+ case NETFS_WRITE_TO_CACHE:
+ if (wreq->cache_coll_to < stream->collected_to)
+ netfs_cache_collect(wreq, stream);
+ wreq->cache_coll_to = stream->collected_to + front->len;
+ break;
+ default:
+ WARN_ON(1);
+ break;
+ }
notes |= NEED_REASSESS | SAW_FAILURE;
break;
}
@@ -355,6 +381,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
*/
bool netfs_write_collection(struct netfs_io_request *wreq)
{
+ struct netfs_io_stream *cstream = &wreq->io_streams[1];
struct netfs_inode *ictx = netfs_inode(wreq->inode);
size_t transferred;
bool transferred_valid = false;
@@ -390,13 +417,19 @@ bool netfs_write_collection(struct netfs_io_request *wreq)
wreq->transferred = transferred;
trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
- if (wreq->io_streams[1].active &&
- wreq->io_streams[1].failed &&
- ictx->ops->invalidate_cache) {
- /* Cache write failure doesn't prevent writeback completion
- * unless we're in disconnected mode.
- */
- ictx->ops->invalidate_cache(wreq);
+ if (cstream->active) {
+ if (cstream->failed) {
+ if (ictx->ops->invalidate_cache)
+ /* Cache write failure doesn't prevent
+ * writeback completion unless we're in
+ * disconnected mode.
+ */
+ ictx->ops->invalidate_cache(wreq);
+ } else {
+ if (wreq->cache_coll_to < cstream->collected_to)
+ netfs_cache_collect(wreq, cstream);
+ wreq->cache_coll_to = cstream->collected_to;
+ }
}
_debug("finished");
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 2db688f94125..2de6b8621e11 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -112,6 +112,8 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
goto nomem;
wreq->cleaned_to = wreq->start;
+ if (wreq->cache_resources.dio_size > 1)
+ wreq->cache_coll_to = round_down(wreq->start, wreq->cache_resources.dio_size);
wreq->io_streams[0].stream_nr = 0;
wreq->io_streams[0].source = NETFS_UPLOAD_TO_SERVER;
@@ -263,6 +265,7 @@ void netfs_issue_write(struct netfs_io_request *wreq,
if (!subreq)
return;
+
stream->construct = NULL;
subreq->io_iter.count = subreq->len;
netfs_do_issue_write(stream, subreq);
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index 58fdb9605425..850d20241075 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -147,6 +147,23 @@ struct fscache_cookie {
};
};
+enum fscache_extent_type {
+ FSCACHE_EXTENT_DATA,
+ FSCACHE_EXTENT_ZERO,
+} __mode(byte);
+
+/*
+ * Cache occupancy information.
+ */
+struct fscache_occupancy {
+ unsigned long long query_from; /* Point to query from */
+ unsigned long long query_to; /* Point to query to */
+ unsigned long long cached_from[2]; /* Point at which cache extents start */
+ unsigned long long cached_to[2]; /* Point at which cache extents end */
+ unsigned int granularity; /* Granularity desired */
+ enum fscache_extent_type cached_type[2]; /* Type of cache extent */
+};
+
/*
* slow-path functions for when there is actually caching available, and the
* netfs does actually have a valid token
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index ba17ac5bf356..77238bc4a712 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -22,6 +22,7 @@
enum netfs_sreq_ref_trace;
typedef struct mempool mempool_t;
+struct fscache_occupancy;
struct folio_queue;
/**
@@ -159,8 +160,10 @@ struct netfs_cache_resources {
const struct netfs_cache_ops *ops;
void *cache_priv;
void *cache_priv2;
+ unsigned long long cache_i_size; /* Initial size of cache file */
unsigned int debug_id; /* Cookie debug ID */
unsigned int inval_counter; /* object->inval_counter at begin_op */
+ unsigned int dio_size; /* DIO block size */
};
/*
@@ -250,6 +253,7 @@ struct netfs_io_request {
unsigned long long start; /* Start position */
atomic64_t issued_to; /* Write issuer folio cursor */
unsigned long long collected_to; /* Point we've collected to */
+ unsigned long long cache_coll_to; /* Point the cache has collected to */
unsigned long long cleaned_to; /* Position we've cleaned folios to */
unsigned long long abandon_to; /* Position to abandon folios to */
pgoff_t no_unlock_folio; /* Don't unlock this folio after read */
@@ -354,8 +358,7 @@ struct netfs_cache_ops {
/* Prepare a read operation, shortening it to a cached/uncached
* boundary as appropriate.
*/
- enum netfs_io_source (*prepare_read)(struct netfs_io_subrequest *subreq,
- unsigned long long i_size);
+ int (*prepare_read)(struct netfs_io_subrequest *subreq);
/* Prepare a write subrequest, working out if we're allowed to do it
* and finding out the maximum amount of data to gather before
@@ -383,8 +386,13 @@ struct netfs_cache_ops {
* next chunk of data starts and how long it is.
*/
int (*query_occupancy)(struct netfs_cache_resources *cres,
- loff_t start, size_t len, size_t granularity,
- loff_t *_data_start, size_t *_data_len);
+ struct fscache_occupancy *occ);
+
+ /* Collect the result of buffered writeback to the cache.
+ * This includes copying a read to the cache.
+ */
+ void (*collect_write)(struct netfs_io_request *wreq,
+ unsigned long long start, size_t len);
};
/* High-level read API. */
diff --git a/include/trace/events/cachefiles.h b/include/trace/events/cachefiles.h
index a743b2a35ea7..4bba6fda1f8b 100644
--- a/include/trace/events/cachefiles.h
+++ b/include/trace/events/cachefiles.h
@@ -56,6 +56,7 @@ enum cachefiles_coherency_trace {
cachefiles_coherency_check_ok,
cachefiles_coherency_check_type,
cachefiles_coherency_check_xattr,
+ cachefiles_coherency_remove,
cachefiles_coherency_set_fail,
cachefiles_coherency_set_ok,
cachefiles_coherency_vol_check_cmp,
@@ -67,6 +68,7 @@ enum cachefiles_coherency_trace {
};
enum cachefiles_trunc_trace {
+ cachefiles_trunc_clear_padding,
cachefiles_trunc_dio_adjust,
cachefiles_trunc_expand_tmpfile,
cachefiles_trunc_shrink,
@@ -84,6 +86,7 @@ enum cachefiles_prepare_read_trace {
};
enum cachefiles_error_trace {
+ cachefiles_trace_alignment_error,
cachefiles_trace_fallocate_error,
cachefiles_trace_getxattr_error,
cachefiles_trace_link_error,
@@ -144,6 +147,7 @@ enum cachefiles_error_trace {
EM(cachefiles_coherency_check_ok, "OK ") \
EM(cachefiles_coherency_check_type, "BAD type") \
EM(cachefiles_coherency_check_xattr, "BAD xatt") \
+ EM(cachefiles_coherency_remove, "REMOVE ") \
EM(cachefiles_coherency_set_fail, "SET fail") \
EM(cachefiles_coherency_set_ok, "SET ok ") \
EM(cachefiles_coherency_vol_check_cmp, "VOL BAD cmp ") \
@@ -154,6 +158,7 @@ enum cachefiles_error_trace {
E_(cachefiles_coherency_vol_set_ok, "VOL SET ok ")
#define cachefiles_trunc_traces \
+ EM(cachefiles_trunc_clear_padding, "CLRPAD") \
EM(cachefiles_trunc_dio_adjust, "DIOADJ") \
EM(cachefiles_trunc_expand_tmpfile, "EXPTMP") \
E_(cachefiles_trunc_shrink, "SHRINK")
@@ -169,6 +174,7 @@ enum cachefiles_error_trace {
E_(cachefiles_trace_read_seek_nxio, "seek-enxio")
#define cachefiles_error_traces \
+ EM(cachefiles_trace_alignment_error, "align") \
EM(cachefiles_trace_fallocate_error, "fallocate") \
EM(cachefiles_trace_getxattr_error, "getxattr") \
EM(cachefiles_trace_link_error, "link") \
@@ -379,12 +385,12 @@ TRACE_EVENT(cachefiles_rename,
TRACE_EVENT(cachefiles_coherency,
TP_PROTO(struct cachefiles_object *obj,
- ino_t ino,
+ ino_t ino, unsigned long long obj_size,
u64 disk_aux,
enum cachefiles_content content,
enum cachefiles_coherency_trace why),
- TP_ARGS(obj, ino, disk_aux, content, why),
+ TP_ARGS(obj, ino, obj_size, disk_aux, content, why),
/* Note that obj may be NULL */
TP_STRUCT__entry(
@@ -392,6 +398,7 @@ TRACE_EVENT(cachefiles_coherency,
__field(enum cachefiles_coherency_trace, why)
__field(enum cachefiles_content, content)
__field(u64, ino)
+ __field(u64, obj_size)
__field(u64, aux)
__field(u64, disk_aux)
),
@@ -401,14 +408,16 @@ TRACE_EVENT(cachefiles_coherency,
__entry->why = why;
__entry->content = content;
__entry->ino = ino;
+ __entry->obj_size = obj_size;
__entry->aux = be64_to_cpup((__be64 *)obj->cookie->inline_aux);
__entry->disk_aux = disk_aux;
),
- TP_printk("o=%08x %s B=%llx c=%u aux=%llx dsk=%llx",
+ TP_printk("o=%08x %s B=%llx oz=%llx c=%u aux=%llx dsk=%llx",
__entry->obj,
__print_symbolic(__entry->why, cachefiles_coherency_traces),
__entry->ino,
+ __entry->obj_size,
__entry->content,
__entry->aux,
__entry->disk_aux)
* [PATCH 09/26] mm: Make readahead store folio count in readahead_control
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (7 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 08/26] cachefiles: Don't rely on backing fs storage map for most use cases David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 10/26] netfs: Bulk load the readahead-provided folios up front David Howells
` (16 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, linux-mm
Make readahead store folio count in readahead_control so that the
filesystem can know in advance how many folios it needs to keep track of.
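As an illustration of what this enables (a hypothetical sketch - the
names my_readahead() and my_alloc_slots() are made up here; the real
consumer is added in the next patch):
	static void my_readahead(struct readahead_control *ractl)
	{
		/* By the time ->readahead() is invoked, the readahead
		 * core has counted the batch of folios into _nr_folios,
		 * so per-folio tracking space can be allocated up front.
		 */
		unsigned int nr = ractl->_nr_folios;
		if (my_alloc_slots(nr) < 0)
			return;
		/* ... extract and submit the folios ... */
	}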
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: netfs@lists.linux.dev
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
---
include/linux/pagemap.h | 1 +
mm/readahead.c | 4 ++++
2 files changed, 5 insertions(+)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index ec442af3f886..3c3e34e5fe8a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1361,6 +1361,7 @@ struct readahead_control {
struct file_ra_state *ra;
/* private: use the readahead_* accessors instead */
pgoff_t _index;
+ unsigned int _nr_folios;
unsigned int _nr_pages;
unsigned int _batch_count;
bool dropbehind;
diff --git a/mm/readahead.c b/mm/readahead.c
index 7b05082c89ea..53134c9d9fe9 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -292,6 +292,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
if (i == mark)
folio_set_readahead(folio);
ractl->_workingset |= folio_test_workingset(folio);
+ ractl->_nr_folios++;
ractl->_nr_pages += min_nrpages;
i += min_nrpages;
}
@@ -459,6 +460,7 @@ static inline int ra_alloc_folio(struct readahead_control *ractl, pgoff_t index,
return err;
}
+ ractl->_nr_folios++;
ractl->_nr_pages += 1UL << order;
ractl->_workingset |= folio_test_workingset(folio);
return 0;
@@ -802,6 +804,7 @@ void readahead_expand(struct readahead_control *ractl,
ractl->_workingset = true;
psi_memstall_enter(&ractl->_pflags);
}
+ ractl->_nr_folios++;
ractl->_nr_pages += min_nrpages;
ractl->_index = folio->index;
}
@@ -831,6 +834,7 @@ void readahead_expand(struct readahead_control *ractl,
ractl->_workingset = true;
psi_memstall_enter(&ractl->_pflags);
}
+ ractl->_nr_folios++;
ractl->_nr_pages += min_nrpages;
if (ra) {
ra->size += min_nrpages;
* [PATCH 10/26] netfs: Bulk load the readahead-provided folios up front
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (8 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 09/26] mm: Make readahead store folio count in readahead_control David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 11/26] Add a function to kmap one page of a multipage bio_vec David Howells
` (15 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, linux-mm
Load all the folios provided by the VM for readahead up front into the
folio queue. With the number of folios known in advance, the folio queue
can be fully allocated first and the loading can then happen in one go
inside the RCU read lock. The folio refs acquired from readahead are
dropped in bulk once the first subrequest is dispatched, as dropping
them is quite a slow operation.
This simplifies the buffer handling later and isn't noticeably slower as
the xarray doesn't need to be modified and the folios are all already
pre-locked.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: netfs@lists.linux.dev
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/buffered_read.c | 95 +++++++++++++++++++++-------------
fs/netfs/rolling_buffer.c | 75 +++++++++++++++++++++++++++
include/linux/netfs.h | 1 +
include/linux/rolling_buffer.h | 3 ++
include/trace/events/netfs.h | 1 +
5 files changed, 138 insertions(+), 37 deletions(-)
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index aee59ccea257..abdc990faaa2 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -54,6 +54,40 @@ static void netfs_rreq_expand(struct netfs_io_request *rreq,
}
}
+/*
+ * Drop the folio refs acquired from the readahead API.
+ */
+static void netfs_bulk_drop_ra_refs(struct netfs_io_request *rreq)
+{
+ struct folio_batch fbatch;
+ struct folio *folio;
+ pgoff_t nr_pages = DIV_ROUND_UP(rreq->len, PAGE_SIZE);
+ pgoff_t first = rreq->start / PAGE_SIZE;
+ XA_STATE(xas, &rreq->mapping->i_pages, first);
+
+ folio_batch_init(&fbatch);
+
+ rcu_read_lock();
+
+ xas_for_each(&xas, folio, first + nr_pages - 1) {
+ if (xas_retry(&xas, folio))
+ continue;
+
+ if (!folio_batch_add(&fbatch, folio))
+ folio_batch_release(&fbatch);
+ }
+
+ rcu_read_unlock();
+ folio_batch_release(&fbatch);
+ trace_netfs_rreq(rreq, netfs_rreq_trace_ra_put_ref);
+}
+
+static void netfs_maybe_bulk_drop_ra_refs(struct netfs_io_request *rreq)
+{
+ if (test_and_clear_bit(NETFS_RREQ_NEED_PUT_RA_REFS, &rreq->flags))
+ netfs_bulk_drop_ra_refs(rreq);
+}
+
/*
* Begin an operation, and fetch the stored zero point value from the cookie if
* available.
@@ -74,12 +108,8 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in
*
* Returns the limited size if successful and -ENOMEM if insufficient memory
* available.
- *
- * [!] NOTE: This must be run in the same thread as ->issue_read() was called
- * in as we access the readahead_control struct.
*/
-static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
- struct readahead_control *ractl)
+static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
size_t rsize = subreq->len;
@@ -87,28 +117,6 @@ static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq,
if (subreq->source == NETFS_DOWNLOAD_FROM_SERVER)
rsize = umin(rsize, rreq->io_streams[0].sreq_max_len);
- if (ractl) {
- /* If we don't have sufficient folios in the rolling buffer,
- * extract a folioq's worth from the readahead region at a time
- * into the buffer. Note that this acquires a ref on each page
- * that we will need to release later - but we don't want to do
- * that until after we've started the I/O.
- */
- struct folio_batch put_batch;
-
- folio_batch_init(&put_batch);
- while (rreq->submitted < subreq->start + rsize) {
- ssize_t added;
-
- added = rolling_buffer_load_from_ra(&rreq->buffer, ractl,
- &put_batch);
- if (added < 0)
- return added;
- rreq->submitted += added;
- }
- folio_batch_release(&put_batch);
- }
-
subreq->len = rsize;
if (unlikely(rreq->io_streams[0].sreq_max_segs)) {
size_t limit = netfs_limit_iter(&rreq->buffer.iter, 0, rsize,
@@ -209,8 +217,7 @@ static void netfs_issue_read(struct netfs_io_request *rreq,
* slicing up the region to be read according to available cache blocks and
* network rsize.
*/
-static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
- struct readahead_control *ractl)
+static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
{
struct fscache_occupancy _occ = {
.query_from = rreq->start,
@@ -345,7 +352,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
}
- slice = netfs_prepare_read_iterator(subreq, ractl);
+ slice = netfs_prepare_read_iterator(subreq);
if (slice < 0) {
ret = slice;
subreq->error = ret;
@@ -362,6 +369,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq,
netfs_queue_read(rreq, subreq, size <= 0);
netfs_issue_read(rreq, subreq);
+ netfs_maybe_bulk_drop_ra_refs(rreq);
cond_resched();
} while (size > 0);
@@ -395,6 +403,7 @@ void netfs_readahead(struct readahead_control *ractl)
struct netfs_io_request *rreq;
struct netfs_inode *ictx = netfs_inode(ractl->mapping->host);
unsigned long long start = readahead_pos(ractl);
+ ssize_t added;
size_t size = readahead_length(ractl);
int ret;
@@ -415,11 +424,23 @@ void netfs_readahead(struct readahead_control *ractl)
netfs_rreq_expand(rreq, ractl);
- rreq->submitted = rreq->start;
- if (rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST) < 0)
+ /* Load the folios to be read into the rolling buffer. Note that this
+ * acquires a ref on each folio that we will need to release later -
+ * but we don't want to do that until after we've started the I/O.
+ */
+ added = rolling_buffer_bulk_load_from_ra(&rreq->buffer, ractl, rreq->debug_id);
+ if (added < 0) {
+ ret = added;
goto cleanup_free;
- netfs_read_to_pagecache(rreq, ractl);
+ }
+ __set_bit(NETFS_RREQ_NEED_PUT_RA_REFS, &rreq->flags);
+
+ rreq->submitted = rreq->start + added;
+ rreq->cleaned_to = rreq->start;
+ rreq->front_folio_order = folio_order(rreq->buffer.tail->vec.folios[0]);
+ netfs_read_to_pagecache(rreq);
+ netfs_maybe_bulk_drop_ra_refs(rreq);
return netfs_put_request(rreq, netfs_rreq_trace_put_return);
cleanup_free:
@@ -511,7 +532,7 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
iov_iter_bvec(&rreq->buffer.iter, ITER_DEST, bvec, i, rreq->len);
rreq->submitted = rreq->start + flen;
- netfs_read_to_pagecache(rreq, NULL);
+ netfs_read_to_pagecache(rreq);
if (sink)
folio_put(sink);
@@ -580,7 +601,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
if (ret < 0)
goto discard;
- netfs_read_to_pagecache(rreq, NULL);
+ netfs_read_to_pagecache(rreq);
ret = netfs_wait_for_read(rreq);
netfs_put_request(rreq, netfs_rreq_trace_put_return);
return ret < 0 ? ret : 0;
@@ -737,7 +758,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
if (ret < 0)
goto error_put;
- netfs_read_to_pagecache(rreq, NULL);
+ netfs_read_to_pagecache(rreq);
ret = netfs_wait_for_read(rreq);
if (ret < 0)
goto error;
@@ -802,7 +823,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
if (ret < 0)
goto error_put;
- netfs_read_to_pagecache(rreq, NULL);
+ netfs_read_to_pagecache(rreq);
ret = netfs_wait_for_read(rreq);
netfs_put_request(rreq, netfs_rreq_trace_put_return);
return ret < 0 ? ret : 0;
diff --git a/fs/netfs/rolling_buffer.c b/fs/netfs/rolling_buffer.c
index a17fbf9853a4..292011c1cacb 100644
--- a/fs/netfs/rolling_buffer.c
+++ b/fs/netfs/rolling_buffer.c
@@ -149,6 +149,81 @@ ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
return size;
}
+/*
+ * Decant the entire list of folios to read into a rolling buffer.
+ */
+ssize_t rolling_buffer_bulk_load_from_ra(struct rolling_buffer *roll,
+ struct readahead_control *ractl,
+ unsigned int rreq_id)
+{
+ XA_STATE(xas, &ractl->mapping->i_pages, ractl->_index);
+ struct folio_queue *fq;
+ struct folio *folio;
+ ssize_t loaded = 0;
+ int nr, slot = 0, npages = 0;
+
+ /* First allocate all the folioqs we're going to need to avoid having
+ * to deal with ENOMEM later.
+ */
+ nr = ractl->_nr_folios;
+ do {
+ fq = netfs_folioq_alloc(rreq_id, GFP_KERNEL,
+ netfs_trace_folioq_make_space);
+ if (!fq) {
+ rolling_buffer_clear(roll);
+ return -ENOMEM;
+ }
+ fq->prev = roll->head;
+ if (!roll->tail)
+ roll->tail = fq;
+ else
+ roll->head->next = fq;
+ roll->head = fq;
+
+ nr -= folioq_nr_slots(fq);
+ } while (nr > 0);
+
+ rcu_read_lock();
+
+ fq = roll->tail;
+ xas_for_each(&xas, folio, ractl->_index + ractl->_nr_pages - 1) {
+ unsigned int order;
+
+ if (xas_retry(&xas, folio))
+ continue;
+ VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+
+ order = folio_order(folio);
+ fq->orders[slot] = order;
+ fq->vec.folios[slot] = folio;
+ loaded += PAGE_SIZE << order;
+ npages += 1 << order;
+ trace_netfs_folio(folio, netfs_folio_trace_read);
+
+ slot++;
+ if (slot >= folioq_nr_slots(fq)) {
+ fq->vec.nr = slot;
+ fq = fq->next;
+ if (!fq) {
+ WARN_ON_ONCE(npages < readahead_count(ractl));
+ break;
+ }
+ slot = 0;
+ }
+ }
+
+ rcu_read_unlock();
+
+ if (fq)
+ fq->vec.nr = slot;
+
+ WRITE_ONCE(roll->iter.count, loaded);
+ iov_iter_folio_queue(&roll->iter, ITER_DEST, roll->tail, 0, 0, loaded);
+ ractl->_index += npages;
+ ractl->_nr_pages -= npages;
+ return loaded;
+}
+
/*
* Append a folio to the rolling buffer.
*/
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 77238bc4a712..cc56b6512769 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -280,6 +280,7 @@ struct netfs_io_request {
#define NETFS_RREQ_FOLIO_COPY_TO_CACHE 10 /* Copy current folio to cache from read */
#define NETFS_RREQ_UPLOAD_TO_SERVER 11 /* Need to write to the server */
#define NETFS_RREQ_USE_IO_ITER 12 /* Use ->io_iter rather than ->i_pages */
+#define NETFS_RREQ_NEED_PUT_RA_REFS 13 /* Need to put the folio refs RA gave us */
#define NETFS_RREQ_USE_PGPRIV2 31 /* [DEPRECATED] Use PG_private_2 to mark
* write to cache on read */
const struct netfs_request_ops *netfs_ops;
diff --git a/include/linux/rolling_buffer.h b/include/linux/rolling_buffer.h
index ac15b1ffdd83..b35ef43f325f 100644
--- a/include/linux/rolling_buffer.h
+++ b/include/linux/rolling_buffer.h
@@ -48,6 +48,9 @@ int rolling_buffer_make_space(struct rolling_buffer *roll);
ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
struct readahead_control *ractl,
struct folio_batch *put_batch);
+ssize_t rolling_buffer_bulk_load_from_ra(struct rolling_buffer *roll,
+ struct readahead_control *ractl,
+ unsigned int rreq_id);
ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
unsigned int flags);
struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll);
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index cbe28211106c..b8236f9e940e 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -59,6 +59,7 @@
EM(netfs_rreq_trace_free, "FREE ") \
EM(netfs_rreq_trace_intr, "INTR ") \
EM(netfs_rreq_trace_ki_complete, "KI-CMPL") \
+ EM(netfs_rreq_trace_ra_put_ref, "RA-PUT ") \
EM(netfs_rreq_trace_recollect, "RECLLCT") \
EM(netfs_rreq_trace_redirty, "REDIRTY") \
EM(netfs_rreq_trace_resubmit, "RESUBMT") \
* [PATCH 11/26] Add a function to kmap one page of a multipage bio_vec
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (9 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 10/26] netfs: Bulk load the readahead-provided folios up front David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 12/26] iov_iter: Add a segmented queue of bio_vec[] David Howells
` (14 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, linux-block
Add a function to kmap one page of a multipage bio_vec by offset (which is
added to the offset in the bio_vec internally). The caller is responsible
for calculating how much of the page is then available.
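As an illustration (a sketch only, not code from this patch; 'bvec', 'offset'
and 'len' are hypothetical locals), zeroing a span of a multipage bvec might
look like:

	while (len) {
		size_t poff = (bvec->bv_offset + offset) % PAGE_SIZE;
		size_t part = umin(len, PAGE_SIZE - poff);
		void *p = kmap_local_bvec(bvec, offset);

		/* Only the rest of this page is mapped. */
		memset(p, 0, part);
		kunmap_local(p);
		offset += part;
		len -= part;
	}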
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Jens Axboe <axboe@kernel.dk>
cc: linux-block@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
include/linux/bvec.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 06fb60471aaf..9788bfd52818 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -308,4 +308,25 @@ static inline phys_addr_t bvec_phys(const struct bio_vec *bvec)
return page_to_phys(bvec->bv_page) + bvec->bv_offset;
}
+/**
+ * kmap_local_bvec - Map part of a bvec into the kernel virtual address space
+ * @bvec: bvec to map
+ * @offset: Offset into bvec
+ *
+ * Map the page containing the byte at @offset into the kernel virtual address
+ * space. The caller is responsible for making sure this doesn't overrun.
+ *
+ * Call kunmap_local on the returned address to unmap.
+ */
+static inline void *kmap_local_bvec(struct bio_vec *bvec, size_t offset)
+{
+#ifdef CONFIG_HIGHMEM
+ offset += bvec->bv_offset;
+
+ return kmap_local_page(bvec->bv_page + offset / PAGE_SIZE) + offset % PAGE_SIZE;
+#else
+ return bvec_virt(bvec) + offset;
+#endif
+}
+
#endif /* __LINUX_BVEC_H */
* [PATCH 12/26] iov_iter: Add a segmented queue of bio_vec[]
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (10 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 11/26] Add a function to kmap one page of a multipage bio_vec David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 13/26] netfs: Add some tools for managing bvecq chains David Howells
` (13 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, linux-block
Add the concept of a segmented queue of bio_vec[] arrays. This allows an
indefinite quantity of elements to be handled and allows things like
network filesystems and crypto drivers to glue bits on the ends without
having to reallocate the array.
The bvecq struct that defines each segment also carries capacity/usage
information along with flags indicating whether the constituent memory
regions need freeing or unpinning and the file position of the first
element in a segment. The bvecq structs are refcounted to allow a queue to
be extracted in batches and split between a number of subrequests.
The bvecq can have the bio_vec[] it manages allocated inline with it, but
this is not required. A flag indicates when this is the case, as comparing
->bv to ->__bv is not sufficient to detect it.
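For instance, a node with an inline array might be set up like this (a rough
sketch; the slot count of 8 is arbitrary):

	bq = kzalloc(struct_size(bq, __bv, 8), GFP_KERNEL);
	if (bq) {
		refcount_set(&bq->ref, 1);
		bq->bv = bq->__bv;	/* array allocated with the node */
		bq->inline_bv = true;
		bq->max_segs = 8;
	}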
Add an iterator type ITER_BVECQ for it. This is intended to replace
ITER_FOLIOQ (and ITER_XARRAY).
Note that the prev pointer is only really needed for iov_iter_revert() and
could be dispensed with if struct iov_iter contained the head information
as well as the current point.
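To give a feel for how the pieces fit together (again only a sketch; 'a',
'b', 'buf' and 'total_len' are hypothetical), two populated nodes can be
chained and then walked with the new iterator:

	struct iov_iter iter;
	size_t copied;

	a->next = b;			/* NULL-terminated, not circular */
	b->prev = a;
	refcount_inc(&b->ref);		/* each node holds a ref on ->next */

	iov_iter_bvec_queue(&iter, READ, a, 0, 0, total_len);
	copied = copy_to_iter(buf, total_len, &iter);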
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Jens Axboe <axboe@kernel.dk>
cc: linux-block@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
include/linux/bvecq.h | 46 ++++++
include/linux/iov_iter.h | 63 +++++++-
include/linux/uio.h | 11 ++
lib/iov_iter.c | 288 ++++++++++++++++++++++++++++++++++++-
lib/scatterlist.c | 66 +++++++++
lib/tests/kunit_iov_iter.c | 180 +++++++++++++++++++++++
6 files changed, 649 insertions(+), 5 deletions(-)
create mode 100644 include/linux/bvecq.h
diff --git a/include/linux/bvecq.h b/include/linux/bvecq.h
new file mode 100644
index 000000000000..462125af1cc7
--- /dev/null
+++ b/include/linux/bvecq.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Implementation of a segmented queue of bio_vec[].
+ *
+ * Copyright (C) 2026 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#ifndef _LINUX_BVECQ_H
+#define _LINUX_BVECQ_H
+
+#include <linux/bvec.h>
+
+/*
+ * Segmented bio_vec queue.
+ *
+ * These can be linked together to form messages of indefinite length and
+ * iterated over with an ITER_BVECQ iterator. The list is non-circular; next
+ * and prev are NULL at the ends.
+ *
+ * The bv pointer points to the segment array; this may be __bv if allocated
+ * together. The caller is responsible for determining whether or not this is
+ * the case as the array pointed to by bv may be follow on directly from the
+ * bvecq by accident of allocation (ie. ->bv == ->__bv is *not* sufficient to
+ * determine this).
+ *
+ * The file position and discontiguity flag allow non-contiguous data sets to
+ * be chained together, but still teased apart without the need to convert the
+ * info in the bio_vec back into a folio pointer.
+ */
+struct bvecq {
+ struct bvecq *next; /* Next bvec in the list or NULL */
+ struct bvecq *prev; /* Prev bvec in the list or NULL */
+ unsigned long long fpos; /* File position */
+ refcount_t ref;
+ u32 priv; /* Private data */
+ u16 nr_segs; /* Number of elements in bv[] used */
+ u16 max_segs; /* Number of elements allocated in bv[] */
+ bool inline_bv:1; /* T if __bv[] is being used */
+ bool free:1; /* T if the pages need freeing */
+ bool unpin:1; /* T if the pages need unpinning, not freeing */
+ bool discontig:1; /* T if not contiguous with previous bvecq */
+ struct bio_vec *bv; /* Pointer to array of page fragments */
+ struct bio_vec __bv[]; /* Default array (if ->inline_bv) */
+};
+
+#endif /* _LINUX_BVECQ_H */
diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
index f9a17fbbd398..999607ece481 100644
--- a/include/linux/iov_iter.h
+++ b/include/linux/iov_iter.h
@@ -9,7 +9,7 @@
#define _LINUX_IOV_ITER_H
#include <linux/uio.h>
-#include <linux/bvec.h>
+#include <linux/bvecq.h>
#include <linux/folio_queue.h>
typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len,
@@ -141,6 +141,59 @@ size_t iterate_bvec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
return progress;
}
+/*
+ * Handle ITER_BVECQ.
+ */
+static __always_inline
+size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+ iov_step_f step)
+{
+ const struct bvecq *bq = iter->bvecq;
+ unsigned int slot = iter->bvecq_slot;
+ size_t progress = 0, skip = iter->iov_offset;
+
+ if (slot == bq->nr_segs) {
+ /* The iterator may have been extended. */
+ bq = bq->next;
+ slot = 0;
+ }
+
+ do {
+ const struct bio_vec *bvec = &bq->bv[slot];
+ struct page *page = bvec->bv_page + (bvec->bv_offset + skip) / PAGE_SIZE;
+ size_t part, remain, consumed;
+ size_t poff = (bvec->bv_offset + skip) % PAGE_SIZE;
+ void *base;
+
+ part = umin(umin(bvec->bv_len - skip, PAGE_SIZE - poff), len);
+ base = kmap_local_page(page) + poff;
+ remain = step(base, progress, part, priv, priv2);
+ kunmap_local(base);
+ consumed = part - remain;
+ len -= consumed;
+ progress += consumed;
+ skip += consumed;
+ if (skip >= bvec->bv_len) {
+ skip = 0;
+ slot++;
+ if (slot >= bq->nr_segs) {
+ if (!bq->next)
+ break;
+ bq = bq->next;
+ slot = 0;
+ }
+ }
+ if (remain)
+ break;
+ } while (len);
+
+ iter->bvecq_slot = slot;
+ iter->bvecq = bq;
+ iter->iov_offset = skip;
+ iter->count -= progress;
+ return progress;
+}
+
/*
* Handle ITER_FOLIOQ.
*/
@@ -306,6 +359,8 @@ size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv,
return iterate_bvec(iter, len, priv, priv2, step);
if (iov_iter_is_kvec(iter))
return iterate_kvec(iter, len, priv, priv2, step);
+ if (iov_iter_is_bvecq(iter))
+ return iterate_bvecq(iter, len, priv, priv2, step);
if (iov_iter_is_folioq(iter))
return iterate_folioq(iter, len, priv, priv2, step);
if (iov_iter_is_xarray(iter))
@@ -342,8 +397,8 @@ size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv,
* buffer is presented in segments, which for kernel iteration are broken up by
* physical pages and mapped, with the mapped address being presented.
*
- * [!] Note This will only handle BVEC, KVEC, FOLIOQ, XARRAY and DISCARD-type
- * iterators; it will not handle UBUF or IOVEC-type iterators.
+ * [!] Note This will only handle BVEC, KVEC, BVECQ, FOLIOQ, XARRAY and
+ * DISCARD-type iterators; it will not handle UBUF or IOVEC-type iterators.
*
* A step functions, @step, must be provided, one for handling mapped kernel
* addresses and the other is given user addresses which have the potential to
@@ -370,6 +425,8 @@ size_t iterate_and_advance_kernel(struct iov_iter *iter, size_t len, void *priv,
return iterate_bvec(iter, len, priv, priv2, step);
if (iov_iter_is_kvec(iter))
return iterate_kvec(iter, len, priv, priv2, step);
+ if (iov_iter_is_bvecq(iter))
+ return iterate_bvecq(iter, len, priv, priv2, step);
if (iov_iter_is_folioq(iter))
return iterate_folioq(iter, len, priv, priv2, step);
if (iov_iter_is_xarray(iter))
diff --git a/include/linux/uio.h b/include/linux/uio.h
index a9bc5b3067e3..aa50d348dfcc 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -27,6 +27,7 @@ enum iter_type {
ITER_BVEC,
ITER_KVEC,
ITER_FOLIOQ,
+ ITER_BVECQ,
ITER_XARRAY,
ITER_DISCARD,
};
@@ -69,6 +70,7 @@ struct iov_iter {
const struct kvec *kvec;
const struct bio_vec *bvec;
const struct folio_queue *folioq;
+ const struct bvecq *bvecq;
struct xarray *xarray;
void __user *ubuf;
};
@@ -78,6 +80,7 @@ struct iov_iter {
union {
unsigned long nr_segs;
u8 folioq_slot;
+ u16 bvecq_slot;
loff_t xarray_start;
};
};
@@ -150,6 +153,11 @@ static inline bool iov_iter_is_folioq(const struct iov_iter *i)
return iov_iter_type(i) == ITER_FOLIOQ;
}
+static inline bool iov_iter_is_bvecq(const struct iov_iter *i)
+{
+ return iov_iter_type(i) == ITER_BVECQ;
+}
+
static inline bool iov_iter_is_xarray(const struct iov_iter *i)
{
return iov_iter_type(i) == ITER_XARRAY;
@@ -298,6 +306,9 @@ void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count);
void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
const struct folio_queue *folioq,
unsigned int first_slot, unsigned int offset, size_t count);
+void iov_iter_bvec_queue(struct iov_iter *i, unsigned int direction,
+ const struct bvecq *bvecq,
+ unsigned int first_slot, unsigned int offset, size_t count);
void iov_iter_xarray(struct iov_iter *i, unsigned int direction, struct xarray *xarray,
loff_t start, size_t count);
ssize_t iov_iter_get_pages2(struct iov_iter *i, struct page **pages,
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 0a63c7fba313..df8d037894b1 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -571,6 +571,39 @@ static void iov_iter_folioq_advance(struct iov_iter *i, size_t size)
i->folioq = folioq;
}
+static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by)
+{
+ const struct bvecq *bq = i->bvecq;
+ unsigned int slot = i->bvecq_slot;
+
+ if (!i->count)
+ return;
+ i->count -= by;
+
+ if (slot >= bq->nr_segs) {
+ bq = bq->next;
+ slot = 0;
+ }
+
+ by += i->iov_offset; /* From beginning of current segment. */
+ do {
+ size_t len = bq->bv[slot].bv_len;
+
+ if (likely(by < len))
+ break;
+ by -= len;
+ slot++;
+ if (slot >= bq->nr_segs && bq->next) {
+ bq = bq->next;
+ slot = 0;
+ }
+ } while (by);
+
+ i->iov_offset = by;
+ i->bvecq_slot = slot;
+ i->bvecq = bq;
+}
+
void iov_iter_advance(struct iov_iter *i, size_t size)
{
if (unlikely(i->count < size))
@@ -585,6 +618,8 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
iov_iter_bvec_advance(i, size);
} else if (iov_iter_is_folioq(i)) {
iov_iter_folioq_advance(i, size);
+ } else if (iov_iter_is_bvecq(i)) {
+ iov_iter_bvecq_advance(i, size);
} else if (iov_iter_is_discard(i)) {
i->count -= size;
}
@@ -617,6 +652,32 @@ static void iov_iter_folioq_revert(struct iov_iter *i, size_t unroll)
i->folioq = folioq;
}
+static void iov_iter_bvecq_revert(struct iov_iter *i, size_t unroll)
+{
+ const struct bvecq *bq = i->bvecq;
+ unsigned int slot = i->bvecq_slot;
+
+ for (;;) {
+ size_t len;
+
+ if (slot == 0) {
+ bq = bq->prev;
+ slot = bq->nr_segs;
+ }
+ slot--;
+
+ len = bq->bv[slot].bv_len;
+ if (unroll <= len) {
+ i->iov_offset = len - unroll;
+ break;
+ }
+ unroll -= len;
+ }
+
+ i->bvecq_slot = slot;
+ i->bvecq = bq;
+}
+
void iov_iter_revert(struct iov_iter *i, size_t unroll)
{
if (!unroll)
@@ -651,6 +712,9 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll)
} else if (iov_iter_is_folioq(i)) {
i->iov_offset = 0;
iov_iter_folioq_revert(i, unroll);
+ } else if (iov_iter_is_bvecq(i)) {
+ i->iov_offset = 0;
+ iov_iter_bvecq_revert(i, unroll);
} else { /* same logics for iovec and kvec */
const struct iovec *iov = iter_iov(i);
while (1) {
@@ -678,9 +742,12 @@ size_t iov_iter_single_seg_count(const struct iov_iter *i)
if (iov_iter_is_bvec(i))
return min(i->count, i->bvec->bv_len - i->iov_offset);
}
+ if (!i->count)
+ return 0;
if (unlikely(iov_iter_is_folioq(i)))
- return !i->count ? 0 :
- umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count);
+ return umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count);
+ if (unlikely(iov_iter_is_bvecq(i)))
+ return min(i->count, i->bvecq->bv[i->bvecq_slot].bv_len - i->iov_offset);
return i->count;
}
EXPORT_SYMBOL(iov_iter_single_seg_count);
@@ -747,6 +814,35 @@ void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
}
EXPORT_SYMBOL(iov_iter_folio_queue);
+/**
+ * iov_iter_bvec_queue - Initialise an I/O iterator to use a segmented bvec queue
+ * @i: The iterator to initialise.
+ * @direction: The direction of the transfer.
+ * @bvecq: The starting point in the bvec queue.
+ * @first_slot: The first slot in the bvec queue to use
+ * @offset: The offset into the bvec in the first slot to start at
+ * @count: The size of the I/O buffer in bytes.
+ *
+ * Set up an I/O iterator to either draw data out of the memory described by
+ * the bvec queue or to inject data into it. The pages *must* be prevented
+ * from evaporation, either by taking a ref on them or locking them by the
+ * caller.
+ */
+void iov_iter_bvec_queue(struct iov_iter *i, unsigned int direction,
+ const struct bvecq *bvecq, unsigned int first_slot,
+ unsigned int offset, size_t count)
+{
+ WARN_ON(direction & ~(READ | WRITE));
+ *i = (struct iov_iter) {
+ .iter_type = ITER_BVECQ,
+ .data_source = direction,
+ .bvecq = bvecq,
+ .bvecq_slot = first_slot,
+ .count = count,
+ .iov_offset = offset,
+ };
+}
+EXPORT_SYMBOL(iov_iter_bvec_queue);
+
/**
* iov_iter_xarray - Initialise an I/O iterator to use the pages in an xarray
* @i: The iterator to initialise.
@@ -839,6 +935,37 @@ static unsigned long iov_iter_alignment_bvec(const struct iov_iter *i)
return res;
}
+static unsigned long iov_iter_alignment_bvecq(const struct iov_iter *iter)
+{
+ const struct bvecq *bq;
+ unsigned long res = 0;
+ unsigned int slot = iter->bvecq_slot;
+ size_t skip = iter->iov_offset;
+ size_t size = iter->count;
+
+ if (!size)
+ return res;
+
+ for (bq = iter->bvecq; bq; bq = bq->next) {
+ for (; slot < bq->nr_segs; slot++) {
+ const struct bio_vec *bvec = &bq->bv[slot];
+ size_t part = umin(bvec->bv_len - skip, size);
+
+ res |= bvec->bv_offset + skip;
+ res |= part;
+
+ size -= part;
+ if (size == 0)
+ return res;
+ skip = 0;
+ }
+
+ slot = 0;
+ }
+
+ return res;
+}
+
unsigned long iov_iter_alignment(const struct iov_iter *i)
{
if (likely(iter_is_ubuf(i))) {
@@ -858,6 +985,8 @@ unsigned long iov_iter_alignment(const struct iov_iter *i)
/* With both xarray and folioq types, we're dealing with whole folios. */
if (iov_iter_is_folioq(i))
return i->iov_offset | i->count;
+ if (iov_iter_is_bvecq(i))
+ return iov_iter_alignment_bvecq(i);
if (iov_iter_is_xarray(i))
return (i->xarray_start + i->iov_offset) | i->count;
@@ -1124,6 +1253,7 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
return iter_folioq_get_pages(i, pages, maxsize, maxpages, start);
if (iov_iter_is_xarray(i))
return iter_xarray_get_pages(i, pages, maxsize, maxpages, start);
+ WARN_ON_ONCE(iov_iter_is_bvecq(i));
return -EFAULT;
}
@@ -1192,6 +1322,36 @@ static int bvec_npages(const struct iov_iter *i, int maxpages)
return npages;
}
+static size_t iov_npages_bvecq(const struct iov_iter *iter, size_t maxpages)
+{
+ const struct bvecq *bq;
+ unsigned int slot = iter->bvecq_slot;
+ size_t npages = 0;
+ size_t skip = iter->iov_offset;
+ size_t size = iter->count;
+
+ for (bq = iter->bvecq; bq; bq = bq->next) {
+ for (; slot < bq->nr_segs; slot++) {
+ const struct bio_vec *bvec = &bq->bv[slot];
+ size_t offs = (bvec->bv_offset + skip) % PAGE_SIZE;
+ size_t part = umin(bvec->bv_len - skip, size);
+
+ npages += DIV_ROUND_UP(offs + part, PAGE_SIZE);
+ if (npages >= maxpages)
+ goto out;
+
+ size -= part;
+ if (!size)
+ goto out;
+ skip = 0;
+ }
+
+ slot = 0;
+ }
+out:
+ return umin(npages, maxpages);
+}
+
int iov_iter_npages(const struct iov_iter *i, int maxpages)
{
if (unlikely(!i->count))
@@ -1211,6 +1371,8 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE);
return min(npages, maxpages);
}
+ if (iov_iter_is_bvecq(i))
+ return iov_npages_bvecq(i, maxpages);
if (iov_iter_is_xarray(i)) {
unsigned offset = (i->xarray_start + i->iov_offset) % PAGE_SIZE;
int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE);
@@ -1554,6 +1716,124 @@ static ssize_t iov_iter_extract_folioq_pages(struct iov_iter *i,
return extracted;
}
+/*
+ * Extract a list of virtually contiguous pages from an ITER_BVECQ iterator.
+ * This does not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter,
+ struct page ***pages, size_t maxsize,
+ unsigned int maxpages,
+ iov_iter_extraction_t extraction_flags,
+ size_t *offset0)
+{
+ const struct bvecq *bvecq = iter->bvecq;
+ struct page **p;
+ unsigned int seg = iter->bvecq_slot, count = 0, nr = 0;
+ size_t extracted = 0, offset = iter->iov_offset;
+
+ if (seg >= bvecq->nr_segs) {
+ bvecq = bvecq->next;
+ if (WARN_ON_ONCE(!bvecq))
+ return 0;
+ seg = 0;
+ }
+
+ /* First, we count the run of virtually contiguous pages. */
+ do {
+ const struct bio_vec *bv = &bvecq->bv[seg];
+ size_t boff = bv->bv_offset, blen = bv->bv_len;
+
+ if (!bv->bv_page)
+ blen = 0;
+ if (extracted > 0 && boff % PAGE_SIZE)
+ break;
+
+ if (offset < blen) {
+ size_t part = umin(maxsize - extracted, blen - offset);
+ size_t poff = (boff + offset) % PAGE_SIZE;
+ size_t pcount = DIV_ROUND_UP(poff + part, PAGE_SIZE);
+
+ offset += part;
+ extracted += part;
+ count += pcount;
+ if ((boff + blen) % PAGE_SIZE)
+ break;
+ }
+
+ if (offset >= blen) {
+ offset = 0;
+ seg++;
+ if (seg >= bvecq->nr_segs) {
+ if (!bvecq->next) {
+ WARN_ON_ONCE(extracted < iter->count);
+ break;
+ }
+ bvecq = bvecq->next;
+ seg = 0;
+ }
+ }
+ } while (count < maxpages && extracted < maxsize);
+
+ maxpages = umin(maxpages, count);
+
+ if (!*pages) {
+ *pages = kvmalloc_array(maxpages, sizeof(struct page *), GFP_KERNEL);
+ if (!*pages)
+ return -ENOMEM;
+ }
+
+ p = *pages;
+
+ /* Now transcribe the page pointers. */
+ extracted = 0;
+ bvecq = iter->bvecq;
+ offset = iter->iov_offset;
+ seg = iter->bvecq_slot;
+
+ do {
+ const struct bio_vec *bv = &bvecq->bv[seg];
+ size_t boff = bv->bv_offset, blen = bv->bv_len;
+
+ if (!bv->bv_page)
+ blen = 0;
+
+ if (offset < blen) {
+ size_t part = umin(maxsize - extracted, blen - offset);
+ size_t poff = (boff + offset) % PAGE_SIZE;
+ size_t pix = (boff + offset) / PAGE_SIZE;
+
+ if (poff + part > PAGE_SIZE)
+ part = PAGE_SIZE - poff;
+
+ if (!extracted)
+ *offset0 = poff;
+
+ p[nr++] = bv->bv_page + pix;
+ offset += part;
+ extracted += part;
+ }
+
+ if (offset >= blen) {
+ offset = 0;
+ seg++;
+ if (seg >= bvecq->nr_segs) {
+ if (!bvecq->next) {
+ WARN_ON_ONCE(extracted < iter->count);
+ break;
+ }
+ bvecq = bvecq->next;
+ seg = 0;
+ }
+ }
+ } while (nr < maxpages && extracted < maxsize);
+
+ iter->bvecq = bvecq;
+ iter->bvecq_slot = seg;
+ iter->iov_offset = offset;
+ iter->count -= extracted;
+ return extracted;
+}
+
/*
* Extract a list of contiguous pages from an ITER_XARRAY iterator. This does not
* get references on the pages, nor does it get a pin on them.
@@ -1838,6 +2118,10 @@ ssize_t iov_iter_extract_pages(struct iov_iter *i,
return iov_iter_extract_folioq_pages(i, pages, maxsize,
maxpages, extraction_flags,
offset0);
+ if (iov_iter_is_bvecq(i))
+ return iov_iter_extract_bvecq_pages(i, pages, maxsize,
+ maxpages, extraction_flags,
+ offset0);
if (iov_iter_is_xarray(i))
return iov_iter_extract_xarray_pages(i, pages, maxsize,
maxpages, extraction_flags,
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index d773720d11bf..03e3883a1a2d 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -10,6 +10,7 @@
#include <linux/highmem.h>
#include <linux/kmemleak.h>
#include <linux/bvec.h>
+#include <linux/bvecq.h>
#include <linux/uio.h>
#include <linux/folio_queue.h>
@@ -1328,6 +1329,68 @@ static ssize_t extract_folioq_to_sg(struct iov_iter *iter,
return ret;
}
+/*
+ * Extract up to sg_max page fragments from a BVECQ-type iterator and add them to
+ * the scatterlist. The pages are not pinned.
+ */
+static ssize_t extract_bvecq_to_sg(struct iov_iter *iter,
+ ssize_t maxsize,
+ struct sg_table *sgtable,
+ unsigned int sg_max,
+ iov_iter_extraction_t extraction_flags)
+{
+ const struct bvecq *bvecq = iter->bvecq;
+ struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+ unsigned int seg = iter->bvecq_slot;
+ ssize_t ret = 0;
+ size_t offset = iter->iov_offset;
+
+ if (seg >= bvecq->nr_segs) {
+ bvecq = bvecq->next;
+ if (WARN_ON_ONCE(!bvecq))
+ return 0;
+ seg = 0;
+ }
+
+ do {
+ const struct bio_vec *bv = &bvecq->bv[seg];
+ size_t blen = bv->bv_len;
+
+ if (!bv->bv_page)
+ blen = 0;
+
+ if (offset < blen) {
+ size_t part = umin(maxsize - ret, blen - offset);
+
+ sg_set_page(sg, bv->bv_page, part, bv->bv_offset + offset);
+ sgtable->nents++;
+ sg++;
+ sg_max--;
+ offset += part;
+ ret += part;
+ }
+
+ if (offset >= blen) {
+ offset = 0;
+ seg++;
+ if (seg >= bvecq->nr_segs) {
+ if (!bvecq->next) {
+ WARN_ON_ONCE(ret < iter->count);
+ break;
+ }
+ bvecq = bvecq->next;
+ seg = 0;
+ }
+ }
+ } while (sg_max > 0 && ret < maxsize);
+
+ iter->bvecq = bvecq;
+ iter->bvecq_slot = seg;
+ iter->iov_offset = offset;
+ iter->count -= ret;
+ return ret;
+}
+
/*
* Extract up to sg_max folios from an XARRAY-type iterator and add them to
* the scatterlist. The pages are not pinned.
@@ -1426,6 +1489,9 @@ ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
case ITER_FOLIOQ:
return extract_folioq_to_sg(iter, maxsize, sgtable, sg_max,
extraction_flags);
+ case ITER_BVECQ:
+ return extract_bvecq_to_sg(iter, maxsize, sgtable, sg_max,
+ extraction_flags);
case ITER_XARRAY:
return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max,
extraction_flags);
diff --git a/lib/tests/kunit_iov_iter.c b/lib/tests/kunit_iov_iter.c
index bb847e5010eb..5bc941f64343 100644
--- a/lib/tests/kunit_iov_iter.c
+++ b/lib/tests/kunit_iov_iter.c
@@ -12,6 +12,7 @@
#include <linux/mm.h>
#include <linux/uio.h>
#include <linux/bvec.h>
+#include <linux/bvecq.h>
#include <linux/folio_queue.h>
#include <kunit/test.h>
@@ -536,6 +537,183 @@ static void __init iov_kunit_copy_from_folioq(struct kunit *test)
KUNIT_SUCCEED(test);
}
+static void iov_kunit_destroy_bvecq(void *data)
+{
+ struct bvecq *bq, *next;
+
+ for (bq = data; bq; bq = next) {
+ next = bq->next;
+ for (int i = 0; i < bq->nr_segs; i++)
+ if (bq->bv[i].bv_page)
+ put_page(bq->bv[i].bv_page);
+ kfree(bq);
+ }
+}
+
+static struct bvecq *iov_kunit_alloc_bvecq(struct kunit *test, unsigned int max_segs)
+{
+ struct bvecq *bq;
+
+ bq = kzalloc(struct_size(bq, __bv, max_segs), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bq);
+ bq->bv = bq->__bv; /* kzalloc() leaves ->bv NULL otherwise */
+ bq->inline_bv = true;
+ bq->max_segs = max_segs;
+ return bq;
+}
+
+static struct bvecq *iov_kunit_create_bvecq(struct kunit *test, unsigned int max_segs)
+{
+ struct bvecq *bq;
+
+ bq = iov_kunit_alloc_bvecq(test, max_segs);
+ kunit_add_action_or_reset(test, iov_kunit_destroy_bvecq, bq);
+ return bq;
+}
+
+static void __init iov_kunit_load_bvecq(struct kunit *test,
+ struct iov_iter *iter, int dir,
+ struct bvecq *bq_head,
+ struct page **pages, size_t npages)
+{
+ struct bvecq *bq = bq_head;
+ size_t size = 0;
+
+ for (int i = 0; i < npages; i++) {
+ if (bq->nr_segs >= bq->max_segs) {
+ bq->next = iov_kunit_alloc_bvecq(test, 8);
+ bq->next->prev = bq;
+ bq = bq->next;
+ }
+ bvec_set_page(&bq->bv[bq->nr_segs], pages[i], PAGE_SIZE, 0);
+ bq->nr_segs++;
+ size += PAGE_SIZE;
+ }
+ iov_iter_bvec_queue(iter, dir, bq_head, 0, 0, size);
+}
+
+/*
+ * Test copying to a ITER_BVECQ-type iterator.
+ */
+static void __init iov_kunit_copy_to_bvecq(struct kunit *test)
+{
+ const struct kvec_test_range *pr;
+ struct iov_iter iter;
+ struct bvecq *bq;
+ struct page **spages, **bpages;
+ u8 *scratch, *buffer;
+ size_t bufsize, npages, size, copied;
+ int i, patt;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ bq = iov_kunit_create_bvecq(test, 8);
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ for (i = 0; i < bufsize; i++)
+ scratch[i] = pattern(i);
+
+ buffer = iov_kunit_create_buffer(test, &bpages, npages);
+ memset(buffer, 0, bufsize);
+
+ iov_kunit_load_bvecq(test, &iter, READ, bq, bpages, npages);
+
+ i = 0;
+ for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ size = pr->to - pr->from;
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+
+ iov_iter_bvec_queue(&iter, READ, bq, 0, 0, pr->to);
+ iov_iter_advance(&iter, pr->from);
+ copied = copy_to_iter(scratch + i, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ i += size;
+ if (test->status == KUNIT_FAILURE)
+ goto stop;
+ }
+
+ /* Build the expected image in the scratch buffer. */
+ patt = 0;
+ memset(scratch, 0, bufsize);
+ for (pr = kvec_test_ranges; pr->from >= 0; pr++)
+ for (i = pr->from; i < pr->to; i++)
+ scratch[i] = pattern(patt++);
+
+ /* Compare the images */
+ for (i = 0; i < bufsize; i++) {
+ KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
+ if (buffer[i] != scratch[i])
+ return;
+ }
+
+stop:
+ KUNIT_SUCCEED(test);
+}
+
+/*
+ * Test copying from a ITER_BVECQ-type iterator.
+ */
+static void __init iov_kunit_copy_from_bvecq(struct kunit *test)
+{
+ const struct kvec_test_range *pr;
+ struct iov_iter iter;
+ struct bvecq *bq;
+ struct page **spages, **bpages;
+ u8 *scratch, *buffer;
+ size_t bufsize, npages, size, copied;
+ int i, j;
+
+ bufsize = 0x100000;
+ npages = bufsize / PAGE_SIZE;
+
+ bq = iov_kunit_create_bvecq(test, 8);
+
+ buffer = iov_kunit_create_buffer(test, &bpages, npages);
+ for (i = 0; i < bufsize; i++)
+ buffer[i] = pattern(i);
+
+ scratch = iov_kunit_create_buffer(test, &spages, npages);
+ memset(scratch, 0, bufsize);
+
+ iov_kunit_load_bvecq(test, &iter, READ, bq, bpages, npages);
+
+ i = 0;
+ for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ size = pr->to - pr->from;
+ KUNIT_ASSERT_LE(test, pr->to, bufsize);
+
+ iov_iter_bvec_queue(&iter, WRITE, bq, 0, 0, pr->to);
+ iov_iter_advance(&iter, pr->from);
+ copied = copy_from_iter(scratch + i, size, &iter);
+
+ KUNIT_EXPECT_EQ(test, copied, size);
+ KUNIT_EXPECT_EQ(test, iter.count, 0);
+ i += size;
+ }
+
+ /* Build the expected image in the main buffer. */
+ i = 0;
+ memset(buffer, 0, bufsize);
+ for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
+ for (j = pr->from; j < pr->to; j++) {
+ buffer[i++] = pattern(j);
+ if (i >= bufsize)
+ goto stop;
+ }
+ }
+stop:
+
+ /* Compare the images */
+ for (i = 0; i < bufsize; i++) {
+ KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
+ if (scratch[i] != buffer[i])
+ return;
+ }
+
+ KUNIT_SUCCEED(test);
+}
+
static void iov_kunit_destroy_xarray(void *data)
{
struct xarray *xarray = data;
@@ -1016,6 +1194,8 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_copy_from_bvec),
KUNIT_CASE(iov_kunit_copy_to_folioq),
KUNIT_CASE(iov_kunit_copy_from_folioq),
+ KUNIT_CASE(iov_kunit_copy_to_bvecq),
+ KUNIT_CASE(iov_kunit_copy_from_bvecq),
KUNIT_CASE(iov_kunit_copy_to_xarray),
KUNIT_CASE(iov_kunit_copy_from_xarray),
KUNIT_CASE(iov_kunit_extract_pages_kvec),
* [PATCH 13/26] netfs: Add some tools for managing bvecq chains
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (11 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 12/26] iov_iter: Add a segmented queue of bio_vec[] David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 14/26] netfs: Add a function to extract from an iter into a bvecq David Howells
` (12 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Provide a selection of tools for managing bvec queue chains. This
includes:
(1) Allocation, prepopulation, expansion, shortening and refcounting of
bvecqs and bvecq chains.
This can be used to do things like creating an encryption buffer in
cifs or a directory content buffer in afs. The memory segments will
be appropriately disposed of according to the flags on the bvecq (see
the sketch below).
(2) Management of a bvecq chain as a rolling buffer and the management of
positions within it.
(3) Loading folios, slicing chains and clearing content.
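By way of example (a sketch only; the buffer size is arbitrary), a transient
bounce buffer might be handled like this:

	struct bvecq_pos pos;
	struct bvecq *buf;

	buf = bvecq_alloc_buffer(SZ_64K, 0, GFP_KERNEL); /* sets ->free */
	if (!buf)
		return -ENOMEM;

	pos.bvecq = bvecq_get(buf);	/* the position holds a ref */
	pos.slot = 0;
	pos.offset = 0;
	bvecq_zero(&pos, SZ_64K);	/* scrub it before use */
	bvecq_pos_unset(&pos);

	/* ... encrypt into buf and transmit it ... */

	bvecq_put(buf);			/* frees the nodes and their pages */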
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/Makefile | 1 +
fs/netfs/bvecq.c | 706 +++++++++++++++++++++++++++++++++++
fs/netfs/internal.h | 1 +
fs/netfs/stats.c | 4 +-
include/linux/bvecq.h | 165 +++++++-
include/linux/iov_iter.h | 4 +-
include/linux/netfs.h | 1 +
include/trace/events/netfs.h | 24 ++
lib/iov_iter.c | 16 +-
lib/scatterlist.c | 4 +-
lib/tests/kunit_iov_iter.c | 18 +-
11 files changed, 919 insertions(+), 25 deletions(-)
create mode 100644 fs/netfs/bvecq.c
diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index b43188d64bd8..e1f12ecb5abf 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -3,6 +3,7 @@
netfs-y := \
buffered_read.o \
buffered_write.o \
+ bvecq.o \
direct_read.o \
direct_write.o \
iterator.o \
diff --git a/fs/netfs/bvecq.c b/fs/netfs/bvecq.c
new file mode 100644
index 000000000000..c71646ea5243
--- /dev/null
+++ b/fs/netfs/bvecq.c
@@ -0,0 +1,706 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Buffering helpers for bvec queues
+ *
+ * Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#include "internal.h"
+
+void bvecq_dump(const struct bvecq *bq)
+{
+ int b = 0;
+
+ for (; bq; bq = bq->next, b++) {
+ int skipz = 0;
+
+ pr_notice("BQ[%u] %u/%u fp=%llx\n", b, bq->nr_slots, bq->max_slots, bq->fpos);
+ for (int s = 0; s < bq->nr_slots; s++) {
+ const struct bio_vec *bv = &bq->bv[s];
+
+ if (!bv->bv_page && !bv->bv_len && skipz < 2) {
+ skipz = 1;
+ continue;
+ }
+ if (skipz == 1)
+ pr_notice("BQ[%u:00-%02u] ...\n", b, s - 1);
+ skipz = 2;
+ pr_notice("BQ[%u:%02u] %10lx %04x %04x %u\n",
+ b, s,
+ bv->bv_page ? page_to_pfn(bv->bv_page) : 0,
+ bv->bv_offset, bv->bv_len,
+ bv->bv_page ? page_count(bv->bv_page) : 0);
+ }
+ }
+}
+EXPORT_SYMBOL(bvecq_dump);
+
+/**
+ * bvecq_alloc_one - Allocate a single bvecq node with unpopulated slots
+ * @nr_slots: Number of slots to allocate
+ * @gfp: The allocation constraints.
+ *
+ * Allocate a single bvecq node and initialise the header. A number of inline
+ * slots are also allocated, rounded up to fit after the header in a power-of-2
+ * slab object of up to 512 bytes (up to 29 slots on a 64-bit cpu). The slot
+ * array is not initialised.
+ *
+ * Return: The node pointer or NULL on allocation failure.
+ */
+struct bvecq *bvecq_alloc_one(size_t nr_slots, gfp_t gfp)
+{
+ struct bvecq *bq;
+ const size_t max_size = 512;
+ const size_t max_slots = (max_size - sizeof(*bq)) / sizeof(bq->__bv[0]);
+ size_t part = umin(nr_slots, max_slots);
+ size_t size = roundup_pow_of_two(struct_size(bq, __bv, part));
+
+ bq = kmalloc(size, gfp);
+ if (bq) {
+ *bq = (struct bvecq) {
+ .ref = REFCOUNT_INIT(1),
+ .bv = bq->__bv,
+ .inline_bv = true,
+ .max_slots = (size - sizeof(*bq)) / sizeof(bq->__bv[0]),
+ };
+ netfs_stat(&netfs_n_bvecq);
+ }
+ return bq;
+}
+EXPORT_SYMBOL(bvecq_alloc_one);
+
+/**
+ * bvecq_alloc_chain - Allocate an unpopulated bvecq chain
+ * @nr_slots: Number of slots to allocate
+ * @gfp: The allocation constraints.
+ *
+ * Allocate a chain of bvecq nodes providing at least the requested cumulative
+ * number of slots.
+ *
+ * Return: The first node pointer or NULL on allocation failure.
+ */
+struct bvecq *bvecq_alloc_chain(size_t nr_slots, gfp_t gfp)
+{
+ struct bvecq *head = NULL, *tail = NULL;
+
+ _enter("%zu", nr_slots);
+
+ for (;;) {
+ struct bvecq *bq;
+
+ bq = bvecq_alloc_one(nr_slots, gfp);
+ if (!bq)
+ goto oom;
+
+ if (tail) {
+ tail->next = bq;
+ bq->prev = tail;
+ } else {
+ head = bq;
+ }
+ tail = bq;
+ if (tail->max_slots >= nr_slots)
+ break;
+ nr_slots -= tail->max_slots;
+ }
+
+ return head;
+oom:
+ bvecq_put(head);
+ return NULL;
+}
+EXPORT_SYMBOL(bvecq_alloc_chain);
+
+/**
+ * bvecq_alloc_buffer - Allocate a bvecq chain and populate with buffers
+ * @size: Target size of the buffer (can be 0 for an empty buffer)
+ * @pre_slots: Number of preamble slots to set aside
+ * @gfp: The allocation constraints.
+ *
+ * Allocate a chain of bvecq nodes and populate the slots with sufficient pages
+ * to provide at least the requested amount of space, leaving the first
+ * @pre_slots slots unset. The pages allocated may be compound pages larger
+ * than PAGE_SIZE and thus occupy fewer slots. The pages have their refcounts
+ * set to 1 and can be passed to MSG_SPLICE_PAGES.
+ *
+ * Return: The first node pointer or NULL on allocation failure.
+ */
+struct bvecq *bvecq_alloc_buffer(size_t size, unsigned int pre_slots, gfp_t gfp)
+{
+ struct bvecq *head = NULL, *tail = NULL, *p = NULL;
+ size_t count = DIV_ROUND_UP(size, PAGE_SIZE);
+
+ _enter("%zx,%zx,%u", size, count, pre_slots);
+
+ do {
+ struct page **pages;
+ int want, got;
+
+ p = bvecq_alloc_one(umin(count, 32 - 3), gfp); /* 29 slots: the 512-byte max */
+ if (!p)
+ goto oom;
+
+ p->free = true;
+
+ if (tail) {
+ tail->next = p;
+ p->prev = tail;
+ } else {
+ head = p;
+ }
+ tail = p;
+ if (!count)
+ break;
+
+ /* Borrow the tail of the slot array as scratch space for
+  * alloc_pages_bulk(): each page pointer is consumed by
+  * bvec_set_page() before the bio_vec written at the same
+  * position can overwrite it.
+  */
+ pages = (struct page **)&p->bv[p->max_slots];
+ pages -= p->max_slots - pre_slots;
+ memset(pages, 0, (p->max_slots - pre_slots) * sizeof(pages[0]));
+
+ want = umin(count, p->max_slots - pre_slots);
+ got = alloc_pages_bulk(gfp, want, pages);
+ if (got < want) {
+ for (int i = 0; i < got; i++)
+ __free_page(pages[i]);
+ goto oom;
+ }
+
+ tail->nr_slots = pre_slots + got;
+ for (int i = 0; i < got; i++) {
+ int j = pre_slots + i;
+
+ set_page_count(pages[i], 1);
+ bvec_set_page(&tail->bv[j], pages[i], PAGE_SIZE, 0);
+ }
+
+ count -= got;
+ pre_slots = 0;
+ } while (count > 0);
+
+ return head;
+oom:
+ bvecq_put(head);
+ return NULL;
+}
+EXPORT_SYMBOL(bvecq_alloc_buffer);
+
+/*
+ * Free the page pointed to be a segment as necessary.
+ */
+static void bvecq_free_seg(struct bvecq *bq, unsigned int seg)
+{
+ if (!bq->free ||
+ !bq->bv[seg].bv_page)
+ return;
+
+ if (bq->unpin)
+ unpin_user_page(bq->bv[seg].bv_page);
+ else
+ __free_page(bq->bv[seg].bv_page);
+}
+
+/**
+ * bvecq_put - Put a ref on a bvec queue
+ * @bq: The start of the bvec queue to put
+ *
+ * Put the ref(s) on the nodes in a bvec queue, freeing up the nodes and the
+ * page fragments they point to as the refcounts become zero.
+ */
+void bvecq_put(struct bvecq *bq)
+{
+ struct bvecq *next;
+
+ for (; bq; bq = next) {
+ if (!refcount_dec_and_test(&bq->ref))
+ break;
+ for (int seg = 0; seg < bq->nr_slots; seg++)
+ bvecq_free_seg(bq, seg);
+ next = bq->next;
+ netfs_stat_d(&netfs_n_bvecq);
+ kfree(bq);
+ }
+}
+EXPORT_SYMBOL(bvecq_put);
+
+/**
+ * bvecq_expand_buffer - Allocate buffer space into a bvec queue
+ * @_buffer: Pointer to the bvecq chain to expand (may point to a NULL; updated).
+ * @_cur_size: Current size of the buffer (updated).
+ * @size: Target size of the buffer.
+ * @gfp: The allocation constraints.
+ *
+ * Return: 0 if successful; -ENOMEM on allocation failure (in which case the
+ * buffer may have been partially expanded).
+ */
+int bvecq_expand_buffer(struct bvecq **_buffer, size_t *_cur_size, ssize_t size, gfp_t gfp)
+{
+ struct bvecq *tail = *_buffer;
+ const size_t max_slots = 32;
+
+ size = round_up(size, PAGE_SIZE);
+ if (*_cur_size >= size)
+ return 0;
+
+ if (tail)
+ while (tail->next)
+ tail = tail->next;
+
+ do {
+ struct page *page;
+ int order = 0;
+
+ if (!tail || bvecq_is_full(tail)) {
+ struct bvecq *p;
+
+ p = bvecq_alloc_one(max_slots, gfp);
+ if (!p)
+ return -ENOMEM;
+ if (tail) {
+ tail->next = p;
+ p->prev = tail;
+ } else {
+ *_buffer = p;
+ }
+ tail = p;
+ }
+
+ if (size - *_cur_size > PAGE_SIZE)
+ order = umin(ilog2(size - *_cur_size) - PAGE_SHIFT,
+ MAX_PAGECACHE_ORDER);
+
+ page = alloc_pages(gfp | __GFP_COMP, order);
+ if (!page && order > 0)
+ page = alloc_pages(gfp | __GFP_COMP, 0);
+ if (!page)
+ return -ENOMEM;
+
+ bvec_set_page(&tail->bv[tail->nr_slots++], page, PAGE_SIZE << order, 0);
+ *_cur_size += PAGE_SIZE << order;
+ } while (*_cur_size < size);
+
+ return 0;
+}
+EXPORT_SYMBOL(bvecq_expand_buffer);
+
+/**
+ * bvecq_shorten_buffer - Shorten a bvec queue buffer
+ * @bq: The start of the buffer to shorten
+ * @slot: The slot to start from
+ * @size: The size to retain
+ *
+ * Shorten the content of a bvec queue down to the minimum number of segments,
+ * starting at the specified segment, to retain the specified size.
+ *
+ * Return: 0 if successful; -EMSGSIZE if there is insufficient content.
+ */
+int bvecq_shorten_buffer(struct bvecq *bq, unsigned int slot, size_t size)
+{
+ ssize_t retain = size;
+
+ /* Skip through the segments we want to keep. */
+ for (; bq; bq = bq->next) {
+ for (; slot < bq->nr_slots; slot++) {
+ retain -= bq->bv[slot].bv_len;
+ if (retain < 0)
+ goto found;
+ }
+ slot = 0;
+ }
+ if (WARN_ON_ONCE(retain > 0))
+ return -EMSGSIZE;
+ return 0;
+
+found:
+ /* Shorten the entry to be retained and clean the rest of this bvecq. */
+ bq->bv[slot].bv_len += retain;
+ slot++;
+ for (int i = slot; i < bq->nr_slots; i++)
+ bvecq_free_seg(bq, i);
+ bq->nr_slots = slot;
+
+ /* Free the queue tail. */
+ bvecq_put(bq->next);
+ bq->next = NULL;
+ return 0;
+}
+EXPORT_SYMBOL(bvecq_shorten_buffer);
+
+/**
+ * bvecq_buffer_init - Initialise a buffer and set position
+ * @pos: The position to point at the new buffer.
+ * @gfp: The allocation constraints.
+ *
+ * Initialise a rolling buffer. We allocate an unpopulated bvecq node so
+ * that the pointers can be independently driven by the producer and the
+ * consumer.
+ *
+ * Return: 0 if successful; -ENOMEM on allocation failure.
+ */
+int bvecq_buffer_init(struct bvecq_pos *pos, gfp_t gfp)
+{
+ struct bvecq *bq;
+
+ bq = bvecq_alloc_one(13, gfp);
+ if (!bq)
+ return -ENOMEM;
+
+ pos->bvecq = bq; /* Comes with a ref. */
+ pos->slot = 0;
+ pos->offset = 0;
+ return 0;
+}
+
+/**
+ * bvecq_buffer_make_space - Start a new bvecq node in a buffer
+ * @pos: The position of the last node.
+ * @gfp: The allocation constraints.
+ *
+ * Add a new node on to the buffer chain at the specified position, either
+ * because the previous one is full or because we have a discontiguity to
+ * contend with, and update @pos to point to it.
+ *
+ * Return: 0 if successful; -ENOMEM on allocation failure.
+ */
+int bvecq_buffer_make_space(struct bvecq_pos *pos, gfp_t gfp)
+{
+ struct bvecq *bq, *head = pos->bvecq;
+
+ bq = bvecq_alloc_one(14, gfp);
+ if (!bq)
+ return -ENOMEM;
+ bq->prev = head;
+
+ pos->bvecq = bvecq_get(bq);
+ pos->slot = 0;
+ pos->offset = 0;
+
+ /* Make sure the initialisation is stored before the next pointer.
+ *
+ * [!] NOTE: After we set head->next, the consumer is at liberty to
+ * immediately delete the old head.
+ */
+ smp_store_release(&head->next, bq);
+ bvecq_put(head);
+ return 0;
+}
+
+/**
+ * bvecq_pos_advance - Advance a bvecq position
+ * @pos: The position to advance.
+ * @amount: The number of bytes to advance by.
+ *
+ * Advance the specified bvecq position by @amount bytes. @pos is updated and
+ * bvecq ref counts may have been manipulated. If the position hits the end of
+ * the queue, then it is left pointing beyond the last slot of the last bvecq
+ * so that it doesn't break the chain.
+ */
+void bvecq_pos_advance(struct bvecq_pos *pos, size_t amount)
+{
+ struct bvecq *bq = pos->bvecq;
+ unsigned int slot = pos->slot;
+ size_t offset = pos->offset;
+
+ if (slot >= bq->nr_slots) {
+ bq = bq->next;
+ slot = 0;
+ }
+
+ while (amount) {
+ const struct bio_vec *bv = &bq->bv[slot];
+ size_t part = umin(bv->bv_len - offset, amount);
+
+ if (likely(part < bv->bv_len - offset)) {
+ offset += part;
+ break;
+ }
+ amount -= part;
+ offset = 0;
+ slot++;
+ if (slot >= bq->nr_slots) {
+ if (!bq->next)
+ break;
+ bq = bq->next;
+ slot = 0;
+ }
+ }
+
+ pos->slot = slot;
+ pos->offset = offset;
+ bvecq_pos_move(pos, bq);
+}
+
+/**
+ * bvecq_zero - Clear memory starting at the bvecq position.
+ * @pos: The position in the bvecq chain to start clearing.
+ * @amount: The number of bytes to clear.
+ *
+ * Clear memory fragments pointed to by a bvec queue. @pos is updated and
+ * bvecq ref counts may have been manipulated. If the position hits the end of
+ * the queue, then it is left pointing beyond the last slot of the last bvecq
+ * so that it doesn't break the chain.
+ *
+ * Return: The number of bytes cleared.
+ */
+ssize_t bvecq_zero(struct bvecq_pos *pos, size_t amount)
+{
+ struct bvecq *bq = pos->bvecq;
+ unsigned int slot = pos->slot;
+ ssize_t cleared = 0;
+ size_t offset = pos->offset;
+
+ if (WARN_ON_ONCE(!bq))
+ return 0;
+
+ if (slot >= bq->nr_slots) {
+ bq = bq->next;
+ if (WARN_ON_ONCE(!bq))
+ return 0;
+ slot = 0;
+ }
+
+ do {
+ const struct bio_vec *bv = &bq->bv[slot];
+
+ if (offset < bv->bv_len) {
+ size_t part = umin(amount - cleared, bv->bv_len - offset);
+
+ memzero_page(bv->bv_page, bv->bv_offset + offset, part);
+
+ offset += part;
+ cleared += part;
+ }
+
+ if (offset >= bv->bv_len) {
+ offset = 0;
+ slot++;
+ if (slot >= bq->nr_slots) {
+ if (!bq->next)
+ break;
+ bq = bq->next;
+ slot = 0;
+ }
+ }
+ } while (cleared < amount);
+
+ bvecq_pos_move(pos, bq);
+ pos->slot = slot;
+ pos->offset = offset;
+ return cleared;
+}
+
+/**
+ * bvecq_slice - Find a slice of a bvecq queue
+ * @pos: The position to start at.
+ * @max_size: The maximum size of the slice (or ULONG_MAX).
+ * @max_segs: The maximum number of segments in the slice (or INT_MAX).
+ * @_nr_segs: Where to put the number of segments (updated).
+ *
+ * Determine the size and number of segments that can be obtained from the
+ * next slice of the bvec queue, up to the maximum size and segment count
+ * specified. The
+ * slice is also limited if a discontiguity is found.
+ *
+ * @pos is updated to the end of the slice. If the position hits the end of
+ * the queue, then it is left pointing beyond the last slot of the last bvecq
+ * so that it doesn't break the chain.
+ *
+ * Return: The number of bytes in the slice.
+ */
+size_t bvecq_slice(struct bvecq_pos *pos, size_t max_size,
+ unsigned int max_segs, unsigned int *_nr_segs)
+{
+ struct bvecq *bq;
+ unsigned int slot = pos->slot, nsegs = 0;
+ size_t size = 0;
+ size_t offset = pos->offset;
+
+ bq = pos->bvecq;
+ for (;;) {
+ for (; slot < bq->nr_slots; slot++) {
+ const struct bio_vec *bvec = &bq->bv[slot];
+
+ if (offset < bvec->bv_len && bvec->bv_page) {
+ size_t part = umin(bvec->bv_len - offset, max_size);
+
+ size += part;
+ offset += part;
+ max_size -= part;
+ nsegs++;
+ if (!max_size || nsegs >= max_segs)
+ goto out;
+ }
+ offset = 0;
+ }
+
+ /* pos->bvecq isn't allowed to go NULL as the queue may get
+ * extended and we would lose our place.
+ */
+ if (!bq->next)
+ break;
+ slot = 0;
+ bq = bq->next;
+ if (bq->discontig && size > 0)
+ break;
+ }
+
+out:
+ *_nr_segs = nsegs;
+ if (slot == bq->nr_slots && bq->next) {
+ bq = bq->next;
+ slot = 0;
+ offset = 0;
+ }
+ bvecq_pos_move(pos, bq);
+ pos->slot = slot;
+ pos->offset = offset;
+ return size;
+}
+
+/**
+ * bvecq_extract - Extract a slice of a bvecq queue into a new bvecq queue
+ * @pos: The position to start at.
+ * @max_size: The maximum size of the slice (or ULONG_MAX).
+ * @max_segs: The maximum number of segments in the slice (or INT_MAX).
+ * @to: Where to put the extraction bvecq chain head (updated).
+ *
+ * Allocate a new bvecq and extract into it memory fragments from a slice of
+ * the bvec queue, starting at @pos. The slice is also limited if a
+ * discontiguity is found. No refs are taken on the pages.
+ *
+ * @pos is updated to the end of the slice. If the position hits the end of
+ * the queue, then it is left pointing beyond the last slot of the last bvecq
+ * so that it doesn't break the chain.
+ *
+ * If successful, *@to is set to point to the head of the newly allocated chain
+ * and the caller inherits a ref to it.
+ *
+ * Return: The number of bytes extracted; -ENOMEM on allocation failure or -EIO
+ * if no segments were available to extract.
+ */
+ssize_t bvecq_extract(struct bvecq_pos *pos, size_t max_size,
+ unsigned int max_segs, struct bvecq **to)
+{
+ struct bvecq_pos tmp_pos;
+ struct bvecq *src, *dst = NULL;
+ unsigned int slot = pos->slot, nsegs;
+ ssize_t extracted = 0;
+ size_t offset = pos->offset, amount;
+
+ *to = NULL;
+ if (WARN_ON_ONCE(!max_segs))
+ max_segs = INT_MAX;
+
+ bvecq_pos_set(&tmp_pos, pos);
+ amount = bvecq_slice(&tmp_pos, max_size, max_segs, &nsegs);
+ bvecq_pos_unset(&tmp_pos);
+ if (nsegs == 0)
+ return -EIO;
+
+ dst = bvecq_alloc_chain(nsegs, GFP_KERNEL);
+ if (!dst)
+ return -ENOMEM;
+ *to = dst;
+ max_segs = nsegs;
+ nsegs = 0;
+
+ /* Transcribe the segments */
+ src = pos->bvecq;
+ for (;;) {
+ for (; slot < src->nr_slots; slot++) {
+ const struct bio_vec *sv = &src->bv[slot];
+ struct bio_vec *dv = &dst->bv[dst->nr_slots];
+
+ _debug("EXTR BQ=%x[%x] off=%zx am=%zx p=%lx",
+ src->priv, slot, offset, amount, page_to_pfn(sv->bv_page));
+
+ if (offset < sv->bv_len && sv->bv_page) {
+ size_t part = umin(sv->bv_len - offset, amount);
+
+ bvec_set_page(dv, sv->bv_page, part,
+ sv->bv_offset + offset);
+ extracted += part;
+ amount -= part;
+ offset += part;
+ trace_netfs_bv_slot(dst, dst->nr_slots);
+ dst->nr_slots++;
+ nsegs++;
+ if (bvecq_is_full(dst))
+ dst = dst->next;
+ if (nsegs >= max_segs)
+ goto out;
+ if (amount == 0)
+ goto out;
+ }
+ offset = 0;
+ }
+
+ /* pos->bvecq isn't allowed to go NULL as the queue may get
+ * extended and we would lose our place.
+ */
+ if (!src->next)
+ break;
+ slot = 0;
+ src = src->next;
+ if (src->discontig && extracted > 0)
+ break;
+ }
+
+out:
+ if (slot == src->nr_slots && src->next) {
+ src = src->next;
+ slot = 0;
+ offset = 0;
+ }
+ bvecq_pos_move(pos, src);
+ pos->slot = slot;
+ pos->offset = offset;
+ return extracted;
+}
+
+/**
+ * bvecq_load_from_ra - Allocate a bvecq chain and load from readahead
+ * @pos: Blank position object to attach the new chain to.
+ * @ractl: The readahead control context.
+ *
+ * Decant the set of folios to be read from the readahead context into a bvecq
+ * chain. Each folio occupies one bio_vec element.
+ *
+ * Return: Amount of data loaded or -ENOMEM on allocation failure.
+ */
+ssize_t bvecq_load_from_ra(struct bvecq_pos *pos, struct readahead_control *ractl)
+{
+ XA_STATE(xas, &ractl->mapping->i_pages, ractl->_index);
+ struct folio *folio;
+ struct bvecq *bq;
+ size_t loaded = 0;
+
+ bq = bvecq_alloc_chain(ractl->_nr_folios, GFP_NOFS);
+ if (!bq)
+ return -ENOMEM;
+
+ pos->bvecq = bq;
+ pos->slot = 0;
+ pos->offset = 0;
+
+ rcu_read_lock();
+
+ xas_for_each(&xas, folio, ractl->_index + ractl->_nr_pages - 1) {
+ size_t len;
+
+ if (xas_retry(&xas, folio))
+ continue;
+ VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+
+ len = folio_size(folio);
+ bvec_set_folio(&bq->bv[bq->nr_slots++], folio, len, 0);
+ loaded += len;
+ trace_netfs_folio(folio, netfs_folio_trace_read);
+
+ if (bq->nr_slots >= bq->max_slots) {
+ bq = bq->next;
+ if (!bq)
+ break;
+ }
+ }
+
+ rcu_read_unlock();
+
+ ractl->_index += ractl->_nr_pages;
+ ractl->_nr_pages = 0;
+ return loaded;
+}
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 2fcf31de5f2c..ad47bcc1947b 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -168,6 +168,7 @@ extern atomic_t netfs_n_wh_retry_write_subreq;
extern atomic_t netfs_n_wb_lock_skip;
extern atomic_t netfs_n_wb_lock_wait;
extern atomic_t netfs_n_folioq;
+extern atomic_t netfs_n_bvecq;
int netfs_stats_show(struct seq_file *m, void *v);
diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index ab6b916addc4..84c2a4bcc762 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -48,6 +48,7 @@ atomic_t netfs_n_wh_retry_write_subreq;
atomic_t netfs_n_wb_lock_skip;
atomic_t netfs_n_wb_lock_wait;
atomic_t netfs_n_folioq;
+atomic_t netfs_n_bvecq;
int netfs_stats_show(struct seq_file *m, void *v)
{
@@ -90,9 +91,10 @@ int netfs_stats_show(struct seq_file *m, void *v)
atomic_read(&netfs_n_rh_retry_read_subreq),
atomic_read(&netfs_n_wh_retry_write_req),
atomic_read(&netfs_n_wh_retry_write_subreq));
- seq_printf(m, "Objs : rr=%u sr=%u foq=%u wsc=%u\n",
+ seq_printf(m, "Objs : rr=%u sr=%u bq=%u foq=%u wsc=%u\n",
atomic_read(&netfs_n_rh_rreq),
atomic_read(&netfs_n_rh_sreq),
+ atomic_read(&netfs_n_bvecq),
atomic_read(&netfs_n_folioq),
atomic_read(&netfs_n_wh_wstream_conflict));
seq_printf(m, "WbLock : skip=%u wait=%u\n",
diff --git a/include/linux/bvecq.h b/include/linux/bvecq.h
index 462125af1cc7..6c58a7fb6472 100644
--- a/include/linux/bvecq.h
+++ b/include/linux/bvecq.h
@@ -17,7 +17,7 @@
* iterated over with an ITER_BVECQ iterator. The list is non-circular; next
* and prev are NULL at the ends.
*
- * The bv pointer points to the segment array; this may be __bv if allocated
+ * The bv pointer points to the bio_vec array; this may be __bv if allocated
* together. The caller is responsible for determining whether or not this is
* the case as the array pointed to by bv may follow on directly from the
* bvecq by accident of allocation (ie. ->bv == ->__bv is *not* sufficient to
@@ -33,8 +33,8 @@ struct bvecq {
unsigned long long fpos; /* File position */
refcount_t ref;
u32 priv; /* Private data */
- u16 nr_segs; /* Number of elements in bv[] used */
- u16 max_segs; /* Number of elements allocated in bv[] */
+ u16 nr_slots; /* Number of elements in bv[] used */
+ u16 max_slots; /* Number of elements allocated in bv[] */
bool inline_bv:1; /* T if __bv[] is being used */
bool free:1; /* T if the pages need freeing */
bool unpin:1; /* T if the pages need unpinning, not freeing */
@@ -43,4 +43,163 @@ struct bvecq {
struct bio_vec __bv[]; /* Default array (if ->inline_bv) */
};
+/*
+ * Position in a bio_vec queue. The bvecq holds a ref on the queue segment it
+ * points to.
+ */
+struct bvecq_pos {
+ struct bvecq *bvecq; /* The first bvecq */
+ unsigned int offset; /* The offset within the starting slot */
+ u16 slot; /* The starting slot */
+};
+
+void bvecq_dump(const struct bvecq *bq);
+struct bvecq *bvecq_alloc_one(size_t nr_slots, gfp_t gfp);
+struct bvecq *bvecq_alloc_chain(size_t nr_slots, gfp_t gfp);
+struct bvecq *bvecq_alloc_buffer(size_t size, unsigned int pre_slots, gfp_t gfp);
+void bvecq_put(struct bvecq *bq);
+int bvecq_expand_buffer(struct bvecq **_buffer, size_t *_cur_size, ssize_t size, gfp_t gfp);
+int bvecq_shorten_buffer(struct bvecq *bq, unsigned int slot, size_t size);
+int bvecq_buffer_init(struct bvecq_pos *pos, gfp_t gfp);
+int bvecq_buffer_make_space(struct bvecq_pos *pos, gfp_t gfp);
+void bvecq_pos_advance(struct bvecq_pos *pos, size_t amount);
+ssize_t bvecq_zero(struct bvecq_pos *pos, size_t amount);
+size_t bvecq_slice(struct bvecq_pos *pos, size_t max_size,
+ unsigned int max_segs, unsigned int *_nr_segs);
+ssize_t bvecq_extract(struct bvecq_pos *pos, size_t max_size,
+ unsigned int max_segs, struct bvecq **to);
+ssize_t bvecq_load_from_ra(struct bvecq_pos *pos, struct readahead_control *ractl);
+
+/**
+ * bvecq_get - Get a ref on a bvecq
+ * @bq: The bvecq to get a ref on
+ */
+static inline struct bvecq *bvecq_get(struct bvecq *bq)
+{
+ refcount_inc(&bq->ref);
+ return bq;
+}
+
+/**
+ * bvecq_is_full - Determine if a bvecq is full
+ * @bvecq: The object to query
+ *
+ * Return: true if full; false if not.
+ */
+static inline bool bvecq_is_full(const struct bvecq *bvecq)
+{
+ return bvecq->nr_slots >= bvecq->max_slots;
+}
+
+/**
+ * bvecq_pos_set - Set one position to be the same as another
+ * @pos: The position object to set
+ * @at: The source position.
+ *
+ * Set @pos to have the same position as @at. This may take a ref on the
+ * bvecq pointed to.
+ */
+static inline void bvecq_pos_set(struct bvecq_pos *pos, const struct bvecq_pos *at)
+{
+ *pos = *at;
+ bvecq_get(pos->bvecq);
+}
+
+/**
+ * bvecq_pos_unset - Unset a position
+ * @pos: The position object to unset
+ *
+ * Unset @pos. This does any needed ref cleanup.
+ */
+static inline void bvecq_pos_unset(struct bvecq_pos *pos)
+{
+ bvecq_put(pos->bvecq);
+ pos->bvecq = NULL;
+ pos->slot = 0;
+ pos->offset = 0;
+}
+
+/**
+ * bvecq_pos_transfer - Transfer one position to another, clearing the first
+ * @pos: The position object to set
+ * @from: The source position to clear.
+ *
+ * Set @pos to have the same position as @from and then clear @from. This may
+ * transfer a ref on the bvecq pointed to.
+ */
+static inline void bvecq_pos_transfer(struct bvecq_pos *pos, struct bvecq_pos *from)
+{
+ *pos = *from;
+ from->bvecq = NULL;
+ from->slot = 0;
+ from->offset = 0;
+}
+
+/**
+ * bvecq_pos_move - Update a position to a new bvecq
+ * @pos: The position object to update.
+ * @to: The new bvecq to point at.
+ *
+ * Update @pos to point to @to if it doesn't already do so. This may
+ * manipulate refs on the bvecqs pointed to.
+ */
+static inline void bvecq_pos_move(struct bvecq_pos *pos, struct bvecq *to)
+{
+ struct bvecq *old = pos->bvecq;
+
+ if (old != to) {
+ pos->bvecq = bvecq_get(to);
+ bvecq_put(old);
+ }
+}
+
+/**
+ * bvecq_pos_step - Step a position to the next slot if possible
+ * @pos: The position object to step.
+ *
+ * Update @pos to point to the next slot in the queue if not at the end. This
+ * may manipulate refs on the bvecqs pointed to.
+ *
+ * Return: true if successful, false if it was already at the end.
+ */
+static inline bool bvecq_pos_step(struct bvecq_pos *pos)
+{
+	struct bvecq *bq = pos->bvecq;
+
+	pos->slot++;
+	pos->offset = 0;
+	if (pos->slot < bq->nr_slots)
+		return true;
+	if (!bq->next)
+		return false;
+	bvecq_pos_move(pos, bq->next);
+	pos->slot = 0;
+	return true;
+}
+
+/**
+ * bvecq_delete_spent - Delete the bvecq at the front if possible
+ * @pos: The position object to update.
+ *
+ * Delete the used-up bvecq at the front of the queue that @pos points to,
+ * unless it is the last node in the queue, in which case it is kept so that
+ * the queue doesn't become detached from the other end.  This may manipulate
+ * refs on the bvecqs pointed to.
+ *
+ * Return: The new front bvecq, or NULL if the front node was the last one and
+ * was retained.
+ */
+static inline struct bvecq *bvecq_delete_spent(struct bvecq_pos *pos)
+{
+ struct bvecq *spent = pos->bvecq;
+ /* Read the contents of the queue node after the pointer to it. */
+ struct bvecq *next = smp_load_acquire(&spent->next);
+
+ if (!next)
+ return NULL;
+ next->prev = NULL;
+ spent->next = NULL;
+ bvecq_put(spent);
+ pos->bvecq = next; /* We take spent's ref */
+ pos->slot = 0;
+ pos->offset = 0;
+ return next;
+}
+
#endif /* _LINUX_BVECQ_H */
diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
index 999607ece481..309642b3901f 100644
--- a/include/linux/iov_iter.h
+++ b/include/linux/iov_iter.h
@@ -152,7 +152,7 @@ size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
unsigned int slot = iter->bvecq_slot;
size_t progress = 0, skip = iter->iov_offset;
- if (slot == bq->nr_segs) {
+ if (slot == bq->nr_slots) {
/* The iterator may have been extended. */
bq = bq->next;
slot = 0;
@@ -176,7 +176,7 @@ size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
if (skip >= bvec->bv_len) {
skip = 0;
slot++;
- if (slot >= bq->nr_segs) {
+ if (slot >= bq->nr_slots) {
if (!bq->next)
break;
bq = bq->next;
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index cc56b6512769..5bc48aacf7f6 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -17,6 +17,7 @@
#include <linux/workqueue.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
+#include <linux/bvecq.h>
#include <linux/uio.h>
#include <linux/rolling_buffer.h>
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index b8236f9e940e..fbb094231659 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -779,6 +779,30 @@ TRACE_EVENT(netfs_folioq,
__print_symbolic(__entry->trace, netfs_folioq_traces))
);
+TRACE_EVENT(netfs_bv_slot,
+ TP_PROTO(const struct bvecq *bq, int slot),
+
+ TP_ARGS(bq, slot),
+
+ TP_STRUCT__entry(
+ __field(unsigned long, pfn)
+ __field(unsigned int, offset)
+ __field(unsigned int, len)
+ __field(unsigned int, slot)
+ ),
+
+ TP_fast_assign(
+ __entry->slot = slot;
+ __entry->pfn = page_to_pfn(bq->bv[slot].bv_page);
+ __entry->offset = bq->bv[slot].bv_offset;
+ __entry->len = bq->bv[slot].bv_len;
+ ),
+
+ TP_printk("bq[%x] p=%lx %x-%x",
+ __entry->slot,
+ __entry->pfn, __entry->offset, __entry->offset + __entry->len)
+ );
+
#undef EM
#undef E_
#endif /* _TRACE_NETFS_H */
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index df8d037894b1..4f091e6d4a22 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -580,7 +580,7 @@ static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by)
return;
i->count -= by;
- if (slot >= bq->nr_segs) {
+ if (slot >= bq->nr_slots) {
bq = bq->next;
slot = 0;
}
@@ -593,7 +593,7 @@ static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by)
break;
by -= len;
slot++;
- if (slot >= bq->nr_segs && bq->next) {
+ if (slot >= bq->nr_slots && bq->next) {
bq = bq->next;
slot = 0;
}
@@ -662,7 +662,7 @@ static void iov_iter_bvecq_revert(struct iov_iter *i, size_t unroll)
if (slot == 0) {
bq = bq->prev;
- slot = bq->nr_segs;
+ slot = bq->nr_slots;
}
slot--;
@@ -947,7 +947,7 @@ static unsigned long iov_iter_alignment_bvecq(const struct iov_iter *iter)
return res;
for (bq = iter->bvecq; bq; bq = bq->next) {
- for (; slot < bq->nr_segs; slot++) {
+ for (; slot < bq->nr_slots; slot++) {
const struct bio_vec *bvec = &bq->bv[slot];
size_t part = umin(bvec->bv_len - skip, size);
@@ -1331,7 +1331,7 @@ static size_t iov_npages_bvecq(const struct iov_iter *iter, size_t maxpages)
size_t size = iter->count;
for (bq = iter->bvecq; bq; bq = bq->next) {
- for (; slot < bq->nr_segs; slot++) {
+ for (; slot < bq->nr_slots; slot++) {
const struct bio_vec *bvec = &bq->bv[slot];
size_t offs = (bvec->bv_offset + skip) % PAGE_SIZE;
size_t part = umin(bvec->bv_len - skip, size);
@@ -1731,7 +1731,7 @@ static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter,
unsigned int seg = iter->bvecq_slot, count = 0, nr = 0;
size_t extracted = 0, offset = iter->iov_offset;
- if (seg >= bvecq->nr_segs) {
+ if (seg >= bvecq->nr_slots) {
bvecq = bvecq->next;
if (WARN_ON_ONCE(!bvecq))
return 0;
@@ -1763,7 +1763,7 @@ static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter,
if (offset >= blen) {
offset = 0;
seg++;
- if (seg >= bvecq->nr_segs) {
+ if (seg >= bvecq->nr_slots) {
if (!bvecq->next) {
WARN_ON_ONCE(extracted < iter->count);
break;
@@ -1816,7 +1816,7 @@ static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter,
if (offset >= blen) {
offset = 0;
seg++;
- if (seg >= bvecq->nr_segs) {
+ if (seg >= bvecq->nr_slots) {
if (!bvecq->next) {
WARN_ON_ONCE(extracted < iter->count);
break;
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 03e3883a1a2d..93a3d194a914 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -1345,7 +1345,7 @@ static ssize_t extract_bvecq_to_sg(struct iov_iter *iter,
ssize_t ret = 0;
size_t offset = iter->iov_offset;
- if (seg >= bvecq->nr_segs) {
+ if (seg >= bvecq->nr_slots) {
bvecq = bvecq->next;
if (WARN_ON_ONCE(!bvecq))
return 0;
@@ -1373,7 +1373,7 @@ static ssize_t extract_bvecq_to_sg(struct iov_iter *iter,
if (offset >= blen) {
offset = 0;
seg++;
- if (seg >= bvecq->nr_segs) {
+ if (seg >= bvecq->nr_slots) {
if (!bvecq->next) {
WARN_ON_ONCE(ret < iter->count);
break;
diff --git a/lib/tests/kunit_iov_iter.c b/lib/tests/kunit_iov_iter.c
index 5bc941f64343..ff0621636ff1 100644
--- a/lib/tests/kunit_iov_iter.c
+++ b/lib/tests/kunit_iov_iter.c
@@ -543,28 +543,28 @@ static void iov_kunit_destroy_bvecq(void *data)
for (bq = data; bq; bq = next) {
next = bq->next;
- for (int i = 0; i < bq->nr_segs; i++)
+ for (int i = 0; i < bq->nr_slots; i++)
if (bq->bv[i].bv_page)
put_page(bq->bv[i].bv_page);
kfree(bq);
}
}
-static struct bvecq *iov_kunit_alloc_bvecq(struct kunit *test, unsigned int max_segs)
+static struct bvecq *iov_kunit_alloc_bvecq(struct kunit *test, unsigned int max_slots)
{
struct bvecq *bq;
- bq = kzalloc(struct_size(bq, __bv, max_segs), GFP_KERNEL);
+ bq = kzalloc(struct_size(bq, __bv, max_slots), GFP_KERNEL);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, bq);
- bq->max_segs = max_segs;
+ bq->max_slots = max_slots;
return bq;
}
-static struct bvecq *iov_kunit_create_bvecq(struct kunit *test, unsigned int max_segs)
+static struct bvecq *iov_kunit_create_bvecq(struct kunit *test, unsigned int max_slots)
{
struct bvecq *bq;
- bq = iov_kunit_alloc_bvecq(test, max_segs);
+ bq = iov_kunit_alloc_bvecq(test, max_slots);
kunit_add_action_or_reset(test, iov_kunit_destroy_bvecq, bq);
return bq;
}
@@ -578,13 +578,13 @@ static void __init iov_kunit_load_bvecq(struct kunit *test,
size_t size = 0;
for (int i = 0; i < npages; i++) {
- if (bq->nr_segs >= bq->max_segs) {
+ if (bq->nr_slots >= bq->max_slots) {
bq->next = iov_kunit_alloc_bvecq(test, 8);
bq->next->prev = bq;
bq = bq->next;
}
- bvec_set_page(&bq->bv[bq->nr_segs], pages[i], PAGE_SIZE, 0);
- bq->nr_segs++;
+ bvec_set_page(&bq->bv[bq->nr_slots], pages[i], PAGE_SIZE, 0);
+ bq->nr_slots++;
size += PAGE_SIZE;
}
iov_iter_bvec_queue(iter, dir, bq_head, 0, 0, size);
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 14/26] netfs: Add a function to extract from an iter into a bvecq
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (12 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 13/26] netfs: Add some tools for managing bvecq chains David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 15/26] afs: Use a bvecq to hold dir content rather than folioq David Howells
` (11 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Add a function to extract a slice of data from an iterator of any type into
a bvec queue chain.
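
For illustration, a minimal sketch of a call on an unbuffered write path
(hypothetical caller: wreq is assumed to be a netfs write request and the
segment cap of 8192 is arbitrary; error handling is mostly elided):

	struct bvecq *head = NULL;
	ssize_t extracted;

	/* Decant the caller's iterator into a bvecq chain, labelling it
	 * with the starting file position of the I/O.  The chain is
	 * marked for freeing/unpinning as appropriate to the iterator
	 * type.
	 */
	extracted = netfs_extract_iter(iter, iov_iter_count(iter), 8192,
				       wreq->start, &head, 0);
	if (extracted < 0)
		return extracted;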
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/iterator.c | 123 ++++++++++++++++++++++++++++++++++++++++++
include/linux/netfs.h | 3 ++
2 files changed, 126 insertions(+)
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index adca78747f23..e77fd39327c2 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -13,6 +13,129 @@
#include <linux/netfs.h>
#include "internal.h"
+/**
+ * netfs_extract_iter - Extract the pages from an iterator into a bvecq
+ * @orig: The original iterator
+ * @orig_len: The amount of iterator to copy
+ * @max_segs: Maximum number of contiguous segments
+ * @fpos: Starting file position to label the bvecq with
+ * @_bvecq_head: Where to place the resulting bvec queue
+ * @extraction_flags: Flags to qualify the request
+ *
+ * Extract the page fragments from the given amount of the source iterator and
+ * build a bvec queue that refers to all of those bits.  This allows the
+ * original iterator to be disposed of.
+ *
+ * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA
+ * be allowed on the pages extracted.
+ *
+ * On success, the amount of data in the bvec queue is returned and the
+ * original iterator will have been advanced by the amount extracted.
+ *
+ * The bvecq segments are marked with indications of how to clean up the
+ * extracted fragments.
+ */
+ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_segs,
+ unsigned long long fpos, struct bvecq **_bvecq_head,
+ iov_iter_extraction_t extraction_flags)
+{
+ struct bvecq *bq_tail = NULL;
+ ssize_t ret = 0;
+ size_t extracted = 0, nr_pages;
+
+ _enter("{%u,%zx},%zx", orig->iter_type, orig->count, orig_len);
+
+ WARN_ON_ONCE(orig_len > orig->count);
+
+ nr_pages = iov_iter_npages(orig, max_segs ?: INT_MAX);
+ if (WARN_ON(nr_pages == 0) ||
+ WARN_ON(nr_pages > max_segs))
+ nr_pages = max_segs;
+ max_segs = nr_pages;
+
+ do {
+ struct bvecq *bq;
+
+ if (WARN_ON(max_segs == 0))
+ break;
+
+ bq = bvecq_alloc_one(max_segs, GFP_NOFS);
+ if (!bq) {
+ ret = -ENOMEM;
+ break;
+ }
+ bq->free = user_backed_iter(orig);
+ bq->unpin = iov_iter_extract_will_pin(orig);
+ bq->prev = bq_tail;
+ bq->fpos = fpos + extracted;
+
+ if (bq_tail)
+ bq_tail->next = bq;
+ else
+ *_bvecq_head = bq;
+ bq_tail = bq;
+
+ if (orig_len == 0)
+ break;
+
+ struct bio_vec *bv = bq->bv;
+ do {
+ struct page **pages;
+ ssize_t got;
+ size_t offset;
+ size_t space = bq->max_slots - bq->nr_slots;
+ size_t bv_size = array_size(bq->max_slots, sizeof(*bv));
+ size_t pg_size = array_size(space, sizeof(*pages));
+
+ /* Put the page list at the end of the bvec list
+ * storage. bvec elements are larger than page
+ * pointers, so as long as we work 0->last, we should
+ * be fine.
+ */
+ pages = (void *)bv + bv_size - pg_size;
+
+ got = iov_iter_extract_pages(orig, &pages, orig_len,
+ space, extraction_flags, &offset);
+ if (got < 0) {
+ ret = got;
+ goto out;
+ }
+
+ if (got == 0) {
+ pr_err("extract_pages gave nothing from %zu, %zu\n",
+ extracted, orig_len);
+ ret = -EIO;
+ goto out;
+ }
+
+			if (got > orig_len) {
+				pr_err("extract_pages rc=%zd more than %zu\n",
+				       got, orig_len);
+				ret = -EIO;
+				goto out;
+			}
+
+ extracted += got;
+ orig_len -= got;
+
+ do {
+ size_t len = umin(got, PAGE_SIZE - offset);
+
+ BUG_ON(bq->nr_slots >= bq->max_slots);
+
+ bvec_set_page(&bq->bv[bq->nr_slots],
+ *pages++, len, offset);
+ bq->nr_slots++;
+ got -= len;
+ offset = 0;
+ } while (got > 0);
+ } while (orig_len > 0 && !bvecq_is_full(bq));
+ } while (orig_len > 0 && max_segs > 0);
+
+out:
+ return extracted ?: ret;
+}
+EXPORT_SYMBOL_GPL(netfs_extract_iter);
+
/**
* netfs_extract_user_iter - Extract the pages from a user iterator into a bvec
* @orig: The original iterator
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 5bc48aacf7f6..b4602f7b6431 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -445,6 +445,9 @@ void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
enum netfs_sreq_ref_trace what);
void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
enum netfs_sreq_ref_trace what);
+ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_segs,
+ unsigned long long fpos, struct bvecq **_bvecq_head,
+ iov_iter_extraction_t extraction_flags);
ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
struct iov_iter *new,
iov_iter_extraction_t extraction_flags);
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 15/26] afs: Use a bvecq to hold dir content rather than folioq
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (13 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 14/26] netfs: Add a function to extract from an iter into a bvecq David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 16/26] cifs: Use a bvecq for buffering instead of a folioq David Howells
` (10 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Use a bvecq to hold the contents of a directory rather than the folioq so
that the latter can be phased out.
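
As a rough sketch of the resulting access pattern (this mirrors the
afs_dir_iterate_contents() change below; size checks elided):

	struct iov_iter iter;

	/* The directory content now lives in a bvecq chain; wrap it in
	 * an ITER_BVECQ iterator to walk it.
	 */
	iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
			    i_size_read(&dvnode->netfs.inode));
	iterate_bvecq(&iter, iov_iter_count(&iter), dvnode, &ctx,
		      afs_dir_iterate_step);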
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/afs/dir.c | 39 +++++------
fs/afs/dir_edit.c | 42 +++++------
fs/afs/dir_search.c | 33 ++++-----
fs/afs/inode.c | 20 +++---
fs/afs/internal.h | 6 +-
fs/netfs/write_issue.c | 156 ++++++-----------------------------------
6 files changed, 88 insertions(+), 208 deletions(-)
diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 78caef3f1338..6627a0d38e73 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -136,9 +136,9 @@ static void afs_dir_dump(struct afs_vnode *dvnode)
pr_warn("DIR %llx:%llx is=%llx\n",
dvnode->fid.vid, dvnode->fid.vnode, i_size);
- iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
- iterate_folioq(&iter, iov_iter_count(&iter), NULL, NULL,
- afs_dir_dump_step);
+ iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+ iterate_bvecq(&iter, iov_iter_count(&iter), NULL, NULL,
+ afs_dir_dump_step);
}
/*
@@ -199,9 +199,9 @@ static int afs_dir_check(struct afs_vnode *dvnode)
if (unlikely(!i_size))
return 0;
- iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
- checked = iterate_folioq(&iter, iov_iter_count(&iter), dvnode, NULL,
- afs_dir_check_step);
+ iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+ checked = iterate_bvecq(&iter, iov_iter_count(&iter), dvnode, NULL,
+ afs_dir_check_step);
if (checked != i_size) {
afs_dir_dump(dvnode);
return -EIO;
@@ -255,15 +255,14 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
if (dvnode->directory_size < i_size) {
size_t cur_size = dvnode->directory_size;
- ret = netfs_alloc_folioq_buffer(NULL,
- &dvnode->directory, &cur_size, i_size,
- mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
+ ret = bvecq_expand_buffer(&dvnode->directory, &cur_size, i_size,
+ mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
dvnode->directory_size = cur_size;
if (ret < 0)
return ret;
}
- iov_iter_folio_queue(&iter, ITER_DEST, dvnode->directory, 0, 0, dvnode->directory_size);
+ iov_iter_bvec_queue(&iter, ITER_DEST, dvnode->directory, 0, 0, dvnode->directory_size);
/* AFS requires us to perform the read of a directory synchronously as
* a single unit to avoid issues with the directory contents being
@@ -282,9 +281,9 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
if (ret2 < 0)
ret = ret2;
- } else if (i_size < folioq_folio_size(dvnode->directory, 0)) {
+ } else if (i_size < PAGE_SIZE) {
/* NUL-terminate a symlink. */
- char *symlink = kmap_local_folio(folioq_folio(dvnode->directory, 0), 0);
+ char *symlink = kmap_local_bvec(&dvnode->directory->bv[0], 0);
symlink[i_size] = 0;
kunmap_local(symlink);
@@ -305,8 +304,8 @@ ssize_t afs_read_single(struct afs_vnode *dvnode, struct file *file)
}
/*
- * Read the directory into a folio_queue buffer in one go, scrubbing the
- * previous contents. We return -ESTALE if the caller needs to call us again.
+ * Read the directory into the buffer in one go, scrubbing the previous
+ * contents. We return -ESTALE if the caller needs to call us again.
*/
ssize_t afs_read_dir(struct afs_vnode *dvnode, struct file *file)
__acquires(&dvnode->validate_lock)
@@ -487,7 +486,7 @@ static size_t afs_dir_iterate_step(void *iter_base, size_t progress, size_t len,
}
/*
- * Iterate through the directory folios.
+ * Iterate through the directory content.
*/
static int afs_dir_iterate_contents(struct inode *dir, struct dir_context *dir_ctx)
{
@@ -502,11 +501,11 @@ static int afs_dir_iterate_contents(struct inode *dir, struct dir_context *dir_c
if (i_size <= 0 || dir_ctx->pos >= i_size)
return 0;
- iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
+ iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0, i_size);
iov_iter_advance(&iter, round_down(dir_ctx->pos, AFS_DIR_BLOCK_SIZE));
- iterate_folioq(&iter, iov_iter_count(&iter), dvnode, &ctx,
- afs_dir_iterate_step);
+ iterate_bvecq(&iter, iov_iter_count(&iter), dvnode, &ctx,
+ afs_dir_iterate_step);
if (ctx.error == -ESTALE)
afs_invalidate_dir(dvnode, afs_dir_invalid_iter_stale);
@@ -2211,8 +2210,8 @@ int afs_single_writepages(struct address_space *mapping,
if (is_dir ?
test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) :
atomic64_read(&dvnode->cb_expires_at) != AFS_NO_CB_PROMISE) {
- iov_iter_folio_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
- i_size_read(&dvnode->netfs.inode));
+ iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
+ i_size_read(&dvnode->netfs.inode));
ret = netfs_writeback_single(mapping, wbc, &iter);
}
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index fd3aa9f97ce6..59d3decf7692 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -110,9 +110,8 @@ static void afs_clear_contig_bits(union afs_xdr_dir_block *block,
*/
static union afs_xdr_dir_block *afs_dir_get_block(struct afs_dir_iter *iter, size_t block)
{
- struct folio_queue *fq;
struct afs_vnode *dvnode = iter->dvnode;
- struct folio *folio;
+ struct bvecq *bq;
size_t blpos = block * AFS_DIR_BLOCK_SIZE;
size_t blend = (block + 1) * AFS_DIR_BLOCK_SIZE, fpos = iter->fpos;
int ret;
@@ -120,41 +119,38 @@ static union afs_xdr_dir_block *afs_dir_get_block(struct afs_dir_iter *iter, siz
if (dvnode->directory_size < blend) {
size_t cur_size = dvnode->directory_size;
- ret = netfs_alloc_folioq_buffer(
- NULL, &dvnode->directory, &cur_size, blend,
- mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
+ ret = bvecq_expand_buffer(&dvnode->directory, &cur_size, blend,
+ mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
dvnode->directory_size = cur_size;
if (ret < 0)
goto fail;
}
- fq = iter->fq;
- if (!fq)
- fq = dvnode->directory;
+ bq = iter->bq;
+ if (!bq)
+ bq = dvnode->directory;
- /* Search the folio queue for the folio containing the block... */
- for (; fq; fq = fq->next) {
- for (int s = iter->fq_slot; s < folioq_count(fq); s++) {
- size_t fsize = folioq_folio_size(fq, s);
+ /* Search the contents for the region containing the block... */
+ for (; bq; bq = bq->next) {
+ for (int s = iter->bq_slot; s < bq->nr_slots; s++) {
+ struct bio_vec *bv = &bq->bv[s];
+ size_t bsize = bv->bv_len;
- if (blend <= fpos + fsize) {
+ if (blend <= fpos + bsize) {
/* ... and then return the mapped block. */
- folio = folioq_folio(fq, s);
- if (WARN_ON_ONCE(folio_pos(folio) != fpos))
- goto fail;
- iter->fq = fq;
- iter->fq_slot = s;
+ iter->bq = bq;
+ iter->bq_slot = s;
iter->fpos = fpos;
- return kmap_local_folio(folio, blpos - fpos);
+ return kmap_local_bvec(bv, blpos - fpos);
}
- fpos += fsize;
+ fpos += bsize;
}
- iter->fq_slot = 0;
+ iter->bq_slot = 0;
}
fail:
- iter->fq = NULL;
- iter->fq_slot = 0;
+ iter->bq = NULL;
+ iter->bq_slot = 0;
afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block);
return NULL;
}
diff --git a/fs/afs/dir_search.c b/fs/afs/dir_search.c
index d2516e55b5ed..f1d2b49bc6f0 100644
--- a/fs/afs/dir_search.c
+++ b/fs/afs/dir_search.c
@@ -66,12 +66,11 @@ bool afs_dir_init_iter(struct afs_dir_iter *iter, const struct qstr *name)
*/
union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t block)
{
- struct folio_queue *fq = iter->fq;
struct afs_vnode *dvnode = iter->dvnode;
- struct folio *folio;
+ struct bvecq *bq = iter->bq;
size_t blpos = block * AFS_DIR_BLOCK_SIZE;
size_t blend = (block + 1) * AFS_DIR_BLOCK_SIZE, fpos = iter->fpos;
- int slot = iter->fq_slot;
+ int slot = iter->bq_slot;
_enter("%zx,%d", block, slot);
@@ -83,36 +82,34 @@ union afs_xdr_dir_block *afs_dir_find_block(struct afs_dir_iter *iter, size_t bl
if (dvnode->directory_size < blend)
goto fail;
- if (!fq || blpos < fpos) {
- fq = dvnode->directory;
+ if (!bq || blpos < fpos) {
+ bq = dvnode->directory;
slot = 0;
fpos = 0;
}
-	/* Search the folio queue for the folio containing the block... */
+	/* Search the contents for the region containing the block... */
- for (; fq; fq = fq->next) {
- for (; slot < folioq_count(fq); slot++) {
- size_t fsize = folioq_folio_size(fq, slot);
+ for (; bq; bq = bq->next) {
+ for (; slot < bq->nr_slots; slot++) {
+ struct bio_vec *bv = &bq->bv[slot];
+ size_t bsize = bv->bv_len;
- if (blend <= fpos + fsize) {
+ if (blend <= fpos + bsize) {
/* ... and then return the mapped block. */
- folio = folioq_folio(fq, slot);
- if (WARN_ON_ONCE(folio_pos(folio) != fpos))
- goto fail;
- iter->fq = fq;
- iter->fq_slot = slot;
+ iter->bq = bq;
+ iter->bq_slot = slot;
iter->fpos = fpos;
- iter->block = kmap_local_folio(folio, blpos - fpos);
+ iter->block = kmap_local_bvec(bv, blpos - fpos);
return iter->block;
}
- fpos += fsize;
+ fpos += bsize;
}
slot = 0;
}
fail:
- iter->fq = NULL;
- iter->fq_slot = 0;
+ iter->bq = NULL;
+ iter->bq_slot = 0;
afs_invalidate_dir(dvnode, afs_dir_invalid_edit_get_block);
return NULL;
}
diff --git a/fs/afs/inode.c b/fs/afs/inode.c
index dde1857fcabb..94e3442da849 100644
--- a/fs/afs/inode.c
+++ b/fs/afs/inode.c
@@ -31,12 +31,12 @@ void afs_init_new_symlink(struct afs_vnode *vnode, struct afs_operation *op)
size_t dsize = 0;
char *p;
- if (netfs_alloc_folioq_buffer(NULL, &vnode->directory, &dsize, size,
- mapping_gfp_mask(vnode->netfs.inode.i_mapping)) < 0)
+ if (bvecq_expand_buffer(&vnode->directory, &dsize, size,
+ mapping_gfp_mask(vnode->netfs.inode.i_mapping)) < 0)
return;
vnode->directory_size = dsize;
- p = kmap_local_folio(folioq_folio(vnode->directory, 0), 0);
+ p = kmap_local_bvec(&vnode->directory->bv[0], 0);
memcpy(p, op->create.symlink, size);
kunmap_local(p);
set_bit(AFS_VNODE_DIR_READ, &vnode->flags);
@@ -45,17 +45,17 @@ void afs_init_new_symlink(struct afs_vnode *vnode, struct afs_operation *op)
static void afs_put_link(void *arg)
{
- struct folio *folio = virt_to_folio(arg);
+ struct page *page = virt_to_page(arg);
kunmap_local(arg);
- folio_put(folio);
+ put_page(page);
}
const char *afs_get_link(struct dentry *dentry, struct inode *inode,
struct delayed_call *callback)
{
struct afs_vnode *vnode = AFS_FS_I(inode);
- struct folio *folio;
+ struct page *page;
char *content;
ssize_t ret;
@@ -84,9 +84,9 @@ const char *afs_get_link(struct dentry *dentry, struct inode *inode,
set_bit(AFS_VNODE_DIR_READ, &vnode->flags);
good:
- folio = folioq_folio(vnode->directory, 0);
- folio_get(folio);
- content = kmap_local_folio(folio, 0);
+ page = vnode->directory->bv[0].bv_page;
+ get_page(page);
+ content = kmap_local_page(page);
set_delayed_call(callback, afs_put_link, content);
return content;
}
@@ -761,7 +761,7 @@ void afs_evict_inode(struct inode *inode)
netfs_wait_for_outstanding_io(inode);
truncate_inode_pages_final(&inode->i_data);
- netfs_free_folioq_buffer(vnode->directory);
+ bvecq_put(vnode->directory);
afs_set_cache_aux(vnode, &aux);
netfs_clear_inode_writeback(inode, &aux);
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 009064b8d661..9bf5d2f1dbc4 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -710,7 +710,7 @@ struct afs_vnode {
#define AFS_VNODE_MODIFYING 10 /* Set if we're performing a modification op */
#define AFS_VNODE_DIR_READ 11 /* Set if we've read a dir's contents */
- struct folio_queue *directory; /* Directory contents */
+ struct bvecq *directory; /* Directory contents */
struct list_head wb_keys; /* List of keys available for writeback */
struct list_head pending_locks; /* locks waiting to be granted */
struct list_head granted_locks; /* locks granted on this file */
@@ -983,9 +983,9 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
struct afs_dir_iter {
struct afs_vnode *dvnode;
union afs_xdr_dir_block *block;
- struct folio_queue *fq;
+ struct bvecq *bq;
unsigned int fpos;
- int fq_slot;
+ int bq_slot;
unsigned int loop_check;
u8 nr_slots;
u8 bucket;
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 2de6b8621e11..9ca2c780f469 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -700,124 +700,11 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c
return ret;
}
-/*
- * Write some of a pending folio data back to the server and/or the cache.
- */
-static int netfs_write_folio_single(struct netfs_io_request *wreq,
- struct folio *folio)
-{
- struct netfs_io_stream *upload = &wreq->io_streams[0];
- struct netfs_io_stream *cache = &wreq->io_streams[1];
- struct netfs_io_stream *stream;
- size_t iter_off = 0;
- size_t fsize = folio_size(folio), flen;
- loff_t fpos = folio_pos(folio);
- bool to_eof = false;
- bool no_debug = false;
-
- _enter("");
-
- flen = folio_size(folio);
- if (flen > wreq->i_size - fpos) {
- flen = wreq->i_size - fpos;
- folio_zero_segment(folio, flen, fsize);
- to_eof = true;
- } else if (flen == wreq->i_size - fpos) {
- to_eof = true;
- }
-
- _debug("folio %zx/%zx", flen, fsize);
-
- if (!upload->avail && !cache->avail) {
- trace_netfs_folio(folio, netfs_folio_trace_cancel_store);
- return 0;
- }
-
- if (!upload->construct)
- trace_netfs_folio(folio, netfs_folio_trace_store);
- else
- trace_netfs_folio(folio, netfs_folio_trace_store_plus);
-
- /* Attach the folio to the rolling buffer. */
- folio_get(folio);
- rolling_buffer_append(&wreq->buffer, folio, NETFS_ROLLBUF_PUT_MARK);
-
- /* Move the submission point forward to allow for write-streaming data
- * not starting at the front of the page. We don't do write-streaming
- * with the cache as the cache requires DIO alignment.
- *
- * Also skip uploading for data that's been read and just needs copying
- * to the cache.
- */
- for (int s = 0; s < NR_IO_STREAMS; s++) {
- stream = &wreq->io_streams[s];
- stream->submit_off = 0;
- stream->submit_len = flen;
- if (!stream->avail) {
- stream->submit_off = UINT_MAX;
- stream->submit_len = 0;
- }
- }
-
- /* Attach the folio to one or more subrequests. For a big folio, we
- * could end up with thousands of subrequests if the wsize is small -
- * but we might need to wait during the creation of subrequests for
- * network resources (eg. SMB credits).
- */
- for (;;) {
- ssize_t part;
- size_t lowest_off = ULONG_MAX;
- int choose_s = -1;
-
- /* Always add to the lowest-submitted stream first. */
- for (int s = 0; s < NR_IO_STREAMS; s++) {
- stream = &wreq->io_streams[s];
- if (stream->submit_len > 0 &&
- stream->submit_off < lowest_off) {
- lowest_off = stream->submit_off;
- choose_s = s;
- }
- }
-
- if (choose_s < 0)
- break;
- stream = &wreq->io_streams[choose_s];
-
- /* Advance the iterator(s). */
- if (stream->submit_off > iter_off) {
- rolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off);
- iter_off = stream->submit_off;
- }
-
- atomic64_set(&wreq->issued_to, fpos + stream->submit_off);
- stream->submit_extendable_to = fsize - stream->submit_off;
- part = netfs_advance_write(wreq, stream, fpos + stream->submit_off,
- stream->submit_len, to_eof);
- stream->submit_off += part;
- if (part > stream->submit_len)
- stream->submit_len = 0;
- else
- stream->submit_len -= part;
- if (part > 0)
- no_debug = true;
- }
-
- wreq->buffer.iter.iov_offset = 0;
- if (fsize > iter_off)
- rolling_buffer_advance(&wreq->buffer, fsize - iter_off);
- atomic64_set(&wreq->issued_to, fpos + fsize);
-
- if (!no_debug)
- kdebug("R=%x: No submit", wreq->debug_id);
- _leave(" = 0");
- return 0;
-}
-
/**
* netfs_writeback_single - Write back a monolithic payload
* @mapping: The mapping to write from
* @wbc: Hints from the VM
- * @iter: Data to write, must be ITER_FOLIOQ.
+ * @iter: Data to write.
*
* Write a monolithic, non-pagecache object back to the server and/or
* the cache.
@@ -828,13 +715,8 @@ int netfs_writeback_single(struct address_space *mapping,
{
struct netfs_io_request *wreq;
struct netfs_inode *ictx = netfs_inode(mapping->host);
- struct folio_queue *fq;
- size_t size = iov_iter_count(iter);
int ret;
- if (WARN_ON_ONCE(!iov_iter_is_folioq(iter)))
- return -EIO;
-
if (!mutex_trylock(&ictx->wb_lock)) {
if (wbc->sync_mode == WB_SYNC_NONE) {
netfs_stat(&netfs_n_wb_lock_skip);
@@ -850,6 +732,9 @@ int netfs_writeback_single(struct address_space *mapping,
goto couldnt_start;
}
+ wreq->buffer.iter = *iter;
+ wreq->len = iov_iter_count(iter);
+
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
trace_netfs_write(wreq, netfs_write_trace_writeback_single);
netfs_stat(&netfs_n_wh_writepages);
@@ -857,31 +742,34 @@ int netfs_writeback_single(struct address_space *mapping,
if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
wreq->netfs_ops->begin_writeback(wreq);
- for (fq = (struct folio_queue *)iter->folioq; fq; fq = fq->next) {
- for (int slot = 0; slot < folioq_count(fq); slot++) {
- struct folio *folio = folioq_folio(fq, slot);
- size_t part = umin(folioq_folio_size(fq, slot), size);
+ for (int s = 0; s < NR_IO_STREAMS; s++) {
+ struct netfs_io_subrequest *subreq;
+ struct netfs_io_stream *stream = &wreq->io_streams[s];
+
+ if (!stream->avail)
+ continue;
- _debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to));
+ netfs_prepare_write(wreq, stream, 0);
- ret = netfs_write_folio_single(wreq, folio);
- if (ret < 0)
- goto stop;
- size -= part;
- if (size <= 0)
- goto stop;
- }
+ subreq = stream->construct;
+ subreq->len = wreq->len;
+ stream->submit_len = subreq->len;
+ stream->submit_extendable_to = round_up(wreq->len, PAGE_SIZE);
+
+ netfs_issue_write(wreq, stream);
}
-stop:
- for (int s = 0; s < NR_IO_STREAMS; s++)
- netfs_issue_write(wreq, &wreq->io_streams[s]);
smp_wmb(); /* Write lists before ALL_QUEUED. */
set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
mutex_unlock(&ictx->wb_lock);
netfs_wake_collector(wreq);
+ /* TODO: Might want to be async here if WB_SYNC_NONE, but then need to
+ * wait before modifying.
+ */
+ ret = netfs_wait_for_write(wreq);
+
netfs_put_request(wreq, netfs_rreq_trace_put_return);
_leave(" = %d", ret);
return ret;
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 16/26] cifs: Use a bvecq for buffering instead of a folioq
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (14 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 15/26] afs: Use a bvecq to hold dir content rather than folioq David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 17/26] cifs: Support ITER_BVECQ in smb_extract_iter_to_rdma() David Howells
` (9 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Use a bvecq for internal buffering for crypto purposes instead of a folioq
so that the latter can be phased out.
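
For instance, the encryption path now obtains its bounce buffer along
these lines (a condensed sketch of the smb3_init_transform_rq() change
below; only the allocation and iterator setup are shown):

	struct bvecq *buffer;

	/* Allocate a bounce buffer big enough for the payload and point
	 * a source iterator at it for the crypto code to read from.
	 */
	buffer = bvecq_alloc_buffer(size, 0, GFP_NOFS);
	if (!buffer)
		return -ENOMEM;
	new->rq_buffer = buffer;
	iov_iter_bvec_queue(&new->rq_iter, ITER_SOURCE, buffer, 0, 0, size);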
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/smb/client/cifsglob.h | 2 +-
fs/smb/client/smb2ops.c | 70 +++++++++++++++++++---------------------
2 files changed, 34 insertions(+), 38 deletions(-)
diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 6f9b6c72962b..8f3c16b57a1f 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -290,7 +290,7 @@ struct smb_rqst {
struct kvec *rq_iov; /* array of kvecs */
unsigned int rq_nvec; /* number of kvecs in array */
struct iov_iter rq_iter; /* Data iterator */
- struct folio_queue *rq_buffer; /* Buffer for encryption */
+ struct bvecq *rq_buffer; /* Buffer for encryption */
};
struct mid_q_entry;
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index 7f2d3459cbf9..173acca17af7 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -4517,19 +4517,17 @@ crypt_message(struct TCP_Server_Info *server, int num_rqst,
}
/*
- * Copy data from an iterator to the folios in a folio queue buffer.
+ * Copy data from an iterator to the pages in a bvec queue buffer.
*/
-static bool cifs_copy_iter_to_folioq(struct iov_iter *iter, size_t size,
- struct folio_queue *buffer)
+static bool cifs_copy_iter_to_bvecq(struct iov_iter *iter, size_t size,
+ struct bvecq *buffer)
{
for (; buffer; buffer = buffer->next) {
- for (int s = 0; s < folioq_count(buffer); s++) {
- struct folio *folio = folioq_folio(buffer, s);
- size_t part = folioq_folio_size(buffer, s);
+ for (int s = 0; s < buffer->nr_slots; s++) {
+ struct bio_vec *bv = &buffer->bv[s];
+ size_t part = umin(bv->bv_len, size);
- part = umin(part, size);
-
- if (copy_folio_from_iter(folio, 0, part, iter) != part)
+ if (copy_page_from_iter(bv->bv_page, 0, part, iter) != part)
return false;
size -= part;
}
@@ -4541,7 +4539,7 @@ void
smb3_free_compound_rqst(int num_rqst, struct smb_rqst *rqst)
{
for (int i = 0; i < num_rqst; i++)
- netfs_free_folioq_buffer(rqst[i].rq_buffer);
+ bvecq_put(rqst[i].rq_buffer);
}
/*
@@ -4568,7 +4566,7 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst,
for (int i = 1; i < num_rqst; i++) {
struct smb_rqst *old = &old_rq[i - 1];
struct smb_rqst *new = &new_rq[i];
- struct folio_queue *buffer = NULL;
+ struct bvecq *buffer = NULL;
size_t size = iov_iter_count(&old->rq_iter);
orig_len += smb_rqst_len(server, old);
@@ -4576,17 +4574,16 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst,
new->rq_nvec = old->rq_nvec;
if (size > 0) {
- size_t cur_size = 0;
- rc = netfs_alloc_folioq_buffer(NULL, &buffer, &cur_size,
- size, GFP_NOFS);
- if (rc < 0)
+ rc = -ENOMEM;
+ buffer = bvecq_alloc_buffer(size, 0, GFP_NOFS);
+ if (!buffer)
goto err_free;
new->rq_buffer = buffer;
- iov_iter_folio_queue(&new->rq_iter, ITER_SOURCE,
- buffer, 0, 0, size);
+ iov_iter_bvec_queue(&new->rq_iter, ITER_SOURCE,
+ buffer, 0, 0, size);
- if (!cifs_copy_iter_to_folioq(&old->rq_iter, size, buffer)) {
+ if (!cifs_copy_iter_to_bvecq(&old->rq_iter, size, buffer)) {
rc = smb_EIO1(smb_eio_trace_tx_copy_iter_to_buf, size);
goto err_free;
}
@@ -4676,16 +4673,15 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
}
static int
-cifs_copy_folioq_to_iter(struct folio_queue *folioq, size_t data_size,
- size_t skip, struct iov_iter *iter)
+cifs_copy_bvecq_to_iter(struct bvecq *bq, size_t data_size,
+ size_t skip, struct iov_iter *iter)
{
- for (; folioq; folioq = folioq->next) {
- for (int s = 0; s < folioq_count(folioq); s++) {
- struct folio *folio = folioq_folio(folioq, s);
- size_t fsize = folio_size(folio);
- size_t n, len = umin(fsize - skip, data_size);
+ for (; bq; bq = bq->next) {
+ for (int s = 0; s < bq->nr_slots; s++) {
+ struct bio_vec *bv = &bq->bv[s];
+ size_t n, len = umin(bv->bv_len - skip, data_size);
- n = copy_folio_to_iter(folio, skip, len, iter);
+ n = copy_page_to_iter(bv->bv_page, bv->bv_offset + skip, len, iter);
if (n != len) {
cifs_dbg(VFS, "%s: something went wrong\n", __func__);
return smb_EIO2(smb_eio_trace_rx_copy_to_iter,
@@ -4701,7 +4697,7 @@ cifs_copy_folioq_to_iter(struct folio_queue *folioq, size_t data_size,
static int
handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
- char *buf, unsigned int buf_len, struct folio_queue *buffer,
+ char *buf, unsigned int buf_len, struct bvecq *buffer,
unsigned int buffer_len, bool is_offloaded)
{
unsigned int data_offset;
@@ -4810,8 +4806,8 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
}
/* Copy the data to the output I/O iterator. */
- rdata->result = cifs_copy_folioq_to_iter(buffer, buffer_len,
- cur_off, &rdata->subreq.io_iter);
+ rdata->result = cifs_copy_bvecq_to_iter(buffer, buffer_len,
+ cur_off, &rdata->subreq.io_iter);
if (rdata->result != 0) {
if (is_offloaded)
mid->mid_state = MID_RESPONSE_MALFORMED;
@@ -4849,7 +4845,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
struct smb2_decrypt_work {
struct work_struct decrypt;
struct TCP_Server_Info *server;
- struct folio_queue *buffer;
+ struct bvecq *buffer;
char *buf;
unsigned int len;
};
@@ -4863,7 +4859,7 @@ static void smb2_decrypt_offload(struct work_struct *work)
struct mid_q_entry *mid;
struct iov_iter iter;
- iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, dw->len);
+ iov_iter_bvec_queue(&iter, ITER_DEST, dw->buffer, 0, 0, dw->len);
rc = decrypt_raw_data(dw->server, dw->buf, dw->server->vals->read_rsp_size,
&iter, true);
if (rc) {
@@ -4912,7 +4908,7 @@ static void smb2_decrypt_offload(struct work_struct *work)
}
free_pages:
- netfs_free_folioq_buffer(dw->buffer);
+ bvecq_put(dw->buffer);
cifs_small_buf_release(dw->buf);
kfree(dw);
}
@@ -4950,12 +4946,12 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid,
dw->len = len;
len = round_up(dw->len, PAGE_SIZE);
- size_t cur_size = 0;
- rc = netfs_alloc_folioq_buffer(NULL, &dw->buffer, &cur_size, len, GFP_NOFS);
- if (rc < 0)
+ rc = -ENOMEM;
+ dw->buffer = bvecq_alloc_buffer(len, 0, GFP_NOFS);
+ if (!dw->buffer)
goto discard_data;
- iov_iter_folio_queue(&iter, ITER_DEST, dw->buffer, 0, 0, len);
+ iov_iter_bvec_queue(&iter, ITER_DEST, dw->buffer, 0, 0, len);
/* Read the data into the buffer and clear excess bufferage. */
rc = cifs_read_iter_from_socket(server, &iter, dw->len);
@@ -5013,7 +5009,7 @@ receive_encrypted_read(struct TCP_Server_Info *server, struct mid_q_entry **mid,
}
free_pages:
- netfs_free_folioq_buffer(dw->buffer);
+ bvecq_put(dw->buffer);
free_dw:
kfree(dw);
return rc;
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 17/26] cifs: Support ITER_BVECQ in smb_extract_iter_to_rdma()
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (15 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 16/26] cifs: Use a bvecq for buffering instead of a folioq David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 18/26] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer David Howells
` (8 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, Shyam Prasad N,
Tom Talpey
Add support for ITER_BVECQ to smb_extract_iter_to_rdma().
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: Shyam Prasad N <sprasad@microsoft.com>
cc: Tom Talpey <tom@talpey.com>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/smb/client/smbdirect.c | 60 +++++++++++++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index c79304012b08..f8a6be83db98 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -3298,6 +3298,63 @@ static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter,
return ret;
}
+/*
+ * Extract memory fragments from a BVECQ-class iterator and add them to an RDMA
+ * list.  The pages are not pinned.
+ */
+static ssize_t smb_extract_bvecq_to_rdma(struct iov_iter *iter,
+ struct smb_extract_to_rdma *rdma,
+ ssize_t maxsize)
+{
+ const struct bvecq *bq = iter->bvecq;
+ unsigned int slot = iter->bvecq_slot;
+ ssize_t ret = 0;
+ size_t offset = iter->iov_offset;
+
+ if (slot >= bq->nr_slots) {
+ bq = bq->next;
+ if (WARN_ON_ONCE(!bq))
+ return -EIO;
+ slot = 0;
+ }
+
+ do {
+ struct bio_vec *bv = &bq->bv[slot];
+ struct page *page = bv->bv_page;
+ size_t bsize = bv->bv_len;
+
+ if (offset < bsize) {
+ size_t part = umin(maxsize, bsize - offset);
+
+ if (!smb_set_sge(rdma, page, bv->bv_offset + offset, part))
+ return -EIO;
+
+ offset += part;
+ ret += part;
+ maxsize -= part;
+ }
+
+ if (offset >= bsize) {
+ offset = 0;
+ slot++;
+ if (slot >= bq->nr_slots) {
+ if (!bq->next) {
+ WARN_ON_ONCE(ret < iter->count);
+ break;
+ }
+ bq = bq->next;
+ slot = 0;
+ }
+ }
+ } while (rdma->nr_sge < rdma->max_sge && maxsize > 0);
+
+ iter->bvecq = bq;
+ iter->bvecq_slot = slot;
+ iter->iov_offset = offset;
+ iter->count -= ret;
+ return ret;
+}
+
/*
* Extract page fragments from up to the given amount of the source iterator
* and build up an RDMA list that refers to all of those bits. The RDMA list
@@ -3325,6 +3382,9 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len,
case ITER_FOLIOQ:
ret = smb_extract_folioq_to_rdma(iter, rdma, len);
break;
+ case ITER_BVECQ:
+ ret = smb_extract_bvecq_to_rdma(iter, rdma, len);
+ break;
default:
WARN_ON_ONCE(1);
return -EIO;
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 18/26] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (16 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 17/26] cifs: Support ITER_BVECQ in smb_extract_iter_to_rdma() David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 19/26] cifs: Remove support for ITER_KVEC/BVEC/FOLIOQ from smb_extract_iter_to_rdma() David Howells
` (7 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, Shyam Prasad N,
Tom Talpey
Switch netfslib to using bvecq, a segmented bio_vec[] queue, instead of the
folio_queue and rolling_buffer constructs, to keep track of the regions of
memory it is performing I/O upon. Each bvecq struct in the chain is marked
with the starting file position of that sequence so that discontiguities
can be handled (the contents of each individual bvecq must be contiguous).
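
To illustrate, a buffer covering two file regions with a hole between
them might be chained as (hypothetical offsets):

	bq[0]: fpos=0x0000, bio_vecs covering file bytes 0x0000-0x3fff
	bq[1]: fpos=0x8000, bio_vecs covering file bytes 0x8000-0x9fff,
	       with ->discontig set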
For unbuffered/direct I/O, the iterator is extracted into the queue up
front.  For buffered I/O, the folios are added to the queue as the
operation proceeds, much as they are with folio_queues now.  There is one
important change for buffered writes: only the relevant part of each
folio is included; for writes to the cache, this is expanded to the
cache's DIO alignment in a copy of the bvecq segment (it is known that
each bio_vec corresponds to part of a folio in this case).
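
A sketch of that expansion - the actual rounding is done by the cache
backend, and cache->bsize is assumed to be a power of two:

	loff_t start;
	size_t len, pre;

	/* Round the copied slice out to the cache's DIO granularity. */
	pre   = subreq->start & (cache->bsize - 1);
	start = subreq->start - pre;
	len   = round_up(subreq->len + pre, cache->bsize);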
The bvecq structs are marked with information as to how the regions
contained therein should be disposed of (unlock-only, free, unpin).
When setting up a subrequest, netfslib will furnish it with a slice of
the main buffer queue, expressed as a pointer to the starting bvecq plus
a slot and an offset; for the moment, an ITER_BVECQ iterator covering the
slice is also set in subreq->io_iter.
Notes on the implementation:
(1) This patch uses the concept of a 'bvecq position', which is a tuple of
{ bvecq, slot, offset }. This is lighter weight than using a full
iov_iter, though that would also suffice. If not NULL, the position
also holds a reference on the bvecq it is pointing to. This is
probably overkill as only the hindmost position (that of collection)
needs to hold a reference.
(2) There are three positions on the netfs_io_request struct. Not all are
used by every request type.
Firstly, there's ->load_cursor, which is used by buffered read and
write to point to the next slot to have a folio inserted into it
(either loaded from the readahead_control or from writeback_iter()).
Secondly, there's ->dispatch_cursor, which is used to provide the
position in the buffer from which we start dispatching a subrequest.
Thirdly, there's the ->collect_cursor, which is used by the collection
routines to point to the next memory region to be cleaned up.
(3) There are two positions on the netfs_io_subrequest struct.
Firstly, there's ->dispatch_pos, which indicates the position from
which a subrequest's buffer begins. This is used as the base of the
position from which to retry (advanced by ->transfer).
Secondly, there's ->content, which is normally the same as
->dispatch_pos; but if the bvecq chain got duplicated or the content
got copied, then this will point to that copy, and that is what will
be disposed of on retry.
(4) Maintenance of the position structs is done with helper functions,
such as bvecq_pos_set(), to hide the refcounting.
(5) When sending a write to the cache, the bvecq will be duplicated and
the ends rounded up/down to the backing file's DIO block alignment.
(6) bvecq_slice() is used to select a slice of the source buffer and
assign it to a subrequest (see the sketch after this list).  The source
buffer position is advanced.
(7) netfs_extract_iter() is used by unbuffered/direct I/O API functions to
decant a chunk of the iov_iter supplied by the VFS into a bvecq chain
- and to label the bvecqs with appropriate disposal information
(e.g. unpin, free, nothing).
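
As an illustrative sketch of the dispatch flow from notes (2), (3) and
(6) above - netfs_prepare_read_iterator() in this patch is the canonical
version, and rsize stands for the precomputed length cap:

	/* Remember where this subrequest's data starts so that a retry
	 * can rewind to it.
	 */
	bvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);

	/* Carve the next slice off the main buffer, advancing the
	 * dispatch cursor past it.
	 */
	subreq->len = bvecq_slice(&rreq->dispatch_cursor, rsize,
				  stream->sreq_max_segs, &subreq->nr_segs);

	/* For now, also point an ITER_BVECQ iterator at the slice. */
	iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST,
			    subreq->dispatch_pos.bvecq,
			    subreq->dispatch_pos.slot,
			    subreq->dispatch_pos.offset, subreq->len);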
There are further options that can be explored in the future:
(1) Allow the provision of a duplicated bvecq chain for just that region
so that the filesystem can add bits on either end (such as adding
protocol headers and trailers and gluing several things together into
a compound operation).
(2) If a filesystem supports vectored/sparse read and write ops, it can be
given a chain with discontiguities in it to perform in a single op
(Ceph, for example, can do this).
(3) Because each bvecq notes the start file position of the regions
contained therein, there's no need to translate the info in the
bio_vec into folio pointers in order to unlock the page after I/O.
Instead, the inode's pagecache can be iterated over and the xarray
marks cleared en masse.
(4) Make MSG_SPLICE_PAGES handling read the disposal info in the bvecq and
use that to indicate how it should get rid of the stuff it pasted into
a sk_buff.
(5) If a bounce buffer is needed (encryption, for example), the bounce
buffer can be held in a bvecq and sliced up instead of the main buffer
queue.
(6) Get rid of subreq->io_iter and move the iov_iter stuff down into the
filesystem. The I/O iterators are normally only needed transitorily,
and the one currently in netfs_io_subrequest is unnecessary most of
the time.
folio_queue and rolling_buffer will be removed in a follow-up patch.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: Shyam Prasad N <sprasad@microsoft.com>
cc: Tom Talpey <tom@talpey.com>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/cachefiles/io.c | 12 ---
fs/netfs/Makefile | 1 -
fs/netfs/buffered_read.c | 115 ++++++++++++----------
fs/netfs/direct_read.c | 73 +++++---------
fs/netfs/direct_write.c | 86 +++++++++--------
fs/netfs/internal.h | 10 +-
fs/netfs/iterator.c | 2 +
fs/netfs/misc.c | 20 +---
fs/netfs/objects.c | 16 +---
fs/netfs/read_collect.c | 83 +++++++++-------
fs/netfs/read_pgpriv2.c | 68 +++++++++----
fs/netfs/read_retry.c | 80 +++++++++-------
fs/netfs/read_single.c | 12 ++-
fs/netfs/stats.c | 4 +-
fs/netfs/write_collect.c | 40 ++++----
fs/netfs/write_issue.c | 180 ++++++++++++++++++++++++++---------
fs/netfs/write_retry.c | 45 +++++----
include/linux/netfs.h | 24 ++---
include/trace/events/netfs.h | 46 ++++-----
19 files changed, 520 insertions(+), 397 deletions(-)
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index b5ff75697b3e..2af55a75b511 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -659,7 +659,6 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
struct netfs_cache_resources *cres = &wreq->cache_resources;
struct cachefiles_object *object = cachefiles_cres_object(cres);
struct cachefiles_cache *cache = object->volume->cache;
- struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
const struct cred *saved_cred;
size_t off, pre, post, len = subreq->len;
loff_t start = subreq->start;
@@ -684,17 +683,6 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
}
/* We also need to end on the cache granularity boundary */
- if (start + len == wreq->i_size) {
- size_t part = len & (cache->bsize - 1);
- size_t need = cache->bsize - part;
-
- if (part && stream->submit_extendable_to >= need) {
- len += need;
- subreq->len += need;
- subreq->io_iter.count += need;
- }
- }
-
post = len & (cache->bsize - 1);
if (post) {
len -= post;
diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index e1f12ecb5abf..0621e6870cbd 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -15,7 +15,6 @@ netfs-y := \
read_pgpriv2.o \
read_retry.o \
read_single.o \
- rolling_buffer.o \
write_collect.o \
write_issue.o \
write_retry.o
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index abdc990faaa2..2cfd33abfb80 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -112,26 +112,21 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in
static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
+ struct netfs_io_stream *stream = &rreq->io_streams[0];
+ ssize_t extracted;
size_t rsize = subreq->len;
if (subreq->source == NETFS_DOWNLOAD_FROM_SERVER)
- rsize = umin(rsize, rreq->io_streams[0].sreq_max_len);
-
- subreq->len = rsize;
- if (unlikely(rreq->io_streams[0].sreq_max_segs)) {
- size_t limit = netfs_limit_iter(&rreq->buffer.iter, 0, rsize,
- rreq->io_streams[0].sreq_max_segs);
-
- if (limit < rsize) {
- subreq->len = limit;
- trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
- }
+ rsize = umin(rsize, stream->sreq_max_len);
+
+ bvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);
+	extracted = bvecq_slice(&rreq->dispatch_cursor, rsize,
+				stream->sreq_max_segs, &subreq->nr_segs);
+	if (extracted < subreq->len) {
+		subreq->len = extracted;
+		trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+	}
- subreq->io_iter = rreq->buffer.iter;
-
- iov_iter_truncate(&subreq->io_iter, subreq->len);
- rolling_buffer_advance(&rreq->buffer, subreq->len);
return subreq->len;
}
@@ -195,6 +190,10 @@ static void netfs_queue_read(struct netfs_io_request *rreq,
static void netfs_issue_read(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq)
{
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+ iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
switch (subreq->source) {
case NETFS_DOWNLOAD_FROM_SERVER:
rreq->netfs_ops->issue_read(subreq);
@@ -203,7 +202,8 @@ static void netfs_issue_read(struct netfs_io_request *rreq,
netfs_read_cache_to_pagecache(rreq, subreq);
break;
default:
- __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ bvecq_zero(&rreq->dispatch_cursor, subreq->len);
subreq->error = 0;
iov_iter_zero(subreq->len, &subreq->io_iter);
subreq->transferred = subreq->len;
@@ -233,6 +233,11 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
ssize_t size = rreq->len;
int ret = 0;
+ _enter("R=%08x", rreq->debug_id);
+
+ bvecq_pos_set(&rreq->dispatch_cursor, &rreq->load_cursor);
+ bvecq_pos_set(&rreq->collect_cursor, &rreq->dispatch_cursor);
+
do {
int (*prepare_read)(struct netfs_io_subrequest *subreq) = NULL;
struct netfs_io_subrequest *subreq;
@@ -381,6 +386,9 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
/* Defer error return as we may need to wait for outstanding I/O. */
cmpxchg(&rreq->error, 0, ret);
+
+ bvecq_pos_unset(&rreq->load_cursor);
+ bvecq_pos_unset(&rreq->dispatch_cursor);
}
/**
@@ -428,7 +436,7 @@ void netfs_readahead(struct readahead_control *ractl)
* acquires a ref on each folio that we will need to release later -
* but we don't want to do that until after we've started the I/O.
*/
- added = rolling_buffer_bulk_load_from_ra(&rreq->buffer, ractl, rreq->debug_id);
+ added = bvecq_load_from_ra(&rreq->load_cursor, ractl);
if (added < 0) {
ret = added;
goto cleanup_free;
@@ -437,7 +445,7 @@ void netfs_readahead(struct readahead_control *ractl)
rreq->submitted = rreq->start + added;
rreq->cleaned_to = rreq->start;
- rreq->front_folio_order = folio_order(rreq->buffer.tail->vec.folios[0]);
+ rreq->front_folio_order = get_order(rreq->load_cursor.bvecq->bv[0].bv_len);
netfs_read_to_pagecache(rreq);
netfs_maybe_bulk_drop_ra_refs(rreq);
@@ -449,20 +457,19 @@ void netfs_readahead(struct readahead_control *ractl)
EXPORT_SYMBOL(netfs_readahead);
/*
- * Create a rolling buffer with a single occupying folio.
+ * Create a buffer queue with a single occupying folio.
*/
-static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio,
- unsigned int rollbuf_flags)
+static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio)
{
- ssize_t added;
+ struct bvecq *bq;
+ size_t fsize = folio_size(folio);
- if (rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST) < 0)
+ if (bvecq_buffer_init(&rreq->load_cursor, GFP_KERNEL) < 0)
return -ENOMEM;
- added = rolling_buffer_append(&rreq->buffer, folio, rollbuf_flags);
- if (added < 0)
- return added;
- rreq->submitted = rreq->start + added;
+ bq = rreq->load_cursor.bvecq;
+ bvec_set_folio(&bq->bv[bq->nr_slots++], folio, fsize, 0);
+ rreq->submitted = rreq->start + fsize;
return 0;
}
@@ -475,11 +482,11 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
struct address_space *mapping = folio->mapping;
struct netfs_folio *finfo = netfs_folio_info(folio);
struct netfs_inode *ctx = netfs_inode(mapping->host);
- struct folio *sink = NULL;
- struct bio_vec *bvec;
+ struct bvecq *bq = NULL;
+ struct page *sink = NULL;
unsigned int from = finfo->dirty_offset;
unsigned int to = from + finfo->dirty_len;
- unsigned int off = 0, i = 0;
+ unsigned int off = 0;
size_t flen = folio_size(folio);
size_t nr_bvec = flen / PAGE_SIZE + 2;
size_t part;
@@ -504,38 +511,45 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
* end get copied to, but the middle is discarded.
*/
ret = -ENOMEM;
- bvec = kmalloc_objs(*bvec, nr_bvec);
- if (!bvec)
+ bq = bvecq_alloc_one(nr_bvec, GFP_KERNEL);
+ if (!bq)
goto discard;
+ rreq->load_cursor.bvecq = bq;
- sink = folio_alloc(GFP_KERNEL, 0);
- if (!sink) {
- kfree(bvec);
+ sink = alloc_page(GFP_KERNEL);
+ if (!sink)
goto discard;
- }
trace_netfs_folio(folio, netfs_folio_trace_read_gaps);
- rreq->direct_bv = bvec;
- rreq->direct_bv_count = nr_bvec;
+ for (struct bvecq *p = bq; p; p = p->next)
+ p->free = true;
+
if (from > 0) {
- bvec_set_folio(&bvec[i++], folio, from, 0);
+ folio_get(folio);
+ bvec_set_folio(&bq->bv[bq->nr_segs++], folio, from, 0);
off = from;
}
while (off < to) {
- part = min_t(size_t, to - off, PAGE_SIZE);
- bvec_set_folio(&bvec[i++], sink, part, 0);
+ if (bvecq_is_full(bq))
+ bq = bq->next;
+ part = umin(to - off, PAGE_SIZE);
+ get_page(sink);
+ bvec_set_page(&bq->bv[bq->nr_segs++], sink, part, 0);
off += part;
}
- if (to < flen)
- bvec_set_folio(&bvec[i++], folio, flen - to, to);
- iov_iter_bvec(&rreq->buffer.iter, ITER_DEST, bvec, i, rreq->len);
+ if (to < flen) {
+ if (bvecq_is_full(bq))
+ bq = bq->next;
+ folio_get(folio);
+ bvec_set_folio(&bq->bv[bq->nr_segs++], folio, flen - to, to);
+ }
+
rreq->submitted = rreq->start + flen;
netfs_read_to_pagecache(rreq);
- if (sink)
- folio_put(sink);
+ put_page(sink);
ret = netfs_wait_for_read(rreq);
if (ret >= 0) {
@@ -547,6 +561,8 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
return ret < 0 ? ret : 0;
discard:
+ if (sink)
+ put_page(sink);
netfs_put_failed_request(rreq);
alloc_error:
folio_unlock(folio);
@@ -597,7 +613,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);
/* Set up the output buffer */
- ret = netfs_create_singular_buffer(rreq, folio, 0);
+ ret = netfs_create_singular_buffer(rreq, folio);
if (ret < 0)
goto discard;
@@ -754,7 +770,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
trace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin);
/* Set up the output buffer */
- ret = netfs_create_singular_buffer(rreq, folio, 0);
+ ret = netfs_create_singular_buffer(rreq, folio);
if (ret < 0)
goto error_put;
@@ -819,9 +835,10 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
trace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write);
/* Set up the output buffer */
- ret = netfs_create_singular_buffer(rreq, folio, NETFS_ROLLBUF_PAGECACHE_MARK);
+ ret = netfs_create_singular_buffer(rreq, folio);
if (ret < 0)
goto error_put;
+ rreq->load_cursor.bvecq->free = true;
netfs_read_to_pagecache(rreq);
ret = netfs_wait_for_read(rreq);
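
To make the cursor handling in the hunks above easier to review, here's a
minimal userspace model of the bvecq_pos type and the set/unset pairing. The
field names and the refcounting discipline are inferred from how the helpers
are used; this is a sketch, not the implementation from the bvecq patch:

	#include <assert.h>
	#include <stddef.h>

	/* Cut-down stand-ins for the kernel types (model only). */
	struct bvecq {
		struct bvecq *next;
		int ref;			/* refcount_t in the kernel */
	};

	struct bvecq_pos {
		struct bvecq *bvecq;		/* Segment the cursor is in */
		unsigned short slot;		/* Index into bvecq->bv[] */
		unsigned int offset;		/* Byte offset into that bio_vec */
	};

	/* Point one cursor at the same place as another, taking a ref on
	 * the segment so that it can't be freed under us.
	 */
	static void bvecq_pos_set(struct bvecq_pos *to, const struct bvecq_pos *from)
	{
		*to = *from;
		if (to->bvecq)
			to->bvecq->ref++;
	}

	/* Drop the cursor's ref and clear it.  Every _set is paired with
	 * an _unset before the owning request is freed.
	 */
	static void bvecq_pos_unset(struct bvecq_pos *pos)
	{
		if (pos->bvecq) {
			pos->bvecq->ref--;	/* kernel: may free the chain */
			pos->bvecq = NULL;
		}
		pos->slot = 0;
		pos->offset = 0;
	}

	int main(void)
	{
		struct bvecq seg = { .ref = 1 };
		struct bvecq_pos load = { .bvecq = &seg }, dispatch = { 0 };

		bvecq_pos_set(&dispatch, &load);
		assert(seg.ref == 2);
		bvecq_pos_unset(&dispatch);
		assert(seg.ref == 1);
		return 0;
	}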
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index f72e6da88cca..05d09ba3d0d0 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -16,31 +16,6 @@
#include <linux/netfs.h>
#include "internal.h"
-static void netfs_prepare_dio_read_iterator(struct netfs_io_subrequest *subreq)
-{
- struct netfs_io_request *rreq = subreq->rreq;
- size_t rsize;
-
- rsize = umin(subreq->len, rreq->io_streams[0].sreq_max_len);
- subreq->len = rsize;
-
- if (unlikely(rreq->io_streams[0].sreq_max_segs)) {
- size_t limit = netfs_limit_iter(&rreq->buffer.iter, 0, rsize,
- rreq->io_streams[0].sreq_max_segs);
-
- if (limit < rsize) {
- subreq->len = limit;
- trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
- }
- }
-
- trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
-
- subreq->io_iter = rreq->buffer.iter;
- iov_iter_truncate(&subreq->io_iter, subreq->len);
- iov_iter_advance(&rreq->buffer.iter, subreq->len);
-}
-
/*
* Perform a read to a buffer from the server, slicing up the region to be read
* according to the network rsize.
@@ -52,9 +27,10 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
ssize_t size = rreq->len;
int ret = 0;
+ bvecq_pos_set(&rreq->dispatch_cursor, &rreq->load_cursor);
+
do {
struct netfs_io_subrequest *subreq;
- ssize_t slice;
subreq = netfs_alloc_subrequest(rreq);
if (!subreq) {
@@ -89,16 +65,24 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
}
}
- netfs_prepare_dio_read_iterator(subreq);
- slice = subreq->len;
- size -= slice;
- start += slice;
- rreq->submitted += slice;
+ bvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);
+ bvecq_pos_set(&subreq->content, &rreq->dispatch_cursor);
+ subreq->len = bvecq_slice(&rreq->dispatch_cursor,
+ umin(size, stream->sreq_max_len),
+ stream->sreq_max_segs,
+ &subreq->nr_segs);
+
+ size -= subreq->len;
+ start += subreq->len;
+ rreq->submitted += subreq->len;
if (size <= 0) {
smp_wmb(); /* Write lists before ALL_QUEUED. */
set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
}
+ iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
rreq->netfs_ops->issue_read(subreq);
if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
@@ -114,6 +98,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
netfs_wake_collector(rreq);
}
+ bvecq_pos_unset(&rreq->dispatch_cursor);
return ret;
}
@@ -197,25 +182,15 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
* buffer for ourselves as the caller's iterator will be trashed when
* we return.
*
- * In such a case, extract an iterator to represent as much of the the
- * output buffer as we can manage. Note that the extraction might not
- * be able to allocate a sufficiently large bvec array and may shorten
- * the request.
+ * Extract a buffer queue to represent as much of the output buffer as
+ * we can manage. The fragments are extracted into a bvecq chain with
+ * sufficient segments allocated to hold all the data, though the
+ * request may end up shortened if ENOMEM is encountered.
*/
- if (user_backed_iter(iter)) {
- ret = netfs_extract_user_iter(iter, rreq->len, &rreq->buffer.iter, 0);
- if (ret < 0)
- goto error_put;
- rreq->direct_bv = (struct bio_vec *)rreq->buffer.iter.bvec;
- rreq->direct_bv_count = ret;
- rreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
- rreq->len = iov_iter_count(&rreq->buffer.iter);
- } else {
- rreq->buffer.iter = *iter;
- rreq->len = orig_count;
- rreq->direct_bv_unpin = false;
- iov_iter_advance(iter, orig_count);
- }
+ ret = netfs_extract_iter(iter, rreq->len, INT_MAX, iocb->ki_pos,
+ &rreq->load_cursor.bvecq, 0);
+ if (ret < 0)
+ goto error_put;
+ rreq->len = ret;
// TODO: Set up bounce buffer if needed
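
For review purposes, a toy model of the netfs_extract_iter() contract assumed
above: capture up to the requested number of bytes into a chain of fixed-size
segments and return the amount actually represented, falling short only if
segment allocation fails. A real extractor would also pin the user pages into
the bio_vecs; the seg type and SEG_SLOTS constant here are inventions of the
model:

	#include <errno.h>
	#include <stdlib.h>
	#include <sys/types.h>

	#define SEG_SLOTS 4			/* tiny segments, for the demo */

	struct seg {
		struct seg *next;
		unsigned int nr_segs;
		size_t len[SEG_SLOTS];		/* stands in for the bio_vec sizes */
	};

	/* Chop 'want' bytes into chunk-sized pieces across a chain of
	 * segments, returning the number of bytes actually captured.
	 */
	static ssize_t extract(size_t want, size_t chunk, struct seg **chain)
	{
		struct seg **prev = chain, *seg = NULL;
		size_t done = 0;

		while (done < want) {
			if (!seg || seg->nr_segs == SEG_SLOTS) {
				seg = calloc(1, sizeof(*seg));
				if (!seg)	/* shorten rather than fail outright */
					return done ? (ssize_t)done : -ENOMEM;
				*prev = seg;
				prev = &seg->next;
			}
			size_t part = want - done < chunk ? want - done : chunk;

			seg->len[seg->nr_segs++] = part;
			done += part;
		}
		return done;
	}

	int main(void)
	{
		struct seg *chain = NULL;

		return extract(9 * 4096 + 100, 4096, &chain) > 0 ? 0 : 1;
	}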
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index f9ab69de3e29..a61c6d6fd17f 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -73,7 +73,11 @@ static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,
spin_unlock(&wreq->lock);
wreq->transferred += subreq->transferred;
- iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+ if (subreq->transferred < subreq->len) {
+ bvecq_pos_unset(&wreq->dispatch_cursor);
+ bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos);
+ bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred);
+ }
stream->collected_to = subreq->start + subreq->transferred;
wreq->collected_to = stream->collected_to;
@@ -99,6 +103,9 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
_enter("%llx", wreq->len);
+ bvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);
+ bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+
if (wreq->origin == NETFS_DIO_WRITE)
inode_dio_begin(wreq->inode);
@@ -111,6 +118,8 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred);
subreq = stream->construct;
stream->construct = NULL;
+ } else {
+ bvecq_pos_set(&subreq->dispatch_pos, &wreq->dispatch_cursor);
}
/* Check if (re-)preparation failed. */
@@ -120,16 +129,18 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
break;
}
- iov_iter_truncate(&subreq->io_iter, wreq->len - wreq->transferred);
+ subreq->len = bvecq_slice(&wreq->dispatch_cursor, stream->sreq_max_len,
+ stream->sreq_max_segs, &subreq->nr_segs);
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+
+ iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
+ subreq->content.bvecq, subreq->content.slot,
+ subreq->content.offset,
+ subreq->len);
+
if (!iov_iter_count(&subreq->io_iter))
break;
- subreq->len = netfs_limit_iter(&subreq->io_iter, 0,
- stream->sreq_max_len,
- stream->sreq_max_segs);
- iov_iter_truncate(&subreq->io_iter, subreq->len);
- stream->submit_extendable_to = subreq->len;
-
trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
stream->issue_write(subreq);
@@ -166,8 +177,15 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
*/
subreq->error = -EAGAIN;
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
- if (subreq->transferred > 0)
- iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+
+ bvecq_pos_unset(&subreq->content);
+ bvecq_pos_unset(&wreq->dispatch_cursor);
+ bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos);
+
+ if (subreq->transferred > 0) {
+ wreq->transferred += subreq->transferred;
+ bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred);
+ }
if (stream->source == NETFS_UPLOAD_TO_SERVER &&
wreq->netfs_ops->retry_request)
@@ -176,7 +194,6 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
__clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
- subreq->io_iter = wreq->buffer.iter;
subreq->start = wreq->start + wreq->transferred;
subreq->len = wreq->len - wreq->transferred;
subreq->transferred = 0;
@@ -186,19 +203,14 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- if (stream->prepare_write) {
+ if (stream->prepare_write)
stream->prepare_write(subreq);
- __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
- netfs_stat(&netfs_n_wh_retry_write_subreq);
- } else {
- struct iov_iter source;
-
- netfs_reset_iter(subreq);
- source = subreq->io_iter;
- netfs_reissue_write(stream, subreq, &source);
- }
+ __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+ netfs_stat(&netfs_n_wh_retry_write_subreq);
}
+ bvecq_pos_unset(&wreq->dispatch_cursor);
+ bvecq_pos_unset(&wreq->load_cursor);
netfs_unbuffered_write_done(wreq);
_leave(" = %d", ret);
return ret;
@@ -217,12 +229,12 @@ static void netfs_unbuffered_write_async(struct work_struct *work)
* encrypted file. This can also be used for direct I/O writes.
*/
ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *iter,
- struct netfs_group *netfs_group)
+ struct netfs_group *netfs_group)
{
struct netfs_io_request *wreq;
unsigned long long start = iocb->ki_pos;
unsigned long long end = start + iov_iter_count(iter);
- ssize_t ret, n;
+ ssize_t ret;
size_t len = iov_iter_count(iter);
bool async = !is_sync_kiocb(iocb);
@@ -256,25 +268,17 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
* allocate a sufficiently large bvec array and may shorten the
* request.
*/
- if (user_backed_iter(iter)) {
- n = netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0);
- if (n < 0) {
- ret = n;
- goto error_put;
- }
- wreq->direct_bv = (struct bio_vec *)wreq->buffer.iter.bvec;
- wreq->direct_bv_count = n;
- wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
- } else {
- /* If this is a kernel-generated async DIO request,
- * assume that any resources the iterator points to
- * (eg. a bio_vec array) will persist till the end of
- * the op.
- */
- wreq->buffer.iter = *iter;
- }
+ ssize_t n = netfs_extract_iter(iter, len, INT_MAX, iocb->ki_pos,
+ &wreq->load_cursor.bvecq, 0);
- wreq->len = iov_iter_count(&wreq->buffer.iter);
+ if (n < 0) {
+ ret = n;
+ goto error_put;
+ }
+ wreq->len = n;
+ _debug("dio-write %zx/%zx %u/%u",
+ n, len, wreq->load_cursor.bvecq->nr_segs,
+ wreq->load_cursor.bvecq->max_segs);
}
__set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
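
The dispatch loops above lean on bvecq_slice() to carve each subrequest's
extent out of the cursor. Here's a compilable sketch of the behaviour I'm
assuming it has (claim forward from the cursor until max_len or max_segs is
hit, advance the cursor, report the segment count); the model structs are
simplified stand-ins:

	#include <assert.h>
	#include <stddef.h>

	#define SEG_SLOTS 4

	struct bvecq {				/* model, not the kernel struct */
		struct bvecq *next;
		unsigned int nr_segs;
		size_t bv_len[SEG_SLOTS];	/* bio_vec lengths only */
	};

	struct bvecq_pos {
		struct bvecq *bvecq;
		unsigned int slot;
		size_t offset;
	};

	/* Claim bio_vec space forward from the cursor until max_len bytes
	 * or max_segs entries have been taken, advance the cursor past
	 * what was claimed and return the sliced length.  The subrequest
	 * then builds its iterator over exactly this span.
	 */
	static size_t bvecq_slice(struct bvecq_pos *cur, size_t max_len,
				  unsigned int max_segs, unsigned int *nr_segs)
	{
		size_t sliced = 0;

		*nr_segs = 0;
		while (cur->bvecq && sliced < max_len && *nr_segs < max_segs) {
			size_t avail = cur->bvecq->bv_len[cur->slot] - cur->offset;
			size_t take = avail < max_len - sliced ? avail : max_len - sliced;

			sliced += take;
			cur->offset += take;
			(*nr_segs)++;
			if (cur->offset == cur->bvecq->bv_len[cur->slot]) {
				cur->offset = 0;
				if (++cur->slot >= cur->bvecq->nr_segs) {
					cur->slot = 0;
					cur->bvecq = cur->bvecq->next;
				}
			}
		}
		return sliced;
	}

	int main(void)
	{
		struct bvecq q = { .nr_segs = 2, .bv_len = { 4096, 4096 } };
		struct bvecq_pos cur = { .bvecq = &q };
		unsigned int segs;

		assert(bvecq_slice(&cur, 6000, 8, &segs) == 6000);
		assert(segs == 2 && cur.slot == 1 && cur.offset == 1904);
		return 0;
	}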
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index ad47bcc1947b..ddae82f94ce0 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -7,7 +7,6 @@
#include <linux/slab.h>
#include <linux/seq_file.h>
-#include <linux/folio_queue.h>
#include <linux/netfs.h>
#include <linux/fscache.h>
#include <linux/fscache-cache.h>
@@ -67,9 +66,8 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}
/*
* misc.c
*/
-struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq,
- enum netfs_folioq_trace trace);
-void netfs_reset_iter(struct netfs_io_subrequest *subreq);
+struct bvecq *netfs_buffer_make_space(struct netfs_io_request *rreq,
+ enum netfs_bvecq_trace trace);
void netfs_wake_collector(struct netfs_io_request *rreq);
void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq);
void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,
@@ -167,7 +165,6 @@ extern atomic_t netfs_n_wh_retry_write_req;
extern atomic_t netfs_n_wh_retry_write_subreq;
extern atomic_t netfs_n_wb_lock_skip;
extern atomic_t netfs_n_wb_lock_wait;
-extern atomic_t netfs_n_folioq;
extern atomic_t netfs_n_bvecq;
int netfs_stats_show(struct seq_file *m, void *v);
@@ -205,8 +202,7 @@ void netfs_prepare_write(struct netfs_io_request *wreq,
struct netfs_io_stream *stream,
loff_t start);
void netfs_reissue_write(struct netfs_io_stream *stream,
- struct netfs_io_subrequest *subreq,
- struct iov_iter *source);
+ struct netfs_io_subrequest *subreq);
void netfs_issue_write(struct netfs_io_request *wreq,
struct netfs_io_stream *stream);
size_t netfs_advance_write(struct netfs_io_request *wreq,
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index e77fd39327c2..581dbf650a19 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -136,6 +136,7 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
}
EXPORT_SYMBOL_GPL(netfs_extract_iter);
+#if 0
/**
* netfs_extract_user_iter - Extract the pages from a user iterator into a bvec
* @orig: The original iterator
@@ -421,3 +422,4 @@ size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
BUG();
}
EXPORT_SYMBOL(netfs_limit_iter);
+#endif
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 6df89c92b10b..ab142cbaad35 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,6 +8,7 @@
#include <linux/swap.h>
#include "internal.h"
+#if 0
/**
* netfs_alloc_folioq_buffer - Allocate buffer space into a folio queue
* @mapping: Address space to set on the folio (or NULL).
@@ -103,24 +104,7 @@ void netfs_free_folioq_buffer(struct folio_queue *fq)
folio_batch_release(&fbatch);
}
EXPORT_SYMBOL(netfs_free_folioq_buffer);
-
-/*
- * Reset the subrequest iterator to refer just to the region remaining to be
- * read. The iterator may or may not have been advanced by socket ops or
- * extraction ops to an extent that may or may not match the amount actually
- * read.
- */
-void netfs_reset_iter(struct netfs_io_subrequest *subreq)
-{
- struct iov_iter *io_iter = &subreq->io_iter;
- size_t remain = subreq->len - subreq->transferred;
-
- if (io_iter->count > remain)
- iov_iter_advance(io_iter, io_iter->count - remain);
- else if (io_iter->count < remain)
- iov_iter_revert(io_iter, remain - io_iter->count);
- iov_iter_truncate(&subreq->io_iter, remain);
-}
+#endif
/**
* netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index b8c4918d3dcd..eff431cd7d6a 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -119,7 +119,6 @@ static void netfs_free_request_rcu(struct rcu_head *rcu)
static void netfs_deinit_request(struct netfs_io_request *rreq)
{
struct netfs_inode *ictx = netfs_inode(rreq->inode);
- unsigned int i;
trace_netfs_rreq(rreq, netfs_rreq_trace_free);
@@ -134,16 +133,9 @@ static void netfs_deinit_request(struct netfs_io_request *rreq)
rreq->netfs_ops->free_request(rreq);
if (rreq->cache_resources.ops)
rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
- if (rreq->direct_bv) {
- for (i = 0; i < rreq->direct_bv_count; i++) {
- if (rreq->direct_bv[i].bv_page) {
- if (rreq->direct_bv_unpin)
- unpin_user_page(rreq->direct_bv[i].bv_page);
- }
- }
- kvfree(rreq->direct_bv);
- }
- rolling_buffer_clear(&rreq->buffer);
+ bvecq_pos_unset(&rreq->load_cursor);
+ bvecq_pos_unset(&rreq->dispatch_cursor);
+ bvecq_pos_unset(&rreq->collect_cursor);
if (atomic_dec_and_test(&ictx->io_count))
wake_up_var(&ictx->io_count);
@@ -236,6 +228,8 @@ static void netfs_free_subrequest(struct netfs_io_subrequest *subreq)
trace_netfs_sreq(subreq, netfs_sreq_trace_free);
if (rreq->netfs_ops->free_subrequest)
rreq->netfs_ops->free_subrequest(subreq);
+ bvecq_pos_unset(&subreq->dispatch_pos);
+ bvecq_pos_unset(&subreq->content);
mempool_free(subreq, rreq->netfs_ops->subrequest_pool ?: &netfs_subrequest_pool);
netfs_stat_d(&netfs_n_rh_sreq);
netfs_put_request(rreq, netfs_rreq_trace_put_subreq);
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index e5f6665b3341..c7180680226c 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -27,9 +27,13 @@
*/
static void netfs_clear_unread(struct netfs_io_subrequest *subreq)
{
- netfs_reset_iter(subreq);
- WARN_ON_ONCE(subreq->len - subreq->transferred != iov_iter_count(&subreq->io_iter));
- iov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter);
+ struct iov_iter iter;
+
+ iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+ iov_iter_advance(&iter, subreq->transferred);
+ iov_iter_zero(subreq->len - subreq->transferred, &iter);
+
if (subreq->start + subreq->transferred >= subreq->rreq->i_size)
__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
}
@@ -40,11 +44,11 @@ static void netfs_clear_unread(struct netfs_io_subrequest *subreq)
* dirty and let writeback handle it.
*/
static void netfs_unlock_read_folio(struct netfs_io_request *rreq,
- struct folio_queue *folioq,
+ struct bvecq *bvecq,
int slot)
{
struct netfs_folio *finfo;
- struct folio *folio = folioq_folio(folioq, slot);
+ struct folio *folio = page_folio(bvecq->bv[slot].bv_page);
if (unlikely(folio_pos(folio) < rreq->abandon_to)) {
trace_netfs_folio(folio, netfs_folio_trace_abandon);
@@ -75,7 +79,7 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,
trace_netfs_folio(folio, netfs_folio_trace_read_done);
}
- folioq_clear(folioq, slot);
+ bvecq->bv[slot].bv_page = NULL;
} else {
// TODO: Use of PG_private_2 is deprecated.
if (test_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags))
@@ -91,7 +95,7 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,
folio_unlock(folio);
}
- folioq_clear(folioq, slot);
+ bvecq->bv[slot].bv_page = NULL;
}
/*
@@ -100,18 +104,24 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,
static void netfs_read_unlock_folios(struct netfs_io_request *rreq,
unsigned int *notes)
{
- struct folio_queue *folioq = rreq->buffer.tail;
+ struct bvecq *bvecq = rreq->collect_cursor.bvecq;
unsigned long long collected_to = rreq->collected_to;
- unsigned int slot = rreq->buffer.first_tail_slot;
+ unsigned int slot = rreq->collect_cursor.slot;
if (rreq->cleaned_to >= rreq->collected_to)
return;
// TODO: Begin decryption
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = rolling_buffer_delete_spent(&rreq->buffer);
- if (!folioq) {
+ if (slot >= bvecq->nr_segs) {
+ /* We need to be very careful here - the cleanup can catch up
+ * with the dispatcher, leaving nothing in the queue and
+ * allowing the front and back pointers to end up on disjoint
+ * chains. To avoid this, we must always keep at least one
+ * segment in the queue.
+ */
+ bvecq = bvecq_delete_spent(&rreq->collect_cursor);
+ if (!bvecq) {
rreq->front_folio_order = 0;
return;
}
@@ -127,13 +137,13 @@ static void netfs_read_unlock_folios(struct netfs_io_request *rreq,
if (*notes & COPY_TO_CACHE)
set_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
- folio = folioq_folio(folioq, slot);
+ folio = page_folio(bvecq->bv[slot].bv_page);
if (WARN_ONCE(!folio_test_locked(folio),
"R=%08x: folio %lx is not locked\n",
rreq->debug_id, folio->index))
trace_netfs_folio(folio, netfs_folio_trace_not_locked);
- order = folioq_folio_order(folioq, slot);
+ order = folio_order(folio);
rreq->front_folio_order = order;
fsize = PAGE_SIZE << order;
fpos = folio_pos(folio);
@@ -145,33 +155,32 @@ static void netfs_read_unlock_folios(struct netfs_io_request *rreq,
if (collected_to < fend)
break;
- netfs_unlock_read_folio(rreq, folioq, slot);
+ netfs_unlock_read_folio(rreq, bvecq, slot);
WRITE_ONCE(rreq->cleaned_to, fpos + fsize);
*notes |= MADE_PROGRESS;
clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
- /* Clean up the head folioq. If we clear an entire folioq, then
- * we can get rid of it provided it's not also the tail folioq
- * being filled by the issuer.
+ /* Clean up the head bvecq segment. If we clear an entire
+ * segment, then we can get rid of it provided it's not also
+ * the tail segment being filled by the issuer.
*/
- folioq_clear(folioq, slot);
slot++;
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = rolling_buffer_delete_spent(&rreq->buffer);
- if (!folioq)
+ if (slot >= bvecq->nr_segs) {
+ bvecq = bvecq_delete_spent(&rreq->collect_cursor);
+ if (!bvecq)
goto done;
slot = 0;
- trace_netfs_folioq(folioq, netfs_trace_folioq_read_progress);
+ //trace_netfs_bvecq(bvecq, netfs_trace_bvecq_read_progress);
}
if (fpos + fsize >= collected_to)
break;
}
- rreq->buffer.tail = folioq;
+ bvecq_pos_move(&rreq->collect_cursor, bvecq);
done:
- rreq->buffer.first_tail_slot = slot;
+ rreq->collect_cursor.slot = slot;
}
/*
@@ -346,12 +355,14 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
if (rreq->origin == NETFS_UNBUFFERED_READ ||
rreq->origin == NETFS_DIO_READ) {
- for (i = 0; i < rreq->direct_bv_count; i++) {
- flush_dcache_page(rreq->direct_bv[i].bv_page);
- // TODO: cifs marks pages in the destination buffer
- // dirty under some circumstances after a read. Do we
- // need to do that too?
- set_page_dirty(rreq->direct_bv[i].bv_page);
+ for (struct bvecq *bq = rreq->collect_cursor.bvecq; bq; bq = bq->next) {
+ for (i = 0; i < bq->nr_segs; i++) {
+ flush_dcache_page(bq->bv[i].bv_page);
+ // TODO: cifs marks pages in the destination buffer
+ // dirty under some circumstances after a read. Do we
+ // need to do that too?
+ set_page_dirty(bq->bv[i].bv_page);
+ }
}
}
@@ -442,7 +453,15 @@ bool netfs_read_collection(struct netfs_io_request *rreq)
trace_netfs_rreq(rreq, netfs_rreq_trace_done);
netfs_clear_subrequests(rreq);
- netfs_unlock_abandoned_read_pages(rreq);
+ switch (rreq->origin) {
+ case NETFS_READAHEAD:
+ case NETFS_READPAGE:
+ case NETFS_READ_FOR_WRITE:
+ netfs_unlock_abandoned_read_pages(rreq);
+ break;
+ default:
+ break;
+ }
if (unlikely(rreq->copy_to_cache))
netfs_pgpriv2_end_copy_to_cache(rreq);
return true;
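
A sketch of the shape bvecq_delete_spent() is assumed to have, given the
"always keep at least one segment" rule in the comment above. Refusing to
consume the final segment is what stops the load and collect cursors from
ending up on disjoint chains; again, a userspace model rather than the real
helper:

	#include <stddef.h>

	struct bvecq {
		struct bvecq *next;
		int ref;			/* refcount_t in the kernel */
	};

	struct bvecq_pos {
		struct bvecq *bvecq;
		unsigned int slot;
		size_t offset;
	};

	static void bvecq_put(struct bvecq *bq)
	{
		bq->ref--;			/* kernel: frees at zero */
	}

	/* Step the collection cursor off a fully-processed head segment
	 * and drop its ref.  NULL is returned when there is no further
	 * segment, so the last one is never consumed.
	 */
	static struct bvecq *bvecq_delete_spent(struct bvecq_pos *cur)
	{
		struct bvecq *spent = cur->bvecq;

		if (!spent || !spent->next)
			return NULL;
		cur->bvecq = spent->next;
		cur->slot = 0;
		cur->offset = 0;
		bvecq_put(spent);
		return cur->bvecq;
	}

	int main(void)
	{
		struct bvecq b = { .ref = 1 }, a = { .next = &b, .ref = 1 };
		struct bvecq_pos cur = { .bvecq = &a };

		return bvecq_delete_spent(&cur) == &b && a.ref == 0 ? 0 : 1;
	}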
diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
index a1489aa29f78..fb783318318e 100644
--- a/fs/netfs/read_pgpriv2.c
+++ b/fs/netfs/read_pgpriv2.c
@@ -19,6 +19,9 @@
static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio *folio)
{
struct netfs_io_stream *cache = &creq->io_streams[1];
+ struct bvecq *queue;
+ unsigned int slot;
+ size_t dio_size = PAGE_SIZE;
size_t fsize = folio_size(folio), flen = fsize;
loff_t fpos = folio_pos(folio), i_size;
bool to_eof = false;
@@ -48,17 +51,40 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio
to_eof = true;
}
+ flen = round_up(flen, dio_size);
+
_debug("folio %zx %zx", flen, fsize);
trace_netfs_folio(folio, netfs_folio_trace_store_copy);
- /* Attach the folio to the rolling buffer. */
- if (rolling_buffer_append(&creq->buffer, folio, 0) < 0) {
- clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &creq->flags);
- return;
+
+ /* Institute a new bvec queue segment if the current one is full or if
+ * we encounter a discontiguity. The discontiguity break is important
+ * when it comes to bulk unlocking folios by file range.
+ */
+ queue = creq->load_cursor.bvecq;
+ if (bvecq_is_full(queue) ||
+ (fpos != creq->last_end && creq->last_end > 0)) {
+ if (bvecq_buffer_make_space(&creq->load_cursor, GFP_KERNEL) < 0) {
+ clear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &creq->flags);
+ return;
+ }
+
+ queue = creq->load_cursor.bvecq;
+ queue->fpos = fpos;
+ if (fpos != creq->last_end && creq->last_end > 0)
+ queue->discontig = true;
}
- cache->submit_extendable_to = fsize;
+ /* Attach the folio to the buffer queue. */
+ slot = queue->nr_segs;
+ bvec_set_folio(&queue->bv[slot], folio, fsize, 0);
+ /* Order incrementing the segment count after the slot is filled. */
+ smp_store_release(&queue->nr_segs, slot + 1);
+ creq->load_cursor.slot = slot + 1;
+ creq->load_cursor.offset = 0;
+ creq->last_end = fpos + fsize;
+ trace_netfs_bv_slot(queue, slot);
+
cache->submit_off = 0;
cache->submit_len = flen;
@@ -70,10 +96,9 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio
do {
ssize_t part;
- creq->buffer.iter.iov_offset = cache->submit_off;
+ creq->dispatch_cursor.offset = cache->submit_off;
atomic64_set(&creq->issued_to, fpos + cache->submit_off);
- cache->submit_extendable_to = fsize - cache->submit_off;
part = netfs_advance_write(creq, cache, fpos + cache->submit_off,
cache->submit_len, to_eof);
cache->submit_off += part;
@@ -83,8 +108,7 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio
cache->submit_len -= part;
} while (cache->submit_len > 0);
- creq->buffer.iter.iov_offset = 0;
- rolling_buffer_advance(&creq->buffer, fsize);
+ bvecq_pos_step(&creq->dispatch_cursor);
atomic64_set(&creq->issued_to, fpos + fsize);
if (flen < fsize)
@@ -110,6 +134,10 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
if (!creq->io_streams[1].avail)
goto cancel_put;
+ if (bvecq_buffer_init(&creq->load_cursor, GFP_KERNEL) < 0)
+ goto cancel_put;
+ bvecq_pos_set(&creq->dispatch_cursor, &creq->load_cursor);
+ bvecq_pos_set(&creq->collect_cursor, &creq->dispatch_cursor);
+
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags);
trace_netfs_copy2cache(rreq, creq);
trace_netfs_write(creq, netfs_write_trace_copy_to_cache);
@@ -170,22 +198,23 @@ void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
*/
bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *creq)
{
- struct folio_queue *folioq = creq->buffer.tail;
+ struct bvecq *bq = creq->collect_cursor.bvecq;
unsigned long long collected_to = creq->collected_to;
- unsigned int slot = creq->buffer.first_tail_slot;
+ unsigned int slot = creq->collect_cursor.slot;
bool made_progress = false;
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = rolling_buffer_delete_spent(&creq->buffer);
+ if (slot >= bq->nr_segs) {
+ bq = bvecq_delete_spent(&creq->collect_cursor);
slot = 0;
}
for (;;) {
struct folio *folio;
unsigned long long fpos, fend;
size_t fsize, flen;
- folio = folioq_folio(folioq, slot);
+ folio = page_folio(bq->bv[slot].bv_page);
if (WARN_ONCE(!folio_test_private_2(folio),
"R=%08x: folio %lx is not marked private_2\n",
creq->debug_id, folio->index))
@@ -212,11 +241,11 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *creq)
* we can get rid of it provided it's not also the tail folioq
* being filled by the issuer.
*/
- folioq_clear(folioq, slot);
+ bq->bv[slot].bv_page = NULL;
slot++;
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = rolling_buffer_delete_spent(&creq->buffer);
- if (!folioq)
+ if (slot >= bq->nr_segs) {
+ bq = bvecq_delete_spent(&creq->collect_cursor);
+ if (!bq)
goto done;
slot = 0;
}
@@ -225,8 +254,7 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *creq)
break;
}
- creq->buffer.tail = folioq;
done:
- creq->buffer.first_tail_slot = slot;
+ creq->collect_cursor.slot = slot;
return made_progress;
}
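
The segment-break rule used when loading folios (both here and in
write_issue.c) can be modelled compactly. A new tail segment is started when
the current one is full or when the incoming folio isn't contiguous with the
last byte loaded, and only a real discontiguity sets the discontig flag. The
seg type and SEG_SLOTS are model inventions:

	#include <stdbool.h>
	#include <stdlib.h>

	#define SEG_SLOTS 4

	struct seg {
		struct seg *next;
		unsigned long long fpos;	/* file pos of first byte */
		bool discontig;
		unsigned int nr_segs;
	};

	/* Start a fresh segment if the tail is full or the next folio
	 * doesn't abut the last byte loaded; flag a real break as
	 * discontig so folios can later be unlocked in bulk by range.
	 */
	static struct seg *maybe_break(struct seg *tail, unsigned long long fpos,
				       unsigned long long *last_end)
	{
		if (tail->nr_segs == SEG_SLOTS ||
		    (fpos != *last_end && *last_end > 0)) {
			struct seg *seg = calloc(1, sizeof(*seg));

			if (!seg)
				return NULL;
			seg->fpos = fpos;
			if (fpos != *last_end && *last_end > 0)
				seg->discontig = true;
			tail->next = seg;
			tail = seg;
		}
		return tail;
	}

	int main(void)
	{
		struct seg head = {0}, *tail = &head;
		unsigned long long last_end = 0;
		unsigned long long folios[] = { 0, 4096, 16384 };

		for (int i = 0; i < 3; i++) {
			tail = maybe_break(tail, folios[i], &last_end);
			if (!tail)
				return 1;
			tail->nr_segs++;	/* stands in for bvec_set_folio() */
			last_end = folios[i] + 4096;
		}
		return tail->discontig ? 0 : 1;	/* 16384 != 8192: break flagged */
	}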
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index 68fc869513ef..6f2eb14aac72 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -12,6 +12,11 @@
static void netfs_reissue_read(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq)
{
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+ iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+ iov_iter_advance(&subreq->io_iter, subreq->transferred);
+
subreq->error = 0;
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
@@ -27,6 +32,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
{
struct netfs_io_subrequest *subreq;
struct netfs_io_stream *stream = &rreq->io_streams[0];
+ struct bvecq_pos dispatch_cursor = {};
struct list_head *next;
_enter("R=%x", rreq->debug_id);
@@ -48,7 +54,6 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
subreq->retry_count++;
- netfs_reset_iter(subreq);
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
netfs_reissue_read(rreq, subreq);
}
@@ -74,11 +79,12 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
do {
struct netfs_io_subrequest *from, *to, *tmp;
- struct iov_iter source;
unsigned long long start, len;
size_t part;
bool boundary = false, subreq_superfluous = false;
+ bvecq_pos_unset(&dispatch_cursor);
+
/* Go through the subreqs and find the next span of contiguous
* buffer that we then rejig (cifs, for example, needs the
* rsize renegotiating) and reissue.
@@ -113,9 +119,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
/* Determine the set of buffers we're going to use. Each
* subreq gets a subset of a single overall contiguous buffer.
*/
- netfs_reset_iter(from);
- source = from->io_iter;
- source.count = len;
+ bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);
+ bvecq_pos_advance(&dispatch_cursor, from->transferred);
/* Work through the sublist. */
subreq = from;
@@ -131,10 +136,14 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
subreq->retry_count++;
+ bvecq_pos_unset(&subreq->dispatch_pos);
+ bvecq_pos_unset(&subreq->content);
+
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
/* Renegotiate max_len (rsize) */
stream->sreq_max_len = subreq->len;
+ stream->sreq_max_segs = INT_MAX;
if (rreq->netfs_ops->prepare_read &&
rreq->netfs_ops->prepare_read(subreq) < 0) {
trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
@@ -142,13 +151,13 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
goto abandon;
}
- part = umin(len, stream->sreq_max_len);
- if (unlikely(stream->sreq_max_segs))
- part = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);
+ bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
+ part = bvecq_slice(&dispatch_cursor,
+ umin(len, stream->sreq_max_len),
+ stream->sreq_max_segs,
+ &subreq->nr_segs);
subreq->len = subreq->transferred + part;
- subreq->io_iter = source;
- iov_iter_truncate(&subreq->io_iter, part);
- iov_iter_advance(&source, part);
+
len -= part;
start += part;
if (!len) {
@@ -208,9 +217,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
stream->sreq_max_len = umin(len, rreq->rsize);
- stream->sreq_max_segs = 0;
- if (unlikely(stream->sreq_max_segs))
- part = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);
+ stream->sreq_max_segs = INT_MAX;
netfs_stat(&netfs_n_rh_download);
if (rreq->netfs_ops->prepare_read(subreq) < 0) {
@@ -219,11 +226,12 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
goto abandon;
}
- part = umin(len, stream->sreq_max_len);
+ bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
+ part = bvecq_slice(&dispatch_cursor,
+ umin(len, stream->sreq_max_len),
+ stream->sreq_max_segs,
+ &subreq->nr_segs);
subreq->len = subreq->transferred + part;
- subreq->io_iter = source;
- iov_iter_truncate(&subreq->io_iter, part);
- iov_iter_advance(&source, part);
len -= part;
start += part;
@@ -237,6 +245,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
} while (!list_is_head(next, &stream->subrequests));
+out:
+ bvecq_pos_unset(&dispatch_cursor);
return;
/* If we hit an error, fail all remaining incomplete subrequests */
@@ -253,6 +263,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
}
+ goto out;
}
/*
@@ -281,23 +292,24 @@ void netfs_retry_reads(struct netfs_io_request *rreq)
*/
void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq)
{
- struct folio_queue *p;
-
- for (p = rreq->buffer.tail; p; p = p->next) {
- for (int slot = 0; slot < folioq_count(p); slot++) {
- struct folio *folio = folioq_folio(p, slot);
-
- if (folio && !folioq_is_marked2(p, slot)) {
- if (folio->index == rreq->no_unlock_folio &&
- test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO,
- &rreq->flags)) {
- _debug("no unlock");
- } else {
- trace_netfs_folio(folio,
- netfs_folio_trace_abandon);
- folio_unlock(folio);
- }
+ struct bvecq *p;
+
+ for (p = rreq->collect_cursor.bvecq; p; p = p->next) {
+ if (!p->free)
+ continue;
+ for (int slot = 0; slot < p->nr_segs; slot++) {
+ if (!p->bv[slot].bv_page)
+ continue;
+
+ struct folio *folio = page_folio(p->bv[slot].bv_page);
+
+ if (folio->index == rreq->no_unlock_folio &&
+ test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) {
+ _debug("no unlock");
+ continue;
}
+ trace_netfs_folio(folio, netfs_folio_trace_abandon);
+ folio_unlock(folio);
}
}
}
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
index d87a03859ebd..b386cae77ece 100644
--- a/fs/netfs/read_single.c
+++ b/fs/netfs/read_single.c
@@ -94,7 +94,12 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
subreq->start = 0;
subreq->len = rreq->len;
- subreq->io_iter = rreq->buffer.iter;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);
+ bvecq_pos_set(&subreq->content, &rreq->dispatch_cursor);
+
+ iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
/* Try to use the cache if the cache content matches the size of the
* remote file.
@@ -180,6 +185,10 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
if (IS_ERR(rreq))
return PTR_ERR(rreq);
+ ret = netfs_extract_iter(iter, rreq->len, INT_MAX, 0, &rreq->dispatch_cursor.bvecq, 0);
+ if (ret < 0)
+ goto cleanup_free;
+
ret = netfs_single_begin_cache_read(rreq, ictx);
if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
goto cleanup_free;
@@ -187,7 +196,6 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
netfs_stat(&netfs_n_rh_read_single);
trace_netfs_read(rreq, 0, rreq->len, netfs_read_trace_read_single);
- rreq->buffer.iter = *iter;
netfs_single_dispatch_read(rreq);
ret = netfs_wait_for_read(rreq);
diff --git a/fs/netfs/stats.c b/fs/netfs/stats.c
index 84c2a4bcc762..1dfb5667b931 100644
--- a/fs/netfs/stats.c
+++ b/fs/netfs/stats.c
@@ -47,7 +47,6 @@ atomic_t netfs_n_wh_retry_write_req;
atomic_t netfs_n_wh_retry_write_subreq;
atomic_t netfs_n_wb_lock_skip;
atomic_t netfs_n_wb_lock_wait;
-atomic_t netfs_n_folioq;
atomic_t netfs_n_bvecq;
int netfs_stats_show(struct seq_file *m, void *v)
@@ -91,11 +90,10 @@ int netfs_stats_show(struct seq_file *m, void *v)
atomic_read(&netfs_n_rh_retry_read_subreq),
atomic_read(&netfs_n_wh_retry_write_req),
atomic_read(&netfs_n_wh_retry_write_subreq));
- seq_printf(m, "Objs : rr=%u sr=%u bq=%u foq=%u wsc=%u\n",
+ seq_printf(m, "Objs : rr=%u sr=%u bq=%u wsc=%u\n",
atomic_read(&netfs_n_rh_rreq),
atomic_read(&netfs_n_rh_sreq),
atomic_read(&netfs_n_bvecq),
- atomic_read(&netfs_n_folioq),
atomic_read(&netfs_n_wh_wstream_conflict));
seq_printf(m, "WbLock : skip=%u wait=%u\n",
atomic_read(&netfs_n_wb_lock_skip),
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index a839735d5675..fb8daf50c86d 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -111,12 +111,12 @@ int netfs_folio_written_back(struct folio *folio)
static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
unsigned int *notes)
{
- struct folio_queue *folioq = wreq->buffer.tail;
+ struct bvecq *bvecq = wreq->collect_cursor.bvecq;
unsigned long long collected_to = wreq->collected_to;
- unsigned int slot = wreq->buffer.first_tail_slot;
+ unsigned int slot = wreq->collect_cursor.slot;
- if (WARN_ON_ONCE(!folioq)) {
- pr_err("[!] Writeback unlock found empty rolling buffer!\n");
+ if (WARN_ON_ONCE(!bvecq)) {
+ pr_err("[!] Writeback unlock found empty buffer!\n");
netfs_dump_request(wreq);
return;
}
@@ -127,9 +127,15 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
return;
}
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = rolling_buffer_delete_spent(&wreq->buffer);
- if (!folioq)
+ if (slot >= bvecq->nr_segs) {
+ /* We need to be very careful here - the cleanup can catch up
+ * with the dispatcher, leaving nothing in the queue and
+ * allowing the front and back pointers to end up on disjoint
+ * chains. To avoid this, we must always keep at least one
+ * segment in the queue.
+ */
+ bvecq = bvecq_delete_spent(&wreq->collect_cursor);
+ if (!bvecq)
return;
slot = 0;
}
@@ -140,7 +146,7 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
unsigned long long fpos, fend;
size_t fsize, flen;
- folio = folioq_folio(folioq, slot);
+ folio = page_folio(bvecq->bv[slot].bv_page);
if (WARN_ONCE(!folio_test_writeback(folio),
"R=%08x: folio %lx is not under writeback\n",
wreq->debug_id, folio->index))
@@ -163,15 +169,15 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
wreq->cleaned_to = fpos + fsize;
*notes |= MADE_PROGRESS;
- /* Clean up the head folioq. If we clear an entire folioq, then
- * we can get rid of it provided it's not also the tail folioq
+ /* Clean up the head bvecq. If we clear an entire bvecq, then
+ * we can get rid of it provided it's not also the tail bvecq
* being filled by the issuer.
*/
- folioq_clear(folioq, slot);
+ bvecq->bv[slot].bv_page = NULL;
slot++;
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = rolling_buffer_delete_spent(&wreq->buffer);
- if (!folioq)
+ if (slot >= bvecq->nr_segs) {
+ bvecq = bvecq_delete_spent(&wreq->collect_cursor);
+ if (!bvecq)
goto done;
slot = 0;
}
@@ -180,9 +186,8 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
break;
}
- wreq->buffer.tail = folioq;
done:
- wreq->buffer.first_tail_slot = slot;
+ wreq->collect_cursor.slot = slot;
}
static void netfs_cache_collect(struct netfs_io_request *wreq,
@@ -217,7 +222,8 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
trace_netfs_rreq(wreq, netfs_rreq_trace_collect);
reassess_streams:
- issued_to = atomic64_read(&wreq->issued_to);
+ /* Order reading the issued_to point before reading the queue it refers to. */
+ issued_to = atomic64_read_acquire(&wreq->issued_to);
- smp_rmb();
collected_to = ULLONG_MAX;
if (wreq->origin == NETFS_WRITEBACK ||
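
The barrier pairing being set up here is the standard publish/consume
pattern. A minimal C11 model, with atomic_store/load_explicit standing in for
atomic64_set_release() and atomic64_read_acquire():

	#include <stdatomic.h>
	#include <stddef.h>

	static size_t slot_len[16];
	static _Atomic unsigned long long issued_to;

	/* Issuer: fill the slot first, then publish how far issuing has
	 * got.  atomic64_set_release() plays this role in write_issue.c.
	 */
	static void publish(unsigned int slot, size_t len, unsigned long long end)
	{
		slot_len[slot] = len;
		atomic_store_explicit(&issued_to, end, memory_order_release);
	}

	/* Collector: acquire the marker first; every slot published
	 * before it is then guaranteed visible, which is what the
	 * atomic64_read_acquire() in the hunk above relies on.
	 */
	static size_t collect_first(void)
	{
		unsigned long long end;

		end = atomic_load_explicit(&issued_to, memory_order_acquire);
		return end ? slot_len[0] : 0;
	}

	int main(void)
	{
		publish(0, 4096, 4096);
		return collect_first() == 4096 ? 0 : 1;
	}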
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index 9ca2c780f469..d4c4bee4299e 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -108,8 +108,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
ictx = netfs_inode(wreq->inode);
if (is_cacheable && netfs_is_cache_enabled(ictx))
fscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ictx));
- if (rolling_buffer_init(&wreq->buffer, wreq->debug_id, ITER_SOURCE) < 0)
- goto nomem;
wreq->cleaned_to = wreq->start;
if (wreq->cache_resources.dio_size > 1)
@@ -134,9 +132,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
}
return wreq;
-nomem:
- netfs_put_failed_request(wreq);
- return ERR_PTR(-ENOMEM);
}
/**
@@ -161,21 +156,13 @@ void netfs_prepare_write(struct netfs_io_request *wreq,
loff_t start)
{
struct netfs_io_subrequest *subreq;
- struct iov_iter *wreq_iter = &wreq->buffer.iter;
-
- /* Make sure we don't point the iterator at a used-up folio_queue
- * struct being used as a placeholder to prevent the queue from
- * collapsing. In such a case, extend the queue.
- */
- if (iov_iter_is_folioq(wreq_iter) &&
- wreq_iter->folioq_slot >= folioq_nr_slots(wreq_iter->folioq))
- rolling_buffer_make_space(&wreq->buffer);
subreq = netfs_alloc_subrequest(wreq);
subreq->source = stream->source;
subreq->start = start;
subreq->stream_nr = stream->stream_nr;
- subreq->io_iter = *wreq_iter;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &wreq->dispatch_cursor);
_enter("R=%x[%x]", wreq->debug_id, subreq->debug_index);
@@ -240,15 +227,15 @@ static void netfs_do_issue_write(struct netfs_io_stream *stream,
}
void netfs_reissue_write(struct netfs_io_stream *stream,
- struct netfs_io_subrequest *subreq,
- struct iov_iter *source)
+ struct netfs_io_subrequest *subreq)
{
- size_t size = subreq->len - subreq->transferred;
-
// TODO: Use encrypted buffer
- subreq->io_iter = *source;
- iov_iter_advance(source, size);
- iov_iter_truncate(&subreq->io_iter, size);
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+ iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
+ subreq->content.bvecq, subreq->content.slot,
+ subreq->content.offset,
+ subreq->len);
+ iov_iter_advance(&subreq->io_iter, subreq->transferred);
subreq->retry_count++;
subreq->error = 0;
@@ -266,8 +253,57 @@ void netfs_issue_write(struct netfs_io_request *wreq,
if (!subreq)
return;
+ /* If we have a write to the cache, we need to round out the first and
+ * last entries (only those as the data will be on virtually contiguous
+ * folios) to cache DIO boundaries.
+ */
+ if (subreq->source == NETFS_WRITE_TO_CACHE) {
+ struct bvecq_pos tmp_pos;
+ struct bio_vec *bv;
+ struct bvecq *bq;
+ size_t dio_size = wreq->cache_resources.dio_size;
+ size_t disp, len;
+ int ret;
+
+ bvecq_pos_set(&tmp_pos, &subreq->dispatch_pos);
+ ret = bvecq_extract(&tmp_pos, subreq->len, INT_MAX, &subreq->content.bvecq);
+ bvecq_pos_unset(&tmp_pos);
+ if (ret < 0) {
+ netfs_write_subrequest_terminated(subreq, ret);
+ return;
+ }
+
+ /* Round the first entry down. */
+ bq = subreq->content.bvecq;
+ bv = &bq->bv[0];
+ disp = bv->bv_offset & (dio_size - 1);
+ if (disp) {
+ bv->bv_offset -= disp;
+ bv->bv_len += disp;
+ bq->fpos -= disp;
+ subreq->start -= disp;
+ subreq->len += disp;
+ }
+
+ /* Round the end of the last entry up. */
+ while (bq->next)
+ bq = bq->next;
+ bv = &bq->bv[bq->nr_segs - 1];
+ len = round_up(bv->bv_len, dio_size);
+ if (len > bv->bv_len) {
+ subreq->len += len - bv->bv_len;
+ bv->bv_len = len;
+ }
+ } else {
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+ }
+
+ iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
+ subreq->content.bvecq, subreq->content.slot,
+ subreq->content.offset,
+ subreq->len);
+
stream->construct = NULL;
- subreq->io_iter.count = subreq->len;
netfs_do_issue_write(stream, subreq);
}
@@ -304,7 +340,6 @@ size_t netfs_advance_write(struct netfs_io_request *wreq,
_debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, len);
subreq->len += part;
subreq->nr_segs++;
- stream->submit_extendable_to -= part;
if (subreq->len >= stream->sreq_max_len ||
subreq->nr_segs >= stream->sreq_max_segs ||
@@ -328,16 +363,35 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
struct netfs_io_stream *stream;
struct netfs_group *fgroup; /* TODO: Use this with ceph */
struct netfs_folio *finfo;
- size_t iter_off = 0;
+ struct bvecq *queue = wreq->load_cursor.bvecq;
+ unsigned int slot;
size_t fsize = folio_size(folio), flen = fsize, foff = 0;
loff_t fpos = folio_pos(folio), i_size;
bool to_eof = false, streamw = false;
bool debug = false;
+ int ret;
_enter("");
- if (rolling_buffer_make_space(&wreq->buffer) < 0)
- return -ENOMEM;
+ /* Institute a new bvec queue segment if the current one is full or if
+ * we encounter a discontiguity. The discontiguity break is important
+ * when it comes to bulk unlocking folios by file range.
+ */
+ if (bvecq_is_full(queue) ||
+ (fpos != wreq->last_end && wreq->last_end > 0)) {
+ ret = bvecq_buffer_make_space(&wreq->load_cursor, GFP_NOFS);
+ if (ret < 0) {
+ folio_unlock(folio);
+ return ret;
+ }
+
+ queue = wreq->load_cursor.bvecq;
+ queue->fpos = fpos;
+ if (fpos != wreq->last_end && wreq->last_end > 0)
+ queue->discontig = true;
+ bvecq_pos_move(&wreq->dispatch_cursor, queue);
+ wreq->dispatch_cursor.slot = 0;
+ }
/* netfs_perform_write() may shift i_size around the page or from out
* of the page to beyond it, but cannot move i_size into or through the
@@ -443,7 +497,13 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
}
- /* Attach the folio to the rolling buffer. */
- rolling_buffer_append(&wreq->buffer, folio, 0);
+ /* Attach the folio to the buffer queue. */
+ slot = queue->nr_segs;
+ bvec_set_folio(&queue->bv[slot], folio, flen, 0);
+ queue->nr_segs = slot + 1;
+ wreq->load_cursor.slot = slot + 1;
+ wreq->load_cursor.offset = 0;
+ wreq->last_end = fpos + foff + flen;
+ trace_netfs_bv_slot(queue, slot);
/* Move the submission point forward to allow for write-streaming data
* not starting at the front of the page. We don't do write-streaming
@@ -454,7 +514,7 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
*/
for (int s = 0; s < NR_IO_STREAMS; s++) {
stream = &wreq->io_streams[s];
- stream->submit_off = foff;
+ stream->submit_off = 0;
stream->submit_len = flen;
if (!stream->avail ||
(stream->source == NETFS_WRITE_TO_CACHE && streamw) ||
@@ -489,15 +549,11 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
break;
stream = &wreq->io_streams[choose_s];
- /* Advance the iterator(s). */
- if (stream->submit_off > iter_off) {
- rolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off);
- iter_off = stream->submit_off;
- }
+ /* Advance the cursor. */
+ wreq->dispatch_cursor.offset = stream->submit_off;
- atomic64_set(&wreq->issued_to, fpos + stream->submit_off);
- stream->submit_extendable_to = fsize - stream->submit_off;
- part = netfs_advance_write(wreq, stream, fpos + stream->submit_off,
+ atomic64_set(&wreq->issued_to, fpos + foff + stream->submit_off);
+ part = netfs_advance_write(wreq, stream, fpos + foff + stream->submit_off,
stream->submit_len, to_eof);
stream->submit_off += part;
if (part > stream->submit_len)
@@ -508,9 +564,9 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
debug = true;
}
- if (fsize > iter_off)
- rolling_buffer_advance(&wreq->buffer, fsize - iter_off);
- atomic64_set(&wreq->issued_to, fpos + fsize);
+ bvecq_pos_step(&wreq->dispatch_cursor);
+ /* Order loading the queue before updating the issued_to point. */
+ atomic64_set_release(&wreq->issued_to, fpos + fsize);
if (!debug)
kdebug("R=%x: No submit", wreq->debug_id);
@@ -578,6 +634,11 @@ int netfs_writepages(struct address_space *mapping,
goto couldnt_start;
}
+ if (bvecq_buffer_init(&wreq->load_cursor, GFP_NOFS) < 0)
+ goto nomem;
+ bvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);
+ bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
trace_netfs_write(wreq, netfs_write_trace_writeback);
netfs_stat(&netfs_n_wh_writepages);
@@ -602,12 +663,17 @@ int netfs_writepages(struct address_space *mapping,
netfs_end_issue_write(wreq);
mutex_unlock(&ictx->wb_lock);
+ bvecq_pos_unset(&wreq->load_cursor);
+ bvecq_pos_unset(&wreq->dispatch_cursor);
netfs_wake_collector(wreq);
netfs_put_request(wreq, netfs_rreq_trace_put_return);
_leave(" = %d", error);
return error;
+nomem:
+ error = -ENOMEM;
+ netfs_put_failed_request(wreq);
couldnt_start:
netfs_kill_dirty_pages(mapping, wbc, folio);
out:
@@ -634,6 +700,15 @@ struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len
return wreq;
}
+ if (bvecq_buffer_init(&wreq->load_cursor, GFP_NOFS) < 0) {
+ netfs_put_failed_request(wreq);
+ mutex_unlock(&ictx->wb_lock);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ bvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);
+ bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+
wreq->io_streams[0].avail = true;
trace_netfs_write(wreq, netfs_write_trace_writethrough);
return wreq;
@@ -649,8 +724,8 @@ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_c
struct folio *folio, size_t copied, bool to_page_end,
struct folio **writethrough_cache)
{
- _enter("R=%x ic=%zu ws=%u cp=%zu tp=%u",
- wreq->debug_id, wreq->buffer.iter.count, wreq->wsize, copied, to_page_end);
+ _enter("R=%x ws=%u cp=%zu tp=%u",
+ wreq->debug_id, wreq->wsize, copied, to_page_end);
if (!*writethrough_cache) {
if (folio_test_dirty(folio))
@@ -692,6 +767,9 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c
mutex_unlock(&ictx->wb_lock);
+ bvecq_pos_unset(&wreq->load_cursor);
+ bvecq_pos_unset(&wreq->dispatch_cursor);
+
if (wreq->iocb)
ret = -EIOCBQUEUED;
else
@@ -707,7 +785,7 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c
* @iter: Data to write.
*
* Write a monolithic, non-pagecache object back to the server and/or
- * the cache.
+ * the cache. There's a maximum of one subrequest per stream.
*/
int netfs_writeback_single(struct address_space *mapping,
struct writeback_control *wbc,
@@ -731,10 +809,18 @@ int netfs_writeback_single(struct address_space *mapping,
ret = PTR_ERR(wreq);
goto couldnt_start;
}
-
- wreq->buffer.iter = *iter;
wreq->len = iov_iter_count(iter);
+ ret = netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_cursor.bvecq, 0);
+ if (ret < 0)
+ goto cleanup_free;
+ if (ret < wreq->len) {
+ ret = -EIO;
+ goto cleanup_free;
+ }
+
+ bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
trace_netfs_write(wreq, netfs_write_trace_writeback_single);
netfs_stat(&netfs_n_wh_writepages);
@@ -754,11 +840,11 @@ int netfs_writeback_single(struct address_space *mapping,
subreq = stream->construct;
subreq->len = wreq->len;
stream->submit_len = subreq->len;
- stream->submit_extendable_to = round_up(wreq->len, PAGE_SIZE);
netfs_issue_write(wreq, stream);
}
+ wreq->submitted = wreq->len;
smp_wmb(); /* Write lists before ALL_QUEUED. */
set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
@@ -774,6 +860,8 @@ int netfs_writeback_single(struct address_space *mapping,
_leave(" = %d", ret);
return ret;
+cleanup_free:
+ netfs_put_failed_request(wreq);
couldnt_start:
mutex_unlock(&ictx->wb_lock);
_leave(" = %d", ret);
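
The cache-DIO rounding added to netfs_issue_write() only needs to touch the
first and last bio_vecs because the intervening folios are virtually
contiguous. A standalone model of the arithmetic, assuming dio_size is a
power of two (struct bv stands in for struct bio_vec):

	#include <assert.h>
	#include <stddef.h>

	struct bv {				/* model of struct bio_vec */
		unsigned int bv_offset;
		unsigned int bv_len;
	};

	/* Pull the first entry's offset back to a DIO boundary and pad
	 * the last entry's length up to one, growing the subrequest's
	 * start/len to match.
	 */
	static void round_to_dio(struct bv *first, struct bv *last, size_t dio_size,
				 unsigned long long *start, size_t *len)
	{
		size_t disp = first->bv_offset & (dio_size - 1);

		if (disp) {
			first->bv_offset -= disp;
			first->bv_len += disp;
			*start -= disp;
			*len += disp;
		}

		size_t up = (last->bv_len + dio_size - 1) & ~(dio_size - 1);

		if (up > last->bv_len) {
			*len += up - last->bv_len;
			last->bv_len = up;
		}
	}

	int main(void)
	{
		struct bv only = { .bv_offset = 700, .bv_len = 1000 };
		unsigned long long start = 700;
		size_t len = 1000;

		round_to_dio(&only, &only, 512, &start, &len);
		assert(only.bv_offset == 512 && start == 512);
		assert(only.bv_len == 1536 && len == 1536);
		return 0;
	}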
diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c
index 29489a23a220..5df5c34d4610 100644
--- a/fs/netfs/write_retry.c
+++ b/fs/netfs/write_retry.c
@@ -17,6 +17,7 @@
static void netfs_retry_write_stream(struct netfs_io_request *wreq,
struct netfs_io_stream *stream)
{
+ struct bvecq_pos dispatch_cursor = {};
struct list_head *next;
_enter("R=%x[%x:]", wreq->debug_id, stream->stream_nr);
@@ -39,12 +40,8 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
break;
if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
- struct iov_iter source;
-
- netfs_reset_iter(subreq);
- source = subreq->io_iter;
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- netfs_reissue_write(stream, subreq, &source);
+ netfs_reissue_write(stream, subreq);
}
}
return;
@@ -54,11 +51,12 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
do {
struct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp;
- struct iov_iter source;
unsigned long long start, len;
size_t part;
bool boundary = false;
+ bvecq_pos_unset(&dispatch_cursor);
+
/* Go through the stream and find the next span of contiguous
* data that we then rejig (cifs, for example, needs the wsize
* renegotiating) and reissue.
@@ -70,7 +68,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
if (test_bit(NETFS_SREQ_FAILED, &from->flags) ||
!test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))
- return;
+ goto out;
list_for_each_continue(next, &stream->subrequests) {
subreq = list_entry(next, struct netfs_io_subrequest, rreq_link);
@@ -85,9 +83,8 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
/* Determine the set of buffers we're going to use. Each
* subreq gets a subset of a single overall contiguous buffer.
*/
- netfs_reset_iter(from);
- source = from->io_iter;
- source.count = len;
+ bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);
+ bvecq_pos_advance(&dispatch_cursor, from->transferred);
/* Work through the sublist. */
subreq = from;
@@ -100,14 +97,20 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+ bvecq_pos_unset(&subreq->dispatch_pos);
+ bvecq_pos_unset(&subreq->content);
+
/* Renegotiate max_len (wsize) */
stream->sreq_max_len = len;
+ stream->sreq_max_segs = INT_MAX;
stream->prepare_write(subreq);
- part = umin(len, stream->sreq_max_len);
- if (unlikely(stream->sreq_max_segs))
- part = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);
- subreq->len = part;
+ bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
+ part = bvecq_slice(&dispatch_cursor,
+ umin(len, stream->sreq_max_len),
+ stream->sreq_max_segs,
+ &subreq->nr_segs);
+ subreq->len = subreq->transferred + part;
subreq->transferred = 0;
len -= part;
start += part;
@@ -116,7 +119,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
boundary = true;
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- netfs_reissue_write(stream, subreq, &source);
+ netfs_reissue_write(stream, subreq);
if (subreq == to)
break;
}
@@ -173,8 +176,13 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
stream->prepare_write(subreq);
- part = umin(len, stream->sreq_max_len);
+ bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
+ part = bvecq_slice(&dispatch_cursor,
+ umin(len, stream->sreq_max_len),
+ stream->sreq_max_segs,
+ &subreq->nr_segs);
subreq->len = subreq->transferred + part;
+
len -= part;
start += part;
if (!len && boundary) {
@@ -182,13 +190,16 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
boundary = false;
}
- netfs_reissue_write(stream, subreq, &source);
+ netfs_reissue_write(stream, subreq);
if (!len)
break;
} while (len);
} while (!list_is_head(next, &stream->subrequests));
+
+out:
+ bvecq_pos_unset(&dispatch_cursor);
}
/*
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index b4602f7b6431..3345c88bbd8e 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -19,12 +19,13 @@
#include <linux/pagemap.h>
#include <linux/bvecq.h>
#include <linux/uio.h>
-#include <linux/rolling_buffer.h>
enum netfs_sreq_ref_trace;
typedef struct mempool mempool_t;
+struct readahead_control;
+struct netfs_io_request;
+struct netfs_io_subrequest;
struct fscache_occupancy;
-struct folio_queue;
/**
* folio_start_private_2 - Start an fscache write on a folio. [DEPRECATED]
@@ -137,7 +138,6 @@ struct netfs_io_stream {
- unsigned int sreq_max_segs; /* 0 or max number of segments in an iterator */
+ unsigned int sreq_max_segs; /* Maximum number of segments in an iterator */
unsigned int submit_off; /* Folio offset we're submitting from */
unsigned int submit_len; /* Amount of data left to submit */
- unsigned int submit_extendable_to; /* Amount I/O can be rounded up to */
void (*prepare_write)(struct netfs_io_subrequest *subreq);
void (*issue_write)(struct netfs_io_subrequest *subreq);
/* Collection tracking */
@@ -178,6 +178,8 @@ struct netfs_io_subrequest {
struct netfs_io_request *rreq; /* Supervising I/O request */
struct work_struct work;
struct list_head rreq_link; /* Link in rreq->subrequests */
+ struct bvecq_pos dispatch_pos; /* Bookmark of this subreq's start in the queue */
+ struct bvecq_pos content; /* The (copied) content of the subrequest */
struct iov_iter io_iter; /* Iterator for this subrequest */
unsigned long long start; /* Where to start the I/O */
size_t len; /* Size of the I/O */
@@ -239,13 +241,13 @@ struct netfs_io_request {
struct netfs_io_stream io_streams[2]; /* Streams of parallel I/O operations */
#define NR_IO_STREAMS 2 //wreq->nr_io_streams
struct netfs_group *group; /* Writeback group being written back */
- struct rolling_buffer buffer; /* Unencrypted buffer */
-#define NETFS_ROLLBUF_PUT_MARK ROLLBUF_MARK_1
-#define NETFS_ROLLBUF_PAGECACHE_MARK ROLLBUF_MARK_2
+ struct bvecq_pos collect_cursor; /* Clear-up point of I/O buffer */
+ struct bvecq_pos load_cursor; /* Point at which new folios are loaded in */
+ struct bvecq_pos dispatch_cursor; /* Point from which buffers are dispatched */
wait_queue_head_t waitq; /* Processor waiter */
void *netfs_priv; /* Private data for the netfs */
void *netfs_priv2; /* Private data for the netfs */
- struct bio_vec *direct_bv; /* DIO buffer list (when handling iovec-iter) */
+ unsigned long long last_end; /* End pos of last folio submitted */
unsigned long long submitted; /* Amount submitted for I/O so far */
unsigned long long len; /* Length of the request */
size_t transferred; /* Amount to be indicated as transferred */
@@ -258,7 +260,6 @@ struct netfs_io_request {
unsigned long long cleaned_to; /* Position we've cleaned folios to */
unsigned long long abandon_to; /* Position to abandon folios to */
pgoff_t no_unlock_folio; /* Don't unlock this folio after read */
- unsigned int direct_bv_count; /* Number of elements in direct_bv[] */
unsigned int debug_id;
unsigned int rsize; /* Maximum read size (0 for none) */
unsigned int wsize; /* Maximum write size (0 for none) */
@@ -267,7 +268,6 @@ struct netfs_io_request {
spinlock_t lock; /* Lock for queuing subreqs */
unsigned char front_folio_order; /* Order (size) of front folio */
enum netfs_io_origin origin; /* Origin of the request */
- bool direct_bv_unpin; /* T if direct_bv[] must be unpinned */
refcount_t ref;
unsigned long flags;
#define NETFS_RREQ_IN_PROGRESS 0 /* Unlocked when the request completes (has ref) */
@@ -463,12 +463,6 @@ void netfs_end_io_write(struct inode *inode);
int netfs_start_io_direct(struct inode *inode);
void netfs_end_io_direct(struct inode *inode);
-/* Miscellaneous APIs. */
-struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
- unsigned int trace /*enum netfs_folioq_trace*/);
-void netfs_folioq_free(struct folio_queue *folioq,
- unsigned int trace /*enum netfs_trace_folioq*/);
-
/* Buffer wrangling helpers API. */
int netfs_alloc_folioq_buffer(struct address_space *mapping,
struct folio_queue **_buffer,
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index fbb094231659..df3d440563ec 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -213,7 +213,9 @@
EM(netfs_folio_trace_store_copy, "store-copy") \
EM(netfs_folio_trace_store_plus, "store+") \
EM(netfs_folio_trace_wthru, "wthru") \
- E_(netfs_folio_trace_wthru_plus, "wthru+")
+ EM(netfs_folio_trace_wthru_plus, "wthru+") \
+ EM(netfs_folio_trace_zero, "zero") \
+ E_(netfs_folio_trace_zero_ra, "zero-ra")
#define netfs_collect_contig_traces \
EM(netfs_contig_trace_collect, "Collect") \
@@ -226,13 +228,13 @@
EM(netfs_trace_donate_to_next, "to-next") \
E_(netfs_trace_donate_to_deferred_next, "defer-next")
-#define netfs_folioq_traces \
- EM(netfs_trace_folioq_alloc_buffer, "alloc-buf") \
- EM(netfs_trace_folioq_clear, "clear") \
- EM(netfs_trace_folioq_delete, "delete") \
- EM(netfs_trace_folioq_make_space, "make-space") \
- EM(netfs_trace_folioq_rollbuf_init, "roll-init") \
- E_(netfs_trace_folioq_read_progress, "r-progress")
+#define netfs_bvecq_traces \
+ EM(netfs_trace_bvecq_alloc_buffer, "alloc-buf") \
+ EM(netfs_trace_bvecq_clear, "clear") \
+ EM(netfs_trace_bvecq_delete, "delete") \
+ EM(netfs_trace_bvecq_make_space, "make-space") \
+ EM(netfs_trace_bvecq_rollbuf_init, "roll-init") \
+ E_(netfs_trace_bvecq_read_progress, "r-progress")
#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
@@ -252,7 +254,7 @@ enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte);
enum netfs_folio_trace { netfs_folio_traces } __mode(byte);
enum netfs_collect_contig_trace { netfs_collect_contig_traces } __mode(byte);
enum netfs_donate_trace { netfs_donate_traces } __mode(byte);
-enum netfs_folioq_trace { netfs_folioq_traces } __mode(byte);
+enum netfs_bvecq_trace { netfs_bvecq_traces } __mode(byte);
#endif
@@ -276,7 +278,7 @@ netfs_sreq_ref_traces;
netfs_folio_traces;
netfs_collect_contig_traces;
netfs_donate_traces;
-netfs_folioq_traces;
+netfs_bvecq_traces;
/*
* Now redefine the EM() and E_() macros to map the enums to the strings that
@@ -378,10 +380,10 @@ TRACE_EVENT(netfs_sreq,
__entry->len = sreq->len;
__entry->transferred = sreq->transferred;
__entry->start = sreq->start;
- __entry->slot = sreq->io_iter.folioq_slot;
+ __entry->slot = sreq->dispatch_pos.slot;
),
- TP_printk("R=%08x[%x] %s %s f=%03x s=%llx %zx/%zx s=%u e=%d",
+ TP_printk("R=%08x[%x] %s %s f=%03x s=%llx %zx/%zx qs=%u e=%d",
__entry->rreq, __entry->index,
__print_symbolic(__entry->source, netfs_sreq_sources),
__print_symbolic(__entry->what, netfs_sreq_traces),
@@ -756,27 +758,25 @@ TRACE_EVENT(netfs_collect_stream,
__entry->collected_to, __entry->issued_to)
);
-TRACE_EVENT(netfs_folioq,
- TP_PROTO(const struct folio_queue *fq,
- enum netfs_folioq_trace trace),
+TRACE_EVENT(netfs_bvecq,
+ TP_PROTO(const struct bvecq *bq,
+ enum netfs_bvecq_trace trace),
- TP_ARGS(fq, trace),
+ TP_ARGS(bq, trace),
TP_STRUCT__entry(
- __field(unsigned int, rreq)
__field(unsigned int, id)
- __field(enum netfs_folioq_trace, trace)
+ __field(enum netfs_bvecq_trace, trace)
),
TP_fast_assign(
- __entry->rreq = fq ? fq->rreq_id : 0;
- __entry->id = fq ? fq->debug_id : 0;
+ __entry->id = bq ? bq->priv : 0;
__entry->trace = trace;
),
- TP_printk("R=%08x fq=%x %s",
- __entry->rreq, __entry->id,
- __print_symbolic(__entry->trace, netfs_folioq_traces))
+ TP_printk("fq=%x %s",
+ __entry->id,
+ __print_symbolic(__entry->trace, netfs_bvecq_traces))
);
TRACE_EVENT(netfs_bv_slot,
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 19/26] cifs: Remove support for ITER_KVEC/BVEC/FOLIOQ from smb_extract_iter_to_rdma()
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (17 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 18/26] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 20/26] netfs: Remove netfs_alloc/free_folioq_buffer() David Howells
` (6 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara, Shyam Prasad N,
Tom Talpey
netfslib now only presents a bvecq queue and an associated ITER_BVECQ
iterator to the filesystem, so smb_extract_iter_to_rdma() is never going
to see ITER_KVEC, ITER_BVEC or ITER_FOLIOQ iterators. Remove the code
that handled them.
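For illustration, here's a minimal sketch (not part of the patch) of the
only buffer shape smbdirect now has to cope with: a walk over a bvecq
chain using the fields described in the cover letter. walk_bvecq() is a
made-up name for this example.
	static void walk_bvecq(const struct bvecq *bq)
	{
		/* Segments are chained through ->next (NULL-terminated);
		 * each segment holds nr_segs used entries in ->bv[].
		 */
		for (; bq; bq = bq->next) {
			for (u16 i = 0; i < bq->nr_segs; i++) {
				const struct bio_vec *bv = &bq->bv[i];

				pr_debug("seg %u: off=%u len=%u\n",
					 i, bv->bv_offset, bv->bv_len);
			}
		}
	}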
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Steve French <sfrench@samba.org>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Shyam Prasad N <sprasad@microsoft.com>
cc: Tom Talpey <tom@talpey.com>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/smb/client/smbdirect.c | 165 --------------------------------------
1 file changed, 165 deletions(-)
diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index f8a6be83db98..d9e026d5e9f9 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -3142,162 +3142,6 @@ static bool smb_set_sge(struct smb_extract_to_rdma *rdma,
return true;
}
-/*
- * Extract page fragments from a BVEC-class iterator and add them to an RDMA
- * element list. The pages are not pinned.
- */
-static ssize_t smb_extract_bvec_to_rdma(struct iov_iter *iter,
- struct smb_extract_to_rdma *rdma,
- ssize_t maxsize)
-{
- const struct bio_vec *bv = iter->bvec;
- unsigned long start = iter->iov_offset;
- unsigned int i;
- ssize_t ret = 0;
-
- for (i = 0; i < iter->nr_segs; i++) {
- size_t off, len;
-
- len = bv[i].bv_len;
- if (start >= len) {
- start -= len;
- continue;
- }
-
- len = min_t(size_t, maxsize, len - start);
- off = bv[i].bv_offset + start;
-
- if (!smb_set_sge(rdma, bv[i].bv_page, off, len))
- return -EIO;
-
- ret += len;
- maxsize -= len;
- if (rdma->nr_sge >= rdma->max_sge || maxsize <= 0)
- break;
- start = 0;
- }
-
- if (ret > 0)
- iov_iter_advance(iter, ret);
- return ret;
-}
-
-/*
- * Extract fragments from a KVEC-class iterator and add them to an RDMA list.
- * This can deal with vmalloc'd buffers as well as kmalloc'd or static buffers.
- * The pages are not pinned.
- */
-static ssize_t smb_extract_kvec_to_rdma(struct iov_iter *iter,
- struct smb_extract_to_rdma *rdma,
- ssize_t maxsize)
-{
- const struct kvec *kv = iter->kvec;
- unsigned long start = iter->iov_offset;
- unsigned int i;
- ssize_t ret = 0;
-
- for (i = 0; i < iter->nr_segs; i++) {
- struct page *page;
- unsigned long kaddr;
- size_t off, len, seg;
-
- len = kv[i].iov_len;
- if (start >= len) {
- start -= len;
- continue;
- }
-
- kaddr = (unsigned long)kv[i].iov_base + start;
- off = kaddr & ~PAGE_MASK;
- len = min_t(size_t, maxsize, len - start);
- kaddr &= PAGE_MASK;
-
- maxsize -= len;
- do {
- seg = min_t(size_t, len, PAGE_SIZE - off);
-
- if (is_vmalloc_or_module_addr((void *)kaddr))
- page = vmalloc_to_page((void *)kaddr);
- else
- page = virt_to_page((void *)kaddr);
-
- if (!smb_set_sge(rdma, page, off, seg))
- return -EIO;
-
- ret += seg;
- len -= seg;
- kaddr += PAGE_SIZE;
- off = 0;
- } while (len > 0 && rdma->nr_sge < rdma->max_sge);
-
- if (rdma->nr_sge >= rdma->max_sge || maxsize <= 0)
- break;
- start = 0;
- }
-
- if (ret > 0)
- iov_iter_advance(iter, ret);
- return ret;
-}
-
-/*
- * Extract folio fragments from a FOLIOQ-class iterator and add them to an RDMA
- * list. The folios are not pinned.
- */
-static ssize_t smb_extract_folioq_to_rdma(struct iov_iter *iter,
- struct smb_extract_to_rdma *rdma,
- ssize_t maxsize)
-{
- const struct folio_queue *folioq = iter->folioq;
- unsigned int slot = iter->folioq_slot;
- ssize_t ret = 0;
- size_t offset = iter->iov_offset;
-
- BUG_ON(!folioq);
-
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = folioq->next;
- if (WARN_ON_ONCE(!folioq))
- return -EIO;
- slot = 0;
- }
-
- do {
- struct folio *folio = folioq_folio(folioq, slot);
- size_t fsize = folioq_folio_size(folioq, slot);
-
- if (offset < fsize) {
- size_t part = umin(maxsize, fsize - offset);
-
- if (!smb_set_sge(rdma, folio_page(folio, 0), offset, part))
- return -EIO;
-
- offset += part;
- ret += part;
- maxsize -= part;
- }
-
- if (offset >= fsize) {
- offset = 0;
- slot++;
- if (slot >= folioq_nr_slots(folioq)) {
- if (!folioq->next) {
- WARN_ON_ONCE(ret < iter->count);
- break;
- }
- folioq = folioq->next;
- slot = 0;
- }
- }
- } while (rdma->nr_sge < rdma->max_sge && maxsize > 0);
-
- iter->folioq = folioq;
- iter->folioq_slot = slot;
- iter->iov_offset = offset;
- iter->count -= ret;
- return ret;
-}
-
/*
* Extract memory fragments from a BVECQ-class iterator and add them to an RDMA
* list. The folios are not pinned.
@@ -3373,15 +3217,6 @@ static ssize_t smb_extract_iter_to_rdma(struct iov_iter *iter, size_t len,
int before = rdma->nr_sge;
switch (iov_iter_type(iter)) {
- case ITER_BVEC:
- ret = smb_extract_bvec_to_rdma(iter, rdma, len);
- break;
- case ITER_KVEC:
- ret = smb_extract_kvec_to_rdma(iter, rdma, len);
- break;
- case ITER_FOLIOQ:
- ret = smb_extract_folioq_to_rdma(iter, rdma, len);
- break;
case ITER_BVECQ:
ret = smb_extract_bvecq_to_rdma(iter, rdma, len);
break;
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 20/26] netfs: Remove netfs_alloc/free_folioq_buffer()
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (18 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 19/26] cifs: Remove support for ITER_KVEC/BVEC/FOLIOQ from smb_extract_iter_to_rdma() David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 21/26] netfs: Remove netfs_extract_user_iter() David Howells
` (5 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Remove netfs_alloc/free_folioq_buffer() as these have been replaced with
netfs_alloc/free_bvecq_buffer().
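Callers allocate their buffer space with the bvecq-based helpers now. A
rough sketch only; the replacement's exact signature isn't shown in this
patch, so the argument list below is an assumption modelled on the
removed helper:
	struct bvecq *buffer = NULL;
	size_t cur_size = 0;
	int ret;

	/* Hypothetical call shape, mirroring the removed
	 * netfs_alloc_folioq_buffer() but building a bvecq chain.
	 */
	ret = netfs_alloc_bvecq_buffer(mapping, &buffer, &cur_size,
				       size, GFP_NOFS);
	if (ret < 0)
		return ret;
	/* ... use the buffer for I/O ... */
	netfs_free_bvecq_buffer(buffer);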
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/afs/dir_edit.c | 1 -
fs/netfs/misc.c | 98 ---------------------------------------
fs/smb/client/smb2ops.c | 1 -
fs/smb/client/smbdirect.c | 1 -
include/linux/netfs.h | 6 ---
5 files changed, 107 deletions(-)
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index 59d3decf7692..d6a9bb4e2039 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -10,7 +10,6 @@
#include <linux/namei.h>
#include <linux/pagemap.h>
#include <linux/iversion.h>
-#include <linux/folio_queue.h>
#include "internal.h"
#include "xdr_fs.h"
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index ab142cbaad35..a19724389147 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -8,104 +8,6 @@
#include <linux/swap.h>
#include "internal.h"
-#if 0
-/**
- * netfs_alloc_folioq_buffer - Allocate buffer space into a folio queue
- * @mapping: Address space to set on the folio (or NULL).
- * @_buffer: Pointer to the folio queue to add to (may point to a NULL; updated).
- * @_cur_size: Current size of the buffer (updated).
- * @size: Target size of the buffer.
- * @gfp: The allocation constraints.
- */
-int netfs_alloc_folioq_buffer(struct address_space *mapping,
- struct folio_queue **_buffer,
- size_t *_cur_size, ssize_t size, gfp_t gfp)
-{
- struct folio_queue *tail = *_buffer, *p;
-
- size = round_up(size, PAGE_SIZE);
- if (*_cur_size >= size)
- return 0;
-
- if (tail)
- while (tail->next)
- tail = tail->next;
-
- do {
- struct folio *folio;
- int order = 0, slot;
-
- if (!tail || folioq_full(tail)) {
- p = netfs_folioq_alloc(0, GFP_NOFS, netfs_trace_folioq_alloc_buffer);
- if (!p)
- return -ENOMEM;
- if (tail) {
- tail->next = p;
- p->prev = tail;
- } else {
- *_buffer = p;
- }
- tail = p;
- }
-
- if (size - *_cur_size > PAGE_SIZE)
- order = umin(ilog2(size - *_cur_size) - PAGE_SHIFT,
- MAX_PAGECACHE_ORDER);
-
- folio = folio_alloc(gfp, order);
- if (!folio && order > 0)
- folio = folio_alloc(gfp, 0);
- if (!folio)
- return -ENOMEM;
-
- folio->mapping = mapping;
- folio->index = *_cur_size / PAGE_SIZE;
- trace_netfs_folio(folio, netfs_folio_trace_alloc_buffer);
- slot = folioq_append_mark(tail, folio);
- *_cur_size += folioq_folio_size(tail, slot);
- } while (*_cur_size < size);
-
- return 0;
-}
-EXPORT_SYMBOL(netfs_alloc_folioq_buffer);
-
-/**
- * netfs_free_folioq_buffer - Free a folio queue.
- * @fq: The start of the folio queue to free
- *
- * Free up a chain of folio_queues and, if marked, the marked folios they point
- * to.
- */
-void netfs_free_folioq_buffer(struct folio_queue *fq)
-{
- struct folio_queue *next;
- struct folio_batch fbatch;
-
- folio_batch_init(&fbatch);
-
- for (; fq; fq = next) {
- for (int slot = 0; slot < folioq_count(fq); slot++) {
- struct folio *folio = folioq_folio(fq, slot);
-
- if (!folio ||
- !folioq_is_marked(fq, slot))
- continue;
-
- trace_netfs_folio(folio, netfs_folio_trace_put);
- if (folio_batch_add(&fbatch, folio))
- folio_batch_release(&fbatch);
- }
-
- netfs_stat_d(&netfs_n_folioq);
- next = fq->next;
- kfree(fq);
- }
-
- folio_batch_release(&fbatch);
-}
-EXPORT_SYMBOL(netfs_free_folioq_buffer);
-#endif
-
/**
* netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback
* @mapping: The mapping the folio belongs to.
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index 173acca17af7..0d19c8fc4c3d 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -13,7 +13,6 @@
#include <linux/sort.h>
#include <crypto/aead.h>
#include <linux/fiemap.h>
-#include <linux/folio_queue.h>
#include <uapi/linux/magic.h>
#include "cifsfs.h"
#include "cifsglob.h"
diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index d9e026d5e9f9..252e7757d21c 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -6,7 +6,6 @@
*/
#include <linux/module.h>
#include <linux/highmem.h>
-#include <linux/folio_queue.h>
#define __SMBDIRECT_SOCKET_DISCONNECT(__sc) smbd_disconnect_rdma_connection(__sc)
#include "../common/smbdirect/smbdirect_pdu.h"
#include "smbdirect.h"
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 3345c88bbd8e..9d8576a62868 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -463,12 +463,6 @@ void netfs_end_io_write(struct inode *inode);
int netfs_start_io_direct(struct inode *inode);
void netfs_end_io_direct(struct inode *inode);
-/* Buffer wrangling helpers API. */
-int netfs_alloc_folioq_buffer(struct address_space *mapping,
- struct folio_queue **_buffer,
- size_t *_cur_size, ssize_t size, gfp_t gfp);
-void netfs_free_folioq_buffer(struct folio_queue *fq);
-
/**
* netfs_inode - Get the netfs inode context from the inode
* @inode: The inode to query
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 21/26] netfs: Remove netfs_extract_user_iter()
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (19 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 20/26] netfs: Remove netfs_alloc/free_folioq_buffer() David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 22/26] iov_iter: Remove ITER_FOLIOQ David Howells
` (4 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Remove netfs_extract_user_iter() as it has been replaced with
netfs_extract_iter().
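For reference, a minimal sketch of the replacement call, following the
declaration kept in <linux/netfs.h> (iter, count and fpos are
placeholder names):
	struct bvecq *head = NULL;
	ssize_t extracted;

	/* Pull count bytes of pages out of the source iterator into a
	 * bvecq chain anchored at file position fpos; no segment limit.
	 */
	extracted = netfs_extract_iter(iter, count, SIZE_MAX, fpos,
				       &head, 0);
	if (extracted < 0)
		return extracted;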
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/iterator.c | 96 -------------------------------------------
include/linux/netfs.h | 3 --
2 files changed, 99 deletions(-)
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 581dbf650a19..442f893a0d65 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -137,102 +137,6 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
EXPORT_SYMBOL_GPL(netfs_extract_iter);
#if 0
-/**
- * netfs_extract_user_iter - Extract the pages from a user iterator into a bvec
- * @orig: The original iterator
- * @orig_len: The amount of iterator to copy
- * @new: The iterator to be set up
- * @extraction_flags: Flags to qualify the request
- *
- * Extract the page fragments from the given amount of the source iterator and
- * build up a second iterator that refers to all of those bits. This allows
- * the original iterator to be disposed of.
- *
- * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA be
- * allowed on the pages extracted.
- *
- * On success, the number of elements in the bvec is returned, the original
- * iterator will have been advanced by the amount extracted.
- *
- * The iov_iter_extract_mode() function should be used to query how cleanup
- * should be performed.
- */
-ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
- struct iov_iter *new,
- iov_iter_extraction_t extraction_flags)
-{
- struct bio_vec *bv = NULL;
- struct page **pages;
- unsigned int cur_npages;
- unsigned int max_pages;
- unsigned int npages = 0;
- unsigned int i;
- ssize_t ret;
- size_t count = orig_len, offset, len;
- size_t bv_size, pg_size;
-
- if (WARN_ON_ONCE(!iter_is_ubuf(orig) && !iter_is_iovec(orig)))
- return -EIO;
-
- max_pages = iov_iter_npages(orig, INT_MAX);
- bv_size = array_size(max_pages, sizeof(*bv));
- bv = kvmalloc(bv_size, GFP_KERNEL);
- if (!bv)
- return -ENOMEM;
-
- /* Put the page list at the end of the bvec list storage. bvec
- * elements are larger than page pointers, so as long as we work
- * 0->last, we should be fine.
- */
- pg_size = array_size(max_pages, sizeof(*pages));
- pages = (void *)bv + bv_size - pg_size;
-
- while (count && npages < max_pages) {
- ret = iov_iter_extract_pages(orig, &pages, count,
- max_pages - npages, extraction_flags,
- &offset);
- if (unlikely(ret <= 0)) {
- ret = ret ?: -EIO;
- break;
- }
-
- if (ret > count) {
- pr_err("get_pages rc=%zd more than %zu\n", ret, count);
- break;
- }
-
- count -= ret;
- ret += offset;
- cur_npages = DIV_ROUND_UP(ret, PAGE_SIZE);
-
- if (npages + cur_npages > max_pages) {
- pr_err("Out of bvec array capacity (%u vs %u)\n",
- npages + cur_npages, max_pages);
- break;
- }
-
- for (i = 0; i < cur_npages; i++) {
- len = ret > PAGE_SIZE ? PAGE_SIZE : ret;
- bvec_set_page(bv + npages + i, *pages++, len - offset, offset);
- ret -= len;
- offset = 0;
- }
-
- npages += cur_npages;
- }
-
- if (ret < 0 && (ret == -ENOMEM || npages == 0)) {
- for (i = 0; i < npages; i++)
- unpin_user_page(bv[i].bv_page);
- kvfree(bv);
- return ret;
- }
-
- iov_iter_bvec(new, orig->data_source, bv, npages, orig_len - count);
- return npages;
-}
-EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
-
/*
* Select the span of a bvec iterator we're going to use. Limit it by both maximum
* size and maximum number of segments. Returns the size of the span in bytes.
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 9d8576a62868..65e39f9b0c10 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -448,9 +448,6 @@ void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_segs,
unsigned long long fpos, struct bvecq **_bvecq_head,
iov_iter_extraction_t extraction_flags);
-ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
- struct iov_iter *new,
- iov_iter_extraction_t extraction_flags);
size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
size_t max_size, size_t max_segs);
void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq);
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 22/26] iov_iter: Remove ITER_FOLIOQ
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (20 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 21/26] netfs: Remove netfs_extract_user_iter() David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 23/26] netfs: Remove folio_queue and rolling_buffer David Howells
` (3 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Remove ITER_FOLIOQ as it's no longer used.
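Buffers that were previously wrapped with iov_iter_folio_queue() get
wrapped with the segmented bvecq initialiser instead, as declared in
<linux/uio.h>; a minimal sketch (bq and count are placeholders):
	struct iov_iter iter;

	/* Present a bvecq chain as an I/O iterator, starting at slot 0,
	 * offset 0, spanning count bytes.
	 */
	iov_iter_bvec_queue(&iter, READ, bq, 0, 0, count);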
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
include/linux/iov_iter.h | 65 +---------
include/linux/uio.h | 12 --
lib/iov_iter.c | 235 +--------------------------------
lib/scatterlist.c | 67 +---------
lib/tests/kunit_iov_iter.c | 257 -------------------------------------
5 files changed, 5 insertions(+), 631 deletions(-)
diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
index 309642b3901f..9f3a4497c5c3 100644
--- a/include/linux/iov_iter.h
+++ b/include/linux/iov_iter.h
@@ -10,7 +10,6 @@
#include <linux/uio.h>
#include <linux/bvecq.h>
-#include <linux/folio_queue.h>
typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len,
void *priv, void *priv2);
@@ -194,62 +193,6 @@ size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
return progress;
}
-/*
- * Handle ITER_FOLIOQ.
- */
-static __always_inline
-size_t iterate_folioq(struct iov_iter *iter, size_t len, void *priv, void *priv2,
- iov_step_f step)
-{
- const struct folio_queue *folioq = iter->folioq;
- unsigned int slot = iter->folioq_slot;
- size_t progress = 0, skip = iter->iov_offset;
-
- if (slot == folioq_nr_slots(folioq)) {
- /* The iterator may have been extended. */
- folioq = folioq->next;
- slot = 0;
- }
-
- do {
- struct folio *folio = folioq_folio(folioq, slot);
- size_t part, remain = 0, consumed;
- size_t fsize;
- void *base;
-
- if (!folio)
- break;
-
- fsize = folioq_folio_size(folioq, slot);
- if (skip < fsize) {
- base = kmap_local_folio(folio, skip);
- part = umin(len, PAGE_SIZE - skip % PAGE_SIZE);
- remain = step(base, progress, part, priv, priv2);
- kunmap_local(base);
- consumed = part - remain;
- len -= consumed;
- progress += consumed;
- skip += consumed;
- }
- if (skip >= fsize) {
- skip = 0;
- slot++;
- if (slot == folioq_nr_slots(folioq) && folioq->next) {
- folioq = folioq->next;
- slot = 0;
- }
- }
- if (remain)
- break;
- } while (len);
-
- iter->folioq_slot = slot;
- iter->folioq = folioq;
- iter->iov_offset = skip;
- iter->count -= progress;
- return progress;
-}
-
/*
* Handle ITER_XARRAY.
*/
@@ -361,8 +304,6 @@ size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv,
return iterate_kvec(iter, len, priv, priv2, step);
if (iov_iter_is_bvecq(iter))
return iterate_bvecq(iter, len, priv, priv2, step);
- if (iov_iter_is_folioq(iter))
- return iterate_folioq(iter, len, priv, priv2, step);
if (iov_iter_is_xarray(iter))
return iterate_xarray(iter, len, priv, priv2, step);
return iterate_discard(iter, len, priv, priv2, step);
@@ -397,8 +338,8 @@ size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv,
* buffer is presented in segments, which for kernel iteration are broken up by
* physical pages and mapped, with the mapped address being presented.
*
- * [!] Note This will only handle BVEC, KVEC, BVECQ, FOLIOQ, XARRAY and
- * DISCARD-type iterators; it will not handle UBUF or IOVEC-type iterators.
+ * [!] Note This will only handle BVEC, KVEC, BVECQ, XARRAY and DISCARD-type
+ * iterators; it will not handle UBUF or IOVEC-type iterators.
*
* A step functions, @step, must be provided, one for handling mapped kernel
* addresses and the other is given user addresses which have the potential to
@@ -427,8 +368,6 @@ size_t iterate_and_advance_kernel(struct iov_iter *iter, size_t len, void *priv,
return iterate_kvec(iter, len, priv, priv2, step);
if (iov_iter_is_bvecq(iter))
return iterate_bvecq(iter, len, priv, priv2, step);
- if (iov_iter_is_folioq(iter))
- return iterate_folioq(iter, len, priv, priv2, step);
if (iov_iter_is_xarray(iter))
return iterate_xarray(iter, len, priv, priv2, step);
return iterate_discard(iter, len, priv, priv2, step);
diff --git a/include/linux/uio.h b/include/linux/uio.h
index aa50d348dfcc..e84a0c4f28c6 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -11,7 +11,6 @@
#include <uapi/linux/uio.h>
struct page;
-struct folio_queue;
typedef unsigned int __bitwise iov_iter_extraction_t;
@@ -26,7 +25,6 @@ enum iter_type {
ITER_IOVEC,
ITER_BVEC,
ITER_KVEC,
- ITER_FOLIOQ,
ITER_BVECQ,
ITER_XARRAY,
ITER_DISCARD,
@@ -69,7 +67,6 @@ struct iov_iter {
const struct iovec *__iov;
const struct kvec *kvec;
const struct bio_vec *bvec;
- const struct folio_queue *folioq;
const struct bvecq *bvecq;
struct xarray *xarray;
void __user *ubuf;
@@ -79,7 +76,6 @@ struct iov_iter {
};
union {
unsigned long nr_segs;
- u8 folioq_slot;
u16 bvecq_slot;
loff_t xarray_start;
};
@@ -148,11 +144,6 @@ static inline bool iov_iter_is_discard(const struct iov_iter *i)
return iov_iter_type(i) == ITER_DISCARD;
}
-static inline bool iov_iter_is_folioq(const struct iov_iter *i)
-{
- return iov_iter_type(i) == ITER_FOLIOQ;
-}
-
static inline bool iov_iter_is_bvecq(const struct iov_iter *i)
{
return iov_iter_type(i) == ITER_BVECQ;
@@ -303,9 +294,6 @@ void iov_iter_kvec(struct iov_iter *i, unsigned int direction, const struct kvec
void iov_iter_bvec(struct iov_iter *i, unsigned int direction, const struct bio_vec *bvec,
unsigned long nr_segs, size_t count);
void iov_iter_discard(struct iov_iter *i, unsigned int direction, size_t count);
-void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
- const struct folio_queue *folioq,
- unsigned int first_slot, unsigned int offset, size_t count);
void iov_iter_bvec_queue(struct iov_iter *i, unsigned int direction,
const struct bvecq *bvecq,
unsigned int first_slot, unsigned int offset, size_t count);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 4f091e6d4a22..d203088dbf5a 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -538,39 +538,6 @@ static void iov_iter_iovec_advance(struct iov_iter *i, size_t size)
i->__iov = iov;
}
-static void iov_iter_folioq_advance(struct iov_iter *i, size_t size)
-{
- const struct folio_queue *folioq = i->folioq;
- unsigned int slot = i->folioq_slot;
-
- if (!i->count)
- return;
- i->count -= size;
-
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = folioq->next;
- slot = 0;
- }
-
- size += i->iov_offset; /* From beginning of current segment. */
- do {
- size_t fsize = folioq_folio_size(folioq, slot);
-
- if (likely(size < fsize))
- break;
- size -= fsize;
- slot++;
- if (slot >= folioq_nr_slots(folioq) && folioq->next) {
- folioq = folioq->next;
- slot = 0;
- }
- } while (size);
-
- i->iov_offset = size;
- i->folioq_slot = slot;
- i->folioq = folioq;
-}
-
static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by)
{
const struct bvecq *bq = i->bvecq;
@@ -616,8 +583,6 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
iov_iter_iovec_advance(i, size);
} else if (iov_iter_is_bvec(i)) {
iov_iter_bvec_advance(i, size);
- } else if (iov_iter_is_folioq(i)) {
- iov_iter_folioq_advance(i, size);
} else if (iov_iter_is_bvecq(i)) {
iov_iter_bvecq_advance(i, size);
} else if (iov_iter_is_discard(i)) {
@@ -626,32 +591,6 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
}
EXPORT_SYMBOL(iov_iter_advance);
-static void iov_iter_folioq_revert(struct iov_iter *i, size_t unroll)
-{
- const struct folio_queue *folioq = i->folioq;
- unsigned int slot = i->folioq_slot;
-
- for (;;) {
- size_t fsize;
-
- if (slot == 0) {
- folioq = folioq->prev;
- slot = folioq_nr_slots(folioq);
- }
- slot--;
-
- fsize = folioq_folio_size(folioq, slot);
- if (unroll <= fsize) {
- i->iov_offset = fsize - unroll;
- break;
- }
- unroll -= fsize;
- }
-
- i->folioq_slot = slot;
- i->folioq = folioq;
-}
-
static void iov_iter_bvecq_revert(struct iov_iter *i, size_t unroll)
{
const struct bvecq *bq = i->bvecq;
@@ -709,9 +648,6 @@ void iov_iter_revert(struct iov_iter *i, size_t unroll)
}
unroll -= n;
}
- } else if (iov_iter_is_folioq(i)) {
- i->iov_offset = 0;
- iov_iter_folioq_revert(i, unroll);
} else if (iov_iter_is_bvecq(i)) {
i->iov_offset = 0;
iov_iter_bvecq_revert(i, unroll);
@@ -744,8 +680,6 @@ size_t iov_iter_single_seg_count(const struct iov_iter *i)
}
if (!i->count)
return 0;
- if (unlikely(iov_iter_is_folioq(i)))
- return umin(folioq_folio_size(i->folioq, i->folioq_slot), i->count);
if (unlikely(iov_iter_is_bvecq(i)))
return min(i->count, i->bvecq->bv[i->bvecq_slot].bv_len - i->iov_offset);
return i->count;
@@ -784,36 +718,6 @@ void iov_iter_bvec(struct iov_iter *i, unsigned int direction,
}
EXPORT_SYMBOL(iov_iter_bvec);
-/**
- * iov_iter_folio_queue - Initialise an I/O iterator to use the folios in a folio queue
- * @i: The iterator to initialise.
- * @direction: The direction of the transfer.
- * @folioq: The starting point in the folio queue.
- * @first_slot: The first slot in the folio queue to use
- * @offset: The offset into the folio in the first slot to start at
- * @count: The size of the I/O buffer in bytes.
- *
- * Set up an I/O iterator to either draw data out of the pages attached to an
- * inode or to inject data into those pages. The pages *must* be prevented
- * from evaporation, either by taking a ref on them or locking them by the
- * caller.
- */
-void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
- const struct folio_queue *folioq, unsigned int first_slot,
- unsigned int offset, size_t count)
-{
- BUG_ON(direction & ~1);
- *i = (struct iov_iter) {
- .iter_type = ITER_FOLIOQ,
- .data_source = direction,
- .folioq = folioq,
- .folioq_slot = first_slot,
- .count = count,
- .iov_offset = offset,
- };
-}
-EXPORT_SYMBOL(iov_iter_folio_queue);
-
/**
* iov_iter_bvec_queue - Initialise an I/O iterator to use a segmented bvec queue
* @i: The iterator to initialise.
@@ -982,9 +886,6 @@ unsigned long iov_iter_alignment(const struct iov_iter *i)
if (iov_iter_is_bvec(i))
return iov_iter_alignment_bvec(i);
- /* With both xarray and folioq types, we're dealing with whole folios. */
- if (iov_iter_is_folioq(i))
- return i->iov_offset | i->count;
if (iov_iter_is_bvecq(i))
return iov_iter_alignment_bvecq(i);
if (iov_iter_is_xarray(i))
@@ -1039,65 +940,6 @@ static int want_pages_array(struct page ***res, size_t size,
return count;
}
-static ssize_t iter_folioq_get_pages(struct iov_iter *iter,
- struct page ***ppages, size_t maxsize,
- unsigned maxpages, size_t *_start_offset)
-{
- const struct folio_queue *folioq = iter->folioq;
- struct page **pages;
- unsigned int slot = iter->folioq_slot;
- size_t extracted = 0, count = iter->count, iov_offset = iter->iov_offset;
-
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = folioq->next;
- slot = 0;
- if (WARN_ON(iov_offset != 0))
- return -EIO;
- }
-
- maxpages = want_pages_array(ppages, maxsize, iov_offset & ~PAGE_MASK, maxpages);
- if (!maxpages)
- return -ENOMEM;
- *_start_offset = iov_offset & ~PAGE_MASK;
- pages = *ppages;
-
- for (;;) {
- struct folio *folio = folioq_folio(folioq, slot);
- size_t offset = iov_offset, fsize = folioq_folio_size(folioq, slot);
- size_t part = PAGE_SIZE - offset % PAGE_SIZE;
-
- if (offset < fsize) {
- part = umin(part, umin(maxsize - extracted, fsize - offset));
- count -= part;
- iov_offset += part;
- extracted += part;
-
- *pages = folio_page(folio, offset / PAGE_SIZE);
- get_page(*pages);
- pages++;
- maxpages--;
- }
-
- if (maxpages == 0 || extracted >= maxsize)
- break;
-
- if (iov_offset >= fsize) {
- iov_offset = 0;
- slot++;
- if (slot == folioq_nr_slots(folioq) && folioq->next) {
- folioq = folioq->next;
- slot = 0;
- }
- }
- }
-
- iter->count = count;
- iter->iov_offset = iov_offset;
- iter->folioq = folioq;
- iter->folioq_slot = slot;
- return extracted;
-}
-
static ssize_t iter_xarray_populate_pages(struct page **pages, struct xarray *xa,
pgoff_t index, unsigned int nr_pages)
{
@@ -1249,8 +1091,6 @@ static ssize_t __iov_iter_get_pages_alloc(struct iov_iter *i,
}
return maxsize;
}
- if (iov_iter_is_folioq(i))
- return iter_folioq_get_pages(i, pages, maxsize, maxpages, start);
if (iov_iter_is_xarray(i))
return iter_xarray_get_pages(i, pages, maxsize, maxpages, start);
WARN_ON_ONCE(iov_iter_is_bvecq(i));
@@ -1366,11 +1206,6 @@ int iov_iter_npages(const struct iov_iter *i, int maxpages)
return iov_npages(i, maxpages);
if (iov_iter_is_bvec(i))
return bvec_npages(i, maxpages);
- if (iov_iter_is_folioq(i)) {
- unsigned offset = i->iov_offset % PAGE_SIZE;
- int npages = DIV_ROUND_UP(offset + i->count, PAGE_SIZE);
- return min(npages, maxpages);
- }
if (iov_iter_is_bvecq(i))
return iov_npages_bvecq(i, maxpages);
if (iov_iter_is_xarray(i)) {
@@ -1654,68 +1489,6 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
i->nr_segs = state->nr_segs;
}
-/*
- * Extract a list of contiguous pages from an ITER_FOLIOQ iterator. This does
- * not get references on the pages, nor does it get a pin on them.
- */
-static ssize_t iov_iter_extract_folioq_pages(struct iov_iter *i,
- struct page ***pages, size_t maxsize,
- unsigned int maxpages,
- iov_iter_extraction_t extraction_flags,
- size_t *offset0)
-{
- const struct folio_queue *folioq = i->folioq;
- struct page **p;
- unsigned int nr = 0;
- size_t extracted = 0, offset, slot = i->folioq_slot;
-
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = folioq->next;
- slot = 0;
- if (WARN_ON(i->iov_offset != 0))
- return -EIO;
- }
-
- offset = i->iov_offset & ~PAGE_MASK;
- *offset0 = offset;
-
- maxpages = want_pages_array(pages, maxsize, offset, maxpages);
- if (!maxpages)
- return -ENOMEM;
- p = *pages;
-
- for (;;) {
- struct folio *folio = folioq_folio(folioq, slot);
- size_t offset = i->iov_offset, fsize = folioq_folio_size(folioq, slot);
- size_t part = PAGE_SIZE - offset % PAGE_SIZE;
-
- if (offset < fsize) {
- part = umin(part, umin(maxsize - extracted, fsize - offset));
- i->count -= part;
- i->iov_offset += part;
- extracted += part;
-
- p[nr++] = folio_page(folio, offset / PAGE_SIZE);
- }
-
- if (nr >= maxpages || extracted >= maxsize)
- break;
-
- if (i->iov_offset >= fsize) {
- i->iov_offset = 0;
- slot++;
- if (slot == folioq_nr_slots(folioq) && folioq->next) {
- folioq = folioq->next;
- slot = 0;
- }
- }
- }
-
- i->folioq = folioq;
- i->folioq_slot = slot;
- return extracted;
-}
-
/*
* Extract a list of virtually contiguous pages from an ITER_BVECQ iterator.
* This does not get references on the pages, nor does it get a pin on them.
@@ -2078,8 +1851,8 @@ static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
* added to the pages, but refs will not be taken.
* iov_iter_extract_will_pin() will return true.
*
- * (*) If the iterator is ITER_KVEC, ITER_BVEC, ITER_FOLIOQ or ITER_XARRAY, the
- * pages are merely listed; no extra refs or pins are obtained.
+ * (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are
+ * merely listed; no extra refs or pins are obtained.
* iov_iter_extract_will_pin() will return 0.
*
* Note also:
@@ -2114,10 +1887,6 @@ ssize_t iov_iter_extract_pages(struct iov_iter *i,
return iov_iter_extract_bvec_pages(i, pages, maxsize,
maxpages, extraction_flags,
offset0);
- if (iov_iter_is_folioq(i))
- return iov_iter_extract_folioq_pages(i, pages, maxsize,
- maxpages, extraction_flags,
- offset0);
if (iov_iter_is_bvecq(i))
return iov_iter_extract_bvecq_pages(i, pages, maxsize,
maxpages, extraction_flags,
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 93a3d194a914..25f64272839e 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -12,7 +12,6 @@
#include <linux/bvec.h>
#include <linux/bvecq.h>
#include <linux/uio.h>
-#include <linux/folio_queue.h>
/**
* sg_nents - return total count of entries in scatterlist
@@ -1268,67 +1267,6 @@ static ssize_t extract_kvec_to_sg(struct iov_iter *iter,
return ret;
}
-/*
- * Extract up to sg_max folios from an FOLIOQ-type iterator and add them to
- * the scatterlist. The pages are not pinned.
- */
-static ssize_t extract_folioq_to_sg(struct iov_iter *iter,
- ssize_t maxsize,
- struct sg_table *sgtable,
- unsigned int sg_max,
- iov_iter_extraction_t extraction_flags)
-{
- const struct folio_queue *folioq = iter->folioq;
- struct scatterlist *sg = sgtable->sgl + sgtable->nents;
- unsigned int slot = iter->folioq_slot;
- ssize_t ret = 0;
- size_t offset = iter->iov_offset;
-
- BUG_ON(!folioq);
-
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = folioq->next;
- if (WARN_ON_ONCE(!folioq))
- return 0;
- slot = 0;
- }
-
- do {
- struct folio *folio = folioq_folio(folioq, slot);
- size_t fsize = folioq_folio_size(folioq, slot);
-
- if (offset < fsize) {
- size_t part = umin(maxsize - ret, fsize - offset);
-
- sg_set_page(sg, folio_page(folio, 0), part, offset);
- sgtable->nents++;
- sg++;
- sg_max--;
- offset += part;
- ret += part;
- }
-
- if (offset >= fsize) {
- offset = 0;
- slot++;
- if (slot >= folioq_nr_slots(folioq)) {
- if (!folioq->next) {
- WARN_ON_ONCE(ret < iter->count);
- break;
- }
- folioq = folioq->next;
- slot = 0;
- }
- }
- } while (sg_max > 0 && ret < maxsize);
-
- iter->folioq = folioq;
- iter->folioq_slot = slot;
- iter->iov_offset = offset;
- iter->count -= ret;
- return ret;
-}
-
/*
* Extract up to sg_max folios from an BVECQ-type iterator and add them to
* the scatterlist. The pages are not pinned.
@@ -1453,7 +1391,7 @@ static ssize_t extract_xarray_to_sg(struct iov_iter *iter,
* addition of @sg_max elements.
*
* The pages referred to by UBUF- and IOVEC-type iterators are extracted and
- * pinned; BVEC-, KVEC-, FOLIOQ- and XARRAY-type are extracted but aren't
+ * pinned; BVEC-, KVEC-, BVECQ- and XARRAY-type are extracted but aren't
* pinned; DISCARD-type is not supported.
*
* No end mark is placed on the scatterlist; that's left to the caller.
@@ -1486,9 +1424,6 @@ ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
case ITER_KVEC:
return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max,
extraction_flags);
- case ITER_FOLIOQ:
- return extract_folioq_to_sg(iter, maxsize, sgtable, sg_max,
- extraction_flags);
case ITER_BVECQ:
return extract_bvecq_to_sg(iter, maxsize, sgtable, sg_max,
extraction_flags);
diff --git a/lib/tests/kunit_iov_iter.c b/lib/tests/kunit_iov_iter.c
index ff0621636ff1..7011f0ff7396 100644
--- a/lib/tests/kunit_iov_iter.c
+++ b/lib/tests/kunit_iov_iter.c
@@ -11,9 +11,7 @@
#include <linux/vmalloc.h>
#include <linux/mm.h>
#include <linux/uio.h>
-#include <linux/bvec.h>
#include <linux/bvecq.h>
-#include <linux/folio_queue.h>
#include <kunit/test.h>
MODULE_DESCRIPTION("iov_iter testing");
@@ -364,179 +362,6 @@ static void __init iov_kunit_copy_from_bvec(struct kunit *test)
KUNIT_SUCCEED(test);
}
-static void iov_kunit_destroy_folioq(void *data)
-{
- struct folio_queue *folioq, *next;
-
- for (folioq = data; folioq; folioq = next) {
- next = folioq->next;
- for (int i = 0; i < folioq_nr_slots(folioq); i++)
- if (folioq_folio(folioq, i))
- folio_put(folioq_folio(folioq, i));
- kfree(folioq);
- }
-}
-
-static void __init iov_kunit_load_folioq(struct kunit *test,
- struct iov_iter *iter, int dir,
- struct folio_queue *folioq,
- struct page **pages, size_t npages)
-{
- struct folio_queue *p = folioq;
- size_t size = 0;
- int i;
-
- for (i = 0; i < npages; i++) {
- if (folioq_full(p)) {
- p->next = kzalloc_obj(struct folio_queue);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p->next);
- folioq_init(p->next, 0);
- p->next->prev = p;
- p = p->next;
- }
- folioq_append(p, page_folio(pages[i]));
- size += PAGE_SIZE;
- }
- iov_iter_folio_queue(iter, dir, folioq, 0, 0, size);
-}
-
-static struct folio_queue *iov_kunit_create_folioq(struct kunit *test)
-{
- struct folio_queue *folioq;
-
- folioq = kzalloc_obj(struct folio_queue);
- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, folioq);
- kunit_add_action_or_reset(test, iov_kunit_destroy_folioq, folioq);
- folioq_init(folioq, 0);
- return folioq;
-}
-
-/*
- * Test copying to a ITER_FOLIOQ-type iterator.
- */
-static void __init iov_kunit_copy_to_folioq(struct kunit *test)
-{
- const struct kvec_test_range *pr;
- struct iov_iter iter;
- struct folio_queue *folioq;
- struct page **spages, **bpages;
- u8 *scratch, *buffer;
- size_t bufsize, npages, size, copied;
- int i, patt;
-
- bufsize = 0x100000;
- npages = bufsize / PAGE_SIZE;
-
- folioq = iov_kunit_create_folioq(test);
-
- scratch = iov_kunit_create_buffer(test, &spages, npages);
- for (i = 0; i < bufsize; i++)
- scratch[i] = pattern(i);
-
- buffer = iov_kunit_create_buffer(test, &bpages, npages);
- memset(buffer, 0, bufsize);
-
- iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages);
-
- i = 0;
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- size = pr->to - pr->from;
- KUNIT_ASSERT_LE(test, pr->to, bufsize);
-
- iov_iter_folio_queue(&iter, READ, folioq, 0, 0, pr->to);
- iov_iter_advance(&iter, pr->from);
- copied = copy_to_iter(scratch + i, size, &iter);
-
- KUNIT_EXPECT_EQ(test, copied, size);
- KUNIT_EXPECT_EQ(test, iter.count, 0);
- KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE);
- i += size;
- if (test->status == KUNIT_FAILURE)
- goto stop;
- }
-
- /* Build the expected image in the scratch buffer. */
- patt = 0;
- memset(scratch, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++)
- for (i = pr->from; i < pr->to; i++)
- scratch[i] = pattern(patt++);
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, buffer[i], scratch[i], "at i=%x", i);
- if (buffer[i] != scratch[i])
- return;
- }
-
-stop:
- KUNIT_SUCCEED(test);
-}
-
-/*
- * Test copying from a ITER_FOLIOQ-type iterator.
- */
-static void __init iov_kunit_copy_from_folioq(struct kunit *test)
-{
- const struct kvec_test_range *pr;
- struct iov_iter iter;
- struct folio_queue *folioq;
- struct page **spages, **bpages;
- u8 *scratch, *buffer;
- size_t bufsize, npages, size, copied;
- int i, j;
-
- bufsize = 0x100000;
- npages = bufsize / PAGE_SIZE;
-
- folioq = iov_kunit_create_folioq(test);
-
- buffer = iov_kunit_create_buffer(test, &bpages, npages);
- for (i = 0; i < bufsize; i++)
- buffer[i] = pattern(i);
-
- scratch = iov_kunit_create_buffer(test, &spages, npages);
- memset(scratch, 0, bufsize);
-
- iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages);
-
- i = 0;
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- size = pr->to - pr->from;
- KUNIT_ASSERT_LE(test, pr->to, bufsize);
-
- iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to);
- iov_iter_advance(&iter, pr->from);
- copied = copy_from_iter(scratch + i, size, &iter);
-
- KUNIT_EXPECT_EQ(test, copied, size);
- KUNIT_EXPECT_EQ(test, iter.count, 0);
- KUNIT_EXPECT_EQ(test, iter.iov_offset, pr->to % PAGE_SIZE);
- i += size;
- }
-
- /* Build the expected image in the main buffer. */
- i = 0;
- memset(buffer, 0, bufsize);
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- for (j = pr->from; j < pr->to; j++) {
- buffer[i++] = pattern(j);
- if (i >= bufsize)
- goto stop;
- }
- }
-stop:
-
- /* Compare the images */
- for (i = 0; i < bufsize; i++) {
- KUNIT_EXPECT_EQ_MSG(test, scratch[i], buffer[i], "at i=%x", i);
- if (scratch[i] != buffer[i])
- return;
- }
-
- KUNIT_SUCCEED(test);
-}
-
static void iov_kunit_destroy_bvecq(void *data)
{
struct bvecq *bq, *next;
@@ -1029,85 +854,6 @@ static void __init iov_kunit_extract_pages_bvec(struct kunit *test)
KUNIT_SUCCEED(test);
}
-/*
- * Test the extraction of ITER_FOLIOQ-type iterators.
- */
-static void __init iov_kunit_extract_pages_folioq(struct kunit *test)
-{
- const struct kvec_test_range *pr;
- struct folio_queue *folioq;
- struct iov_iter iter;
- struct page **bpages, *pagelist[8], **pages = pagelist;
- ssize_t len;
- size_t bufsize, size = 0, npages;
- int i, from;
-
- bufsize = 0x100000;
- npages = bufsize / PAGE_SIZE;
-
- folioq = iov_kunit_create_folioq(test);
-
- iov_kunit_create_buffer(test, &bpages, npages);
- iov_kunit_load_folioq(test, &iter, READ, folioq, bpages, npages);
-
- for (pr = kvec_test_ranges; pr->from >= 0; pr++) {
- from = pr->from;
- size = pr->to - from;
- KUNIT_ASSERT_LE(test, pr->to, bufsize);
-
- iov_iter_folio_queue(&iter, WRITE, folioq, 0, 0, pr->to);
- iov_iter_advance(&iter, from);
-
- do {
- size_t offset0 = LONG_MAX;
-
- for (i = 0; i < ARRAY_SIZE(pagelist); i++)
- pagelist[i] = (void *)(unsigned long)0xaa55aa55aa55aa55ULL;
-
- len = iov_iter_extract_pages(&iter, &pages, 100 * 1024,
- ARRAY_SIZE(pagelist), 0, &offset0);
- KUNIT_EXPECT_GE(test, len, 0);
- if (len < 0)
- break;
- KUNIT_EXPECT_LE(test, len, size);
- KUNIT_EXPECT_EQ(test, iter.count, size - len);
- if (len == 0)
- break;
- size -= len;
- KUNIT_EXPECT_GE(test, (ssize_t)offset0, 0);
- KUNIT_EXPECT_LT(test, offset0, PAGE_SIZE);
-
- for (i = 0; i < ARRAY_SIZE(pagelist); i++) {
- struct page *p;
- ssize_t part = min_t(ssize_t, len, PAGE_SIZE - offset0);
- int ix;
-
- KUNIT_ASSERT_GE(test, part, 0);
- ix = from / PAGE_SIZE;
- KUNIT_ASSERT_LT(test, ix, npages);
- p = bpages[ix];
- KUNIT_EXPECT_PTR_EQ(test, pagelist[i], p);
- KUNIT_EXPECT_EQ(test, offset0, from % PAGE_SIZE);
- from += part;
- len -= part;
- KUNIT_ASSERT_GE(test, len, 0);
- if (len == 0)
- break;
- offset0 = 0;
- }
-
- if (test->status == KUNIT_FAILURE)
- goto stop;
- } while (iov_iter_count(&iter) > 0);
-
- KUNIT_EXPECT_EQ(test, size, 0);
- KUNIT_EXPECT_EQ(test, iter.count, 0);
- }
-
-stop:
- KUNIT_SUCCEED(test);
-}
-
/*
* Test the extraction of ITER_XARRAY-type iterators.
*/
@@ -1192,15 +938,12 @@ static struct kunit_case __refdata iov_kunit_cases[] = {
KUNIT_CASE(iov_kunit_copy_from_kvec),
KUNIT_CASE(iov_kunit_copy_to_bvec),
KUNIT_CASE(iov_kunit_copy_from_bvec),
- KUNIT_CASE(iov_kunit_copy_to_folioq),
- KUNIT_CASE(iov_kunit_copy_from_folioq),
KUNIT_CASE(iov_kunit_copy_to_bvecq),
KUNIT_CASE(iov_kunit_copy_from_bvecq),
KUNIT_CASE(iov_kunit_copy_to_xarray),
KUNIT_CASE(iov_kunit_copy_from_xarray),
KUNIT_CASE(iov_kunit_extract_pages_kvec),
KUNIT_CASE(iov_kunit_extract_pages_bvec),
- KUNIT_CASE(iov_kunit_extract_pages_folioq),
KUNIT_CASE(iov_kunit_extract_pages_xarray),
{}
};
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 23/26] netfs: Remove folio_queue and rolling_buffer
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (21 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 22/26] iov_iter: Remove ITER_FOLIOQ David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 24/26] netfs: Check for too much data being read David Howells
` (2 subsequent siblings)
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Remove folio_queue and rolling_buffer as they're no longer used.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
Documentation/core-api/folio_queue.rst | 209 -----------------
Documentation/core-api/index.rst | 1 -
fs/netfs/iterator.c | 192 ----------------
fs/netfs/rolling_buffer.c | 297 -------------------------
include/linux/folio_queue.h | 282 -----------------------
include/linux/rolling_buffer.h | 64 ------
6 files changed, 1045 deletions(-)
delete mode 100644 Documentation/core-api/folio_queue.rst
delete mode 100644 fs/netfs/rolling_buffer.c
delete mode 100644 include/linux/folio_queue.h
delete mode 100644 include/linux/rolling_buffer.h
diff --git a/Documentation/core-api/folio_queue.rst b/Documentation/core-api/folio_queue.rst
deleted file mode 100644
index b7628896d2b6..000000000000
--- a/Documentation/core-api/folio_queue.rst
+++ /dev/null
@@ -1,209 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0+
-
-===========
-Folio Queue
-===========
-
-:Author: David Howells <dhowells@redhat.com>
-
-.. Contents:
-
- * Overview
- * Initialisation
- * Adding and removing folios
- * Querying information about a folio
- * Querying information about a folio_queue
- * Folio queue iteration
- * Folio marks
- * Lockless simultaneous production/consumption issues
-
-
-Overview
-========
-
-The folio_queue struct forms a single segment in a segmented list of folios
-that can be used to form an I/O buffer. As such, the list can be iterated over
-using the ITER_FOLIOQ iov_iter type.
-
-The publicly accessible members of the structure are::
-
- struct folio_queue {
- struct folio_queue *next;
- struct folio_queue *prev;
- ...
- };
-
-A pair of pointers are provided, ``next`` and ``prev``, that point to the
-segments on either side of the segment being accessed. Whilst this is a
-doubly-linked list, it is intentionally not a circular list; the outward
-sibling pointers in terminal segments should be NULL.
-
-Each segment in the list also stores:
-
- * an ordered sequence of folio pointers,
- * the size of each folio and
- * three 1-bit marks per folio,
-
-but these should not be accessed directly as the underlying data structure may
-change, but rather the access functions outlined below should be used.
-
-The facility can be made accessible by::
-
- #include <linux/folio_queue.h>
-
-and to use the iterator::
-
- #include <linux/uio.h>
-
-
-Initialisation
-==============
-
-A segment should be initialised by calling::
-
- void folioq_init(struct folio_queue *folioq);
-
-with a pointer to the segment to be initialised. Note that this will not
-necessarily initialise all the folio pointers, so care must be taken to check
-the number of folios added.
-
-
-Adding and removing folios
-==========================
-
-Folios can be set in the next unused slot in a segment struct by calling one
-of::
-
- unsigned int folioq_append(struct folio_queue *folioq,
- struct folio *folio);
-
- unsigned int folioq_append_mark(struct folio_queue *folioq,
- struct folio *folio);
-
-Both functions update the stored folio count, store the folio and note its
-size. The second function also sets the first mark for the folio added. Both
-functions return the number of the slot used. [!] Note that no attempt is made
-to check that the capacity wasn't overrun and the list will not be extended
-automatically.
-
-A folio can be excised by calling::
-
- void folioq_clear(struct folio_queue *folioq, unsigned int slot);
-
-This clears the slot in the array and also clears all the marks for that folio,
-but doesn't change the folio count - so future accesses of that slot must check
-if the slot is occupied.
-
-
-Querying information about a folio
-==================================
-
-Information about the folio in a particular slot may be queried by the
-following function::
-
- struct folio *folioq_folio(const struct folio_queue *folioq,
- unsigned int slot);
-
-If a folio has not yet been set in that slot, this may yield an undefined
-pointer. The size of the folio in a slot may be queried with either of::
-
- unsigned int folioq_folio_order(const struct folio_queue *folioq,
- unsigned int slot);
-
- size_t folioq_folio_size(const struct folio_queue *folioq,
- unsigned int slot);
-
-The first function returns the size as an order and the second as a number of
-bytes.
-
-
-Querying information about a folio_queue
-========================================
-
-Information may be retrieved about a particular segment with the following
-functions::
-
- unsigned int folioq_nr_slots(const struct folio_queue *folioq);
-
- unsigned int folioq_count(struct folio_queue *folioq);
-
- bool folioq_full(struct folio_queue *folioq);
-
-The first function returns the maximum capacity of a segment. It must not be
-assumed that this won't vary between segments. The second returns the number
-of folios added to a segments and the third is a shorthand to indicate if the
-segment has been filled to capacity.
-
-Not that the count and fullness are not affected by clearing folios from the
-segment. These are more about indicating how many slots in the array have been
-initialised, and it assumed that slots won't get reused, but rather the segment
-will get discarded as the queue is consumed.
-
-
-Folio marks
-===========
-
-Folios within a queue can also have marks assigned to them. These marks can be
-used to note information such as if a folio needs folio_put() calling upon it.
-There are three marks available to be set for each folio.
-
-The marks can be set by::
-
- void folioq_mark(struct folio_queue *folioq, unsigned int slot);
- void folioq_mark2(struct folio_queue *folioq, unsigned int slot);
-
-Cleared by::
-
- void folioq_unmark(struct folio_queue *folioq, unsigned int slot);
- void folioq_unmark2(struct folio_queue *folioq, unsigned int slot);
-
-And the marks can be queried by::
-
- bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot);
- bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot);
-
-The marks can be used for any purpose and are not interpreted by this API.
-
-
-Folio queue iteration
-=====================
-
-A list of segments may be iterated over using the I/O iterator facility using
-an ``iov_iter`` iterator of ``ITER_FOLIOQ`` type. The iterator may be
-initialised with::
-
- void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
- const struct folio_queue *folioq,
- unsigned int first_slot, unsigned int offset,
- size_t count);
-
-This may be told to start at a particular segment, slot and offset within a
-queue. The iov iterator functions will follow the next pointers when
-advancing and the prev pointers when reverting, as needed.
-
-
-Lockless simultaneous production/consumption issues
-===================================================
-
-If properly managed, the list can be extended by the producer at the head end
-and shortened by the consumer at the tail end simultaneously without the need
-to take locks. The ITER_FOLIOQ iterator inserts appropriate barriers to aid
-with this.
-
-Care must be taken when simultaneously producing and consuming a list. If the
-last segment is reached and the folios it refers to are entirely consumed by
-the IOV iterators, an iov_iter struct will be left pointing to the last segment
-with a slot number equal to the capacity of that segment. The iterator will
-try to continue on from this if there's another segment available when it is
-used again, but care must be taken lest the segment get removed and freed by
-the consumer before the iterator is advanced.
-
-It is recommended that the queue always contain at least one segment, even if
-that segment has never been filled or is entirely spent. This prevents the
-head and tail pointers from collapsing.
-
-
-API Function Reference
-======================
-
-.. kernel-doc:: include/linux/folio_queue.h
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 13769d5c40bf..16c529a33ac4 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -39,7 +39,6 @@ Library functionality that is used throughout the kernel.
kref
cleanup
assoc_array
- folio_queue
xarray
maple_tree
idr
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 442f893a0d65..7969c0b1f9a9 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -135,195 +135,3 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
return extracted ?: ret;
}
EXPORT_SYMBOL_GPL(netfs_extract_iter);
-
-#if 0
-/*
- * Select the span of a bvec iterator we're going to use. Limit it by both maximum
- * size and maximum number of segments. Returns the size of the span in bytes.
- */
-static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
- size_t max_size, size_t max_segs)
-{
- const struct bio_vec *bvecs = iter->bvec;
- unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0;
- size_t len, span = 0, n = iter->count;
- size_t skip = iter->iov_offset + start_offset;
-
- if (WARN_ON(!iov_iter_is_bvec(iter)) ||
- WARN_ON(start_offset > n) ||
- n == 0)
- return 0;
-
- while (n && ix < nbv && skip) {
- len = bvecs[ix].bv_len;
- if (skip < len)
- break;
- skip -= len;
- n -= len;
- ix++;
- }
-
- while (n && ix < nbv) {
- len = min3(n, bvecs[ix].bv_len - skip, max_size);
- span += len;
- nsegs++;
- ix++;
- if (span >= max_size || nsegs >= max_segs)
- break;
- skip = 0;
- n -= len;
- }
-
- return min(span, max_size);
-}
-
-/*
- * Select the span of a kvec iterator we're going to use. Limit it by both
- * maximum size and maximum number of segments. Returns the size of the span
- * in bytes.
- */
-static size_t netfs_limit_kvec(const struct iov_iter *iter, size_t start_offset,
- size_t max_size, size_t max_segs)
-{
- const struct kvec *kvecs = iter->kvec;
- unsigned int nkv = iter->nr_segs, ix = 0, nsegs = 0;
- size_t len, span = 0, n = iter->count;
- size_t skip = iter->iov_offset + start_offset;
-
- if (WARN_ON(!iov_iter_is_kvec(iter)) ||
- WARN_ON(start_offset > n) ||
- n == 0)
- return 0;
-
- while (n && ix < nkv && skip) {
- len = kvecs[ix].iov_len;
- if (skip < len)
- break;
- skip -= len;
- n -= len;
- ix++;
- }
-
- while (n && ix < nkv) {
- len = min3(n, kvecs[ix].iov_len - skip, max_size);
- span += len;
- nsegs++;
- ix++;
- if (span >= max_size || nsegs >= max_segs)
- break;
- skip = 0;
- n -= len;
- }
-
- return min(span, max_size);
-}
-
-/*
- * Select the span of an xarray iterator we're going to use. Limit it by both
- * maximum size and maximum number of segments. It is assumed that segments
- * can be larger than a page in size, provided they're physically contiguous.
- * Returns the size of the span in bytes.
- */
-static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offset,
- size_t max_size, size_t max_segs)
-{
- struct folio *folio;
- unsigned int nsegs = 0;
- loff_t pos = iter->xarray_start + iter->iov_offset;
- pgoff_t index = pos / PAGE_SIZE;
- size_t span = 0, n = iter->count;
-
- XA_STATE(xas, iter->xarray, index);
-
- if (WARN_ON(!iov_iter_is_xarray(iter)) ||
- WARN_ON(start_offset > n) ||
- n == 0)
- return 0;
- max_size = min(max_size, n - start_offset);
-
- rcu_read_lock();
- xas_for_each(&xas, folio, ULONG_MAX) {
- size_t offset, flen, len;
- if (xas_retry(&xas, folio))
- continue;
- if (WARN_ON(xa_is_value(folio)))
- break;
- if (WARN_ON(folio_test_hugetlb(folio)))
- break;
-
- flen = folio_size(folio);
- offset = offset_in_folio(folio, pos);
- len = min(max_size, flen - offset);
- span += len;
- nsegs++;
- if (span >= max_size || nsegs >= max_segs)
- break;
- }
-
- rcu_read_unlock();
- return min(span, max_size);
-}
-
-/*
- * Select the span of a folio queue iterator we're going to use. Limit it by
- * both maximum size and maximum number of segments. Returns the size of the
- * span in bytes.
- */
-static size_t netfs_limit_folioq(const struct iov_iter *iter, size_t start_offset,
- size_t max_size, size_t max_segs)
-{
- const struct folio_queue *folioq = iter->folioq;
- unsigned int nsegs = 0;
- unsigned int slot = iter->folioq_slot;
- size_t span = 0, n = iter->count;
-
- if (WARN_ON(!iov_iter_is_folioq(iter)) ||
- WARN_ON(start_offset > n) ||
- n == 0)
- return 0;
- max_size = umin(max_size, n - start_offset);
-
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = folioq->next;
- slot = 0;
- }
-
- start_offset += iter->iov_offset;
- do {
- size_t flen = folioq_folio_size(folioq, slot);
-
- if (start_offset < flen) {
- span += flen - start_offset;
- nsegs++;
- start_offset = 0;
- } else {
- start_offset -= flen;
- }
- if (span >= max_size || nsegs >= max_segs)
- break;
-
- slot++;
- if (slot >= folioq_nr_slots(folioq)) {
- folioq = folioq->next;
- slot = 0;
- }
- } while (folioq);
-
- return umin(span, max_size);
-}
-
-size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
- size_t max_size, size_t max_segs)
-{
- if (iov_iter_is_folioq(iter))
- return netfs_limit_folioq(iter, start_offset, max_size, max_segs);
- if (iov_iter_is_bvec(iter))
- return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
- if (iov_iter_is_xarray(iter))
- return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
- if (iov_iter_is_kvec(iter))
- return netfs_limit_kvec(iter, start_offset, max_size, max_segs);
- BUG();
-}
-EXPORT_SYMBOL(netfs_limit_iter);
-#endif
diff --git a/fs/netfs/rolling_buffer.c b/fs/netfs/rolling_buffer.c
deleted file mode 100644
index 292011c1cacb..000000000000
--- a/fs/netfs/rolling_buffer.c
+++ /dev/null
@@ -1,297 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/* Rolling buffer helpers
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#include <linux/bitops.h>
-#include <linux/pagemap.h>
-#include <linux/rolling_buffer.h>
-#include <linux/slab.h>
-#include "internal.h"
-
-static atomic_t debug_ids;
-
-/**
- * netfs_folioq_alloc - Allocate a folio_queue struct
- * @rreq_id: Associated debugging ID for tracing purposes
- * @gfp: Allocation constraints
- * @trace: Trace tag to indicate the purpose of the allocation
- *
- * Allocate, initialise and account the folio_queue struct and log a trace line
- * to mark the allocation.
- */
-struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
- unsigned int /*enum netfs_folioq_trace*/ trace)
-{
- struct folio_queue *fq;
-
- fq = kmalloc_obj(*fq, gfp);
- if (fq) {
- netfs_stat(&netfs_n_folioq);
- folioq_init(fq, rreq_id);
- fq->debug_id = atomic_inc_return(&debug_ids);
- trace_netfs_folioq(fq, trace);
- }
- return fq;
-}
-EXPORT_SYMBOL(netfs_folioq_alloc);
-
-/**
- * netfs_folioq_free - Free a folio_queue struct
- * @folioq: The object to free
- * @trace: Trace tag to indicate which free
- *
- * Free and unaccount the folio_queue struct.
- */
-void netfs_folioq_free(struct folio_queue *folioq,
- unsigned int /*enum netfs_trace_folioq*/ trace)
-{
- trace_netfs_folioq(folioq, trace);
- netfs_stat_d(&netfs_n_folioq);
- kfree(folioq);
-}
-EXPORT_SYMBOL(netfs_folioq_free);
-
-/*
- * Initialise a rolling buffer. We allocate an empty folio queue struct so
- * that the pointers can be independently driven by the producer and the
- * consumer.
- */
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
- unsigned int direction)
-{
- struct folio_queue *fq;
-
- fq = netfs_folioq_alloc(rreq_id, GFP_NOFS, netfs_trace_folioq_rollbuf_init);
- if (!fq)
- return -ENOMEM;
-
- roll->head = fq;
- roll->tail = fq;
- iov_iter_folio_queue(&roll->iter, direction, fq, 0, 0, 0);
- return 0;
-}
-
-/*
- * Add another folio_queue to a rolling buffer if there's no space left.
- */
-int rolling_buffer_make_space(struct rolling_buffer *roll)
-{
- struct folio_queue *fq, *head = roll->head;
-
- if (!folioq_full(head))
- return 0;
-
- fq = netfs_folioq_alloc(head->rreq_id, GFP_NOFS, netfs_trace_folioq_make_space);
- if (!fq)
- return -ENOMEM;
- fq->prev = head;
-
- roll->head = fq;
- if (folioq_full(head)) {
- /* Make sure we don't leave the master iterator pointing to a
- * block that might get immediately consumed.
- */
- if (roll->iter.folioq == head &&
- roll->iter.folioq_slot == folioq_nr_slots(head)) {
- roll->iter.folioq = fq;
- roll->iter.folioq_slot = 0;
- }
- }
-
- /* Make sure the initialisation is stored before the next pointer.
- *
- * [!] NOTE: After we set head->next, the consumer is at liberty to
- * immediately delete the old head.
- */
- smp_store_release(&head->next, fq);
- return 0;
-}
-
-/*
- * Decant the list of folios to read into a rolling buffer.
- */
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
- struct readahead_control *ractl,
- struct folio_batch *put_batch)
-{
- struct folio_queue *fq;
- struct page **vec;
- int nr, ix, to;
- ssize_t size = 0;
-
- if (rolling_buffer_make_space(roll) < 0)
- return -ENOMEM;
-
- fq = roll->head;
- vec = (struct page **)fq->vec.folios;
- nr = __readahead_batch(ractl, vec + folio_batch_count(&fq->vec),
- folio_batch_space(&fq->vec));
- ix = fq->vec.nr;
- to = ix + nr;
- fq->vec.nr = to;
- for (; ix < to; ix++) {
- struct folio *folio = folioq_folio(fq, ix);
- unsigned int order = folio_order(folio);
-
- fq->orders[ix] = order;
- size += PAGE_SIZE << order;
- trace_netfs_folio(folio, netfs_folio_trace_read);
- if (!folio_batch_add(put_batch, folio))
- folio_batch_release(put_batch);
- }
- WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
- /* Store the counter after setting the slot. */
- smp_store_release(&roll->next_head_slot, to);
- return size;
-}
-
-/*
- * Decant the entire list of folios to read into a rolling buffer.
- */
-ssize_t rolling_buffer_bulk_load_from_ra(struct rolling_buffer *roll,
- struct readahead_control *ractl,
- unsigned int rreq_id)
-{
- XA_STATE(xas, &ractl->mapping->i_pages, ractl->_index);
- struct folio_queue *fq;
- struct folio *folio;
- ssize_t loaded = 0;
- int nr, slot = 0, npages = 0;
-
- /* First allocate all the folioqs we're going to need to avoid having
- * to deal with ENOMEM later.
- */
- nr = ractl->_nr_folios;
- do {
- fq = netfs_folioq_alloc(rreq_id, GFP_KERNEL,
- netfs_trace_folioq_make_space);
- if (!fq) {
- rolling_buffer_clear(roll);
- return -ENOMEM;
- }
- fq->prev = roll->head;
- if (!roll->tail)
- roll->tail = fq;
- else
- roll->head->next = fq;
- roll->head = fq;
-
- nr -= folioq_nr_slots(fq);
- } while (nr > 0);
-
- rcu_read_lock();
-
- fq = roll->tail;
- xas_for_each(&xas, folio, ractl->_index + ractl->_nr_pages - 1) {
- unsigned int order;
-
- if (xas_retry(&xas, folio))
- continue;
- VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
-
- order = folio_order(folio);
- fq->orders[slot] = order;
- fq->vec.folios[slot] = folio;
- loaded += PAGE_SIZE << order;
- npages += 1 << order;
- trace_netfs_folio(folio, netfs_folio_trace_read);
-
- slot++;
- if (slot >= folioq_nr_slots(fq)) {
- fq->vec.nr = slot;
- fq = fq->next;
- if (!fq) {
- WARN_ON_ONCE(npages < readahead_count(ractl));
- break;
- }
- slot = 0;
- }
- }
-
- rcu_read_unlock();
-
- if (fq)
- fq->vec.nr = slot;
-
- WRITE_ONCE(roll->iter.count, loaded);
- iov_iter_folio_queue(&roll->iter, ITER_DEST, roll->tail, 0, 0, loaded);
- ractl->_index += npages;
- ractl->_nr_pages -= npages;
- return loaded;
-}
-
-/*
- * Append a folio to the rolling buffer.
- */
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
- unsigned int flags)
-{
- ssize_t size = folio_size(folio);
- int slot;
-
- if (rolling_buffer_make_space(roll) < 0)
- return -ENOMEM;
-
- slot = folioq_append(roll->head, folio);
- if (flags & ROLLBUF_MARK_1)
- folioq_mark(roll->head, slot);
- if (flags & ROLLBUF_MARK_2)
- folioq_mark2(roll->head, slot);
-
- WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
- /* Store the counter after setting the slot. */
- smp_store_release(&roll->next_head_slot, slot);
- return size;
-}
-
-/*
- * Delete a spent buffer from a rolling queue and return the next in line. We
- * don't return the last buffer to keep the pointers independent, but return
- * NULL instead.
- */
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll)
-{
- struct folio_queue *spent = roll->tail, *next = READ_ONCE(spent->next);
-
- if (!next)
- return NULL;
- next->prev = NULL;
- netfs_folioq_free(spent, netfs_trace_folioq_delete);
- roll->tail = next;
- return next;
-}
-
-/*
- * Clear out a rolling queue. Folios that have mark 1 set are put.
- */
-void rolling_buffer_clear(struct rolling_buffer *roll)
-{
- struct folio_batch fbatch;
- struct folio_queue *p;
-
- folio_batch_init(&fbatch);
-
- while ((p = roll->tail)) {
- roll->tail = p->next;
- for (int slot = 0; slot < folioq_count(p); slot++) {
- struct folio *folio = folioq_folio(p, slot);
-
- if (!folio)
- continue;
- if (folioq_is_marked(p, slot)) {
- trace_netfs_folio(folio, netfs_folio_trace_put);
- if (!folio_batch_add(&fbatch, folio))
- folio_batch_release(&fbatch);
- }
- }
-
- netfs_folioq_free(p, netfs_trace_folioq_clear);
- }
-
- folio_batch_release(&fbatch);
-}
diff --git a/include/linux/folio_queue.h b/include/linux/folio_queue.h
deleted file mode 100644
index adab609c972e..000000000000
--- a/include/linux/folio_queue.h
+++ /dev/null
@@ -1,282 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Queue of folios definitions
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- *
- * See:
- *
- * Documentation/core-api/folio_queue.rst
- *
- * for a description of the API.
- */
-
-#ifndef _LINUX_FOLIO_QUEUE_H
-#define _LINUX_FOLIO_QUEUE_H
-
-#include <linux/pagevec.h>
-#include <linux/mm.h>
-
-/*
- * Segment in a queue of running buffers. Each segment can hold a number of
- * folios and a portion of the queue can be referenced with the ITER_FOLIOQ
- * iterator. The possibility exists of inserting non-folio elements into the
- * queue (such as gaps).
- *
- * Explicit prev and next pointers are used instead of a list_head to make it
- * easier to add segments to tail and remove them from the head without the
- * need for a lock.
- */
-struct folio_queue {
- struct folio_batch vec; /* Folios in the queue segment */
- u8 orders[PAGEVEC_SIZE]; /* Order of each folio */
- struct folio_queue *next; /* Next queue segment or NULL */
- struct folio_queue *prev; /* Previous queue segment or NULL */
- unsigned long marks; /* 1-bit mark per folio */
- unsigned long marks2; /* Second 1-bit mark per folio */
-#if PAGEVEC_SIZE > BITS_PER_LONG
-#error marks is not big enough
-#endif
- unsigned int rreq_id;
- unsigned int debug_id;
-};
-
-/**
- * folioq_init - Initialise a folio queue segment
- * @folioq: The segment to initialise
- * @rreq_id: The request identifier to use in tracelines.
- *
- * Initialise a folio queue segment and set an identifier to be used in traces.
- *
- * Note that the folio pointers are left uninitialised.
- */
-static inline void folioq_init(struct folio_queue *folioq, unsigned int rreq_id)
-{
- folio_batch_init(&folioq->vec);
- folioq->next = NULL;
- folioq->prev = NULL;
- folioq->marks = 0;
- folioq->marks2 = 0;
- folioq->rreq_id = rreq_id;
- folioq->debug_id = 0;
-}
-
-/**
- * folioq_nr_slots: Query the capacity of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that a particular folio queue segment might hold.
- * [!] NOTE: This must not be assumed to be the same for every segment!
- */
-static inline unsigned int folioq_nr_slots(const struct folio_queue *folioq)
-{
- return PAGEVEC_SIZE;
-}
-
-/**
- * folioq_count: Query the occupancy of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that have been added to a folio queue segment.
- * Note that this is not decreased as folios are removed from a segment.
- */
-static inline unsigned int folioq_count(struct folio_queue *folioq)
-{
- return folio_batch_count(&folioq->vec);
-}
-
-/**
- * folioq_full: Query if a folio queue segment is full
- * @folioq: The segment to query
- *
- * Query if a folio queue segment is fully occupied. Note that this does not
- * change if folios are removed from a segment.
- */
-static inline bool folioq_full(struct folio_queue *folioq)
-{
- //return !folio_batch_space(&folioq->vec);
- return folioq_count(folioq) >= folioq_nr_slots(folioq);
-}
-
-/**
- * folioq_is_marked: Check first folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the first mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot)
-{
- return test_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_mark: Set the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark(struct folio_queue *folioq, unsigned int slot)
-{
- set_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_unmark: Clear the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark(struct folio_queue *folioq, unsigned int slot)
-{
- clear_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_is_marked2: Check second folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the second mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot)
-{
- return test_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_mark2: Set the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark2(struct folio_queue *folioq, unsigned int slot)
-{
- set_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_unmark2: Clear the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark2(struct folio_queue *folioq, unsigned int slot)
-{
- clear_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_append: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue and the marks are left
- * unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append(struct folio_queue *folioq, struct folio *folio)
-{
- unsigned int slot = folioq->vec.nr++;
-
- folioq->vec.folios[slot] = folio;
- folioq->orders[slot] = folio_order(folio);
- return slot;
-}
-
-/**
- * folioq_append_mark: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue, the first mark is set
- * and the second mark is left unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append_mark(struct folio_queue *folioq, struct folio *folio)
-{
- unsigned int slot = folioq->vec.nr++;
-
- folioq->vec.folios[slot] = folio;
- folioq->orders[slot] = folio_order(folio);
- folioq_mark(folioq, slot);
- return slot;
-}
-
-/**
- * folioq_folio: Get a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the folio in the specified slot from a folio queue segment. Note
- * that no bounds check is made and if the slot hasn't been filled yet, the
- * pointer will be undefined. If the slot has been cleared, NULL will be
- * returned.
- */
-static inline struct folio *folioq_folio(const struct folio_queue *folioq, unsigned int slot)
-{
- return folioq->vec.folios[slot];
-}
-
-/**
- * folioq_folio_order: Get the order of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the order of the folio in the specified slot from a folio queue
- * segment. Note that no bounds check is made and if the slot hasn't been
- * filled yet, the order returned will be 0.
- */
-static inline unsigned int folioq_folio_order(const struct folio_queue *folioq, unsigned int slot)
-{
- return folioq->orders[slot];
-}
-
-/**
- * folioq_folio_size: Get the size of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the size of the folio in the specified slot from a folio queue
- * segment. Note that no bounds check is made and if the slot hasn't been
- * filled yet, the size returned will be PAGE_SIZE.
- */
-static inline size_t folioq_folio_size(const struct folio_queue *folioq, unsigned int slot)
-{
- return PAGE_SIZE << folioq_folio_order(folioq, slot);
-}
-
-/**
- * folioq_clear: Clear a folio from a folio queue segment
- * @folioq: The segment to clear
- * @slot: The folio slot to clear
- *
- * Clear a folio from a sequence in a folio queue segment and clear its marks.
- * The occupancy count is left unchanged.
- */
-static inline void folioq_clear(struct folio_queue *folioq, unsigned int slot)
-{
- folioq->vec.folios[slot] = NULL;
- folioq_unmark(folioq, slot);
- folioq_unmark2(folioq, slot);
-}
-
-#endif /* _LINUX_FOLIO_QUEUE_H */
diff --git a/include/linux/rolling_buffer.h b/include/linux/rolling_buffer.h
deleted file mode 100644
index b35ef43f325f..000000000000
--- a/include/linux/rolling_buffer.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Rolling buffer of folios
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#ifndef _ROLLING_BUFFER_H
-#define _ROLLING_BUFFER_H
-
-#include <linux/folio_queue.h>
-#include <linux/uio.h>
-
-/*
- * Rolling buffer. Whilst the buffer is live and in use, folios and folio
- * queue segments can be added to one end by one thread and removed from the
- * other end by another thread. The buffer isn't allowed to be empty; it must
- * always have at least one folio_queue in it so that neither side has to
- * modify both queue pointers.
- *
- * The iterator in the buffer is extended as buffers are inserted. It can be
- * snapshotted to use a segment of the buffer.
- */
-struct rolling_buffer {
- struct folio_queue *head; /* Producer's insertion point */
- struct folio_queue *tail; /* Consumer's removal point */
- struct iov_iter iter; /* Iterator tracking what's left in the buffer */
- u8 next_head_slot; /* Next slot in ->head */
- u8 first_tail_slot; /* First slot in ->tail */
-};
-
-/*
- * Snapshot of a rolling buffer.
- */
-struct rolling_buffer_snapshot {
- struct folio_queue *curr_folioq; /* Queue segment in which current folio resides */
- unsigned char curr_slot; /* Folio currently being read */
- unsigned char curr_order; /* Order of folio */
-};
-
-/* Marks to store per-folio in the internal folio_queue structs. */
-#define ROLLBUF_MARK_1 BIT(0)
-#define ROLLBUF_MARK_2 BIT(1)
-
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
- unsigned int direction);
-int rolling_buffer_make_space(struct rolling_buffer *roll);
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
- struct readahead_control *ractl,
- struct folio_batch *put_batch);
-ssize_t rolling_buffer_bulk_load_from_ra(struct rolling_buffer *roll,
- struct readahead_control *ractl,
- unsigned int rreq_id);
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
- unsigned int flags);
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll);
-void rolling_buffer_clear(struct rolling_buffer *roll);
-
-static inline void rolling_buffer_advance(struct rolling_buffer *roll, size_t amount)
-{
- iov_iter_advance(&roll->iter, amount);
-}
-
-#endif /* _ROLLING_BUFFER_H */
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 24/26] netfs: Check for too much data being read
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (22 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 23/26] netfs: Remove folio_queue and rolling_buffer David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 10:45 ` [PATCH 25/26] netfs: Limit the minimum trigger for progress reporting David Howells
2026-03-26 10:45 ` [PATCH 26/26] netfs: Combine prepare and issue ops and grab the buffers on request David Howells
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Add a check to read subrequest termination to detect more data being read
for a subrequest than was requested.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/read_collect.c | 8 ++++++++
include/trace/events/netfs.h | 1 +
2 files changed, 9 insertions(+)
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index c7180680226c..bacf047029fa 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -545,6 +545,14 @@ void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq)
break;
}
+ if (subreq->transferred > subreq->len) {
+ subreq->transferred = 0;
+ __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+ trace_netfs_sreq(subreq, netfs_sreq_trace_too_much);
+ subreq->error = -EIO;
+ }
+
/* Deal with retry requests, short reads and errors. If we retry
* but don't make progress, we abandon the attempt.
*/
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index df3d440563ec..eeb8386e0709 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -125,6 +125,7 @@
EM(netfs_sreq_trace_submit, "SUBMT") \
EM(netfs_sreq_trace_superfluous, "SPRFL") \
EM(netfs_sreq_trace_terminated, "TERM ") \
+ EM(netfs_sreq_trace_too_much, "!TOOM") \
EM(netfs_sreq_trace_wait_for, "_WAIT") \
EM(netfs_sreq_trace_write, "WRITE") \
EM(netfs_sreq_trace_write_skip, "SKIP ") \
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 25/26] netfs: Limit the minimum trigger for progress reporting
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (23 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 24/26] netfs: Check for too much data being read David Howells
@ 2026-03-26 10:45 ` David Howells
2026-03-26 14:19 ` ChenXiaoSong
2026-03-26 10:45 ` [PATCH 26/26] netfs: Combine prepare and issue ops and grab the buffers on request David Howells
25 siblings, 1 reply; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
For really big read RPC ops that span multiple folios, netfslib allows the
filesystem to give progress notifications to wake up the collector thread
to do a collection of folios that have now been fetched, even if the RPC is
still ongoing, thereby allowing the application to make progress.
The trigger for this is that at least one folio has been downloaded since
the clean point. If, however, the folios are small, this means the
collector thread is constantly being woken up, which has a negative
performance impact on the system.
Set a minimum trigger of 256KiB or the size of the folio at the front of
the queue, whichever is larger.
Also, fix the base to be the stream collection point, not the point to
which the collector has cleaned up (which is currently 0 until something
has been collected).
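To illustrate the effect (this worked example is not part of the patch
itself): with order-0 4KiB folios at the front of the queue, the old
trigger was effectively a single folio, so the collector was woken for
every folio completed. The new trigger is computed as:

	size_t fsize = umax(PAGE_SIZE << rreq->front_folio_order, 256 * 1024);

which for 4KiB folios gives umax(4096, 262144), i.e. 256KiB, so the
collector is woken at most once per 256KiB of progress.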
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/netfs/read_collect.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index bacf047029fa..6d49f9a6b1f0 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -494,15 +494,15 @@ void netfs_read_collection_worker(struct work_struct *work)
void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
- struct netfs_io_stream *stream = &rreq->io_streams[0];
- size_t fsize = PAGE_SIZE << rreq->front_folio_order;
+ struct netfs_io_stream *stream = &rreq->io_streams[subreq->stream_nr];
+ size_t fsize = umax(PAGE_SIZE << rreq->front_folio_order, 256 * 1024);
trace_netfs_sreq(subreq, netfs_sreq_trace_progress);
/* If we are at the head of the queue, wake up the collector,
* getting a ref to it if we were the ones to do so.
*/
- if (subreq->start + subreq->transferred > rreq->cleaned_to + fsize &&
+ if (subreq->start + subreq->transferred >= stream->collected_to + fsize &&
(rreq->origin == NETFS_READAHEAD ||
rreq->origin == NETFS_READPAGE ||
rreq->origin == NETFS_READ_FOR_WRITE) &&
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 26/26] netfs: Combine prepare and issue ops and grab the buffers on request
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
` (24 preceding siblings ...)
2026-03-26 10:45 ` [PATCH 25/26] netfs: Limit the minimum trigger for progress reporting David Howells
@ 2026-03-26 10:45 ` David Howells
25 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 10:45 UTC (permalink / raw)
To: Christian Brauner, Matthew Wilcox, Christoph Hellwig
Cc: David Howells, Paulo Alcantara, Jens Axboe, Leon Romanovsky,
Steve French, ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, Paulo Alcantara
Modify the way subrequests are generated in netfslib to try and simplify
the code. The issue, primarily, is in writeback: the code has to create
multiple streams of write requests to disparate targets with different
properties (e.g. server and fscache), where not every folio needs to go to
every target (e.g. data just read from the server may only need writing to
the cache).
The current model in writeback, at least, is to go carefully through every
folio, preparing a subrequest for each stream when it is detected that part
of the current folio needs to go to that stream, and repeating this within
and across contiguous folios; then to issue subrequests as they become full
or hit boundaries, after first setting up the buffer. However, this is
quite difficult to follow and makes it tricky to handle discontiguous
folios in a request.
This is changed such that netfs now accumulates buffers and attaches them
to each stream when they become valid for that stream, then flushes the
stream when a limit or a boundary is hit. The issuing code in netfs then
loops around creating and issuing subrequests without calling a separate
prepare stage (though a function is provided to get an estimate of when
flushing should occur). The filesystem (or cache) then gets to take a
slice of the master bvec chain as its I/O buffer for each subrequest,
including discontiguities if it can support a sparse/vectored RPC (as Ceph
can).
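As a rough sketch of the estimation side of this (the callback shape is
taken from the cachefiles and 9p conversions below; "my_estimate_write" is
a made-up filesystem hook, not code from the patch), a minimal
implementation might be:

	static int my_estimate_write(struct netfs_io_request *wreq,
				     struct netfs_io_stream *stream,
				     struct netfs_write_estimate *estimate)
	{
		/* Let netfslib accumulate up to MAX_RW_COUNT beyond the
		 * current issue point before it next generates subrequests
		 * on this stream, each using at most BIO_MAX_VECS buffer
		 * segments.
		 */
		estimate->issue_at = stream->issue_from + MAX_RW_COUNT;
		estimate->max_segs = BIO_MAX_VECS;
		return 0;
	}

The stream is not required to consume everything accumulated by issue_at;
the estimate just bounds how much buffering is built up before issuing
begins.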
Similar-ish changes also apply to buffered read and unbuffered read and
write, though in each of those cases there is only a single contiguous
stream; for buffered read, however, that stream consists of interwoven
requests from multiple sources (server or cache).
To this end, netfslib is changed in the following ways:
(1) ->prepare_xxx(), buffer selection and ->issue_xxx() are now collapsed
together such that one ->issue_xxx() call is made with the subrequest
defined to the maximum extent; the filesystem/cache then reduces the
length of the subrequest and calls back to netfslib to grab a slice of
the buffer, which may reduce the subrequest further if a maximum
segment limit is set. The filesystem/cache then dispatches the
operation (a sketch of this contract is given below, after this list).
(2) Retry buffer tracking is added to the netfs_io_request struct. This
is then selected by the subrequest retry counter being non-zero.
(3) The use of iov_iter is pushed down to the filesystem. Netfslib now
provides the filesystem with a bvecq holding the buffer rather than an
iov_iter. The bvecq can be duplicated and headers/trailers attached
to hold protocol and several bvecqs can be linked together to create a
compound operation.
(4) The ->issue_xxx() functions now return an error code that allows them
to return an error without having to terminate the subrequest.
Netfslib will handle the error immediately if it can but may request
termination and punt responsibility to the result collector.
->issue_xxx() can return 0 if synchronously complete and -EIOCBQUEUED
if the operation will complete (or already has completed)
asynchronously.
(5) During writeback, netfslib now builds up an accumulation of buffered
data before issuing writes on each stream (one server, one cache). It
asks each stream for an estimate of how much data to accumulate before
it next generates subrequests on the stream. The filesystem or cache
is not required to use up all the data accumulated on a stream at that
time unless the end of the pagecache is hit.
(6) During read-gaps, in which there are two gaps on either end of a dirty
streaming write page that need to be filled, a buffer is constructed
consisting of the two ends plus a sink page repeated to cover the
middle portion. This is passed to the server as a single write. For
something like Ceph, this should probably be done either as a
vectored/sparse read or as two separate reads (if different Ceph
objects are involved).
(7) During unbuffered/DIO read/write, there is a single contiguous file
region to be read or written as a single stream. The dispatching
function just creates subrequests and calls ->issue_xxx() repeatedly
to eat through the bufferage.
(8) At the start of buffered read, the entire set of folios allocated by
VM readahead is loaded into a bvecq chain, rather than trying to do it
piecemeal as-needed. As the pages were already added and locked by
the VM, this is slightly more efficient than loading piecemeal as only
a single iteration of the xarray is required.
(9) During buffered read, there is a single contiguous file region, to
read as a single stream - however, this stream may be stitched
together from subrequests to multiple sources. Which sources are used
where is now determined by querying the cache to find the next couple
of extents in which it has data; netfslib uses this to direct the
subrequests towards the appropriate sources.
Each subrequest is given the maximum length in the current extent and
then ->issue_read() is called. The filesystem then limits the size
and slices off a piece of the buffer for that extent.
(10) Cachefiles now provides an estimation function that indicates the
standard maxima for doing DIO (MAX_RW_COUNT and BIO_MAX_VECS).
Note that sparse cachefiles still rely on the backing filesystem for
content mapping. That will need to be addressed in a future patch and is
not trivial to fix.
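To make the new ->issue_xxx() contract concrete, here is a minimal read
issuer distilled from the v9fs and cachefiles conversions in this patch
("my_issue_read" and the RPC dispatch step are placeholders, not code from
the patch):

	static int my_issue_read(struct netfs_io_subrequest *subreq)
	{
		struct iov_iter iter;
		int err;

		/* Take a slice of the master buffer for this subrequest;
		 * this may shorten subreq->len further. An error returned
		 * before submission is marked is handled immediately by
		 * netfslib.
		 */
		err = netfs_prepare_read_buffer(subreq, INT_MAX);
		if (err < 0)
			return err;

		iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
				    subreq->content.slot,
				    subreq->content.offset, subreq->len);

		/* After this point, we're not allowed to return an error;
		 * completion must go via netfs_read_subreq_terminated().
		 */
		netfs_mark_read_submission(subreq);

		/* ... dispatch the RPC, reading into 'iter' ... */

		return -EIOCBQUEUED;
	}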
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
fs/9p/vfs_addr.c | 49 +-
fs/afs/dir.c | 8 +-
fs/afs/file.c | 26 +-
fs/afs/fsclient.c | 8 +-
fs/afs/internal.h | 8 +-
fs/afs/write.c | 35 +-
fs/afs/yfsclient.c | 6 +-
fs/cachefiles/io.c | 237 ++++++---
fs/ceph/Kconfig | 1 +
fs/ceph/addr.c | 127 ++---
fs/netfs/Kconfig | 3 +
fs/netfs/Makefile | 2 +-
fs/netfs/buffered_read.c | 236 +++++----
fs/netfs/buffered_write.c | 27 +-
fs/netfs/direct_read.c | 91 ++--
fs/netfs/direct_write.c | 145 +++---
fs/netfs/fscache_io.c | 6 -
fs/netfs/internal.h | 78 ++-
fs/netfs/iterator.c | 6 +-
fs/netfs/misc.c | 33 +-
fs/netfs/objects.c | 7 +-
fs/netfs/read_collect.c | 33 +-
fs/netfs/read_pgpriv2.c | 116 +++--
fs/netfs/read_retry.c | 207 ++++----
fs/netfs/read_single.c | 150 +++---
fs/netfs/write_collect.c | 41 +-
fs/netfs/write_issue.c | 805 ++++++++++++++++++------------
fs/netfs/write_retry.c | 136 +++--
fs/nfs/Kconfig | 1 +
fs/nfs/fscache.c | 24 +-
fs/smb/client/cifssmb.c | 13 +-
fs/smb/client/file.c | 146 +++---
fs/smb/client/smb2ops.c | 9 +-
fs/smb/client/smb2pdu.c | 28 +-
fs/smb/client/transport.c | 15 +-
include/linux/netfs.h | 96 ++--
include/trace/events/cachefiles.h | 2 +
include/trace/events/netfs.h | 51 +-
net/9p/client.c | 8 +-
39 files changed, 1790 insertions(+), 1230 deletions(-)
diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 862164181bac..0db56cc00467 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -48,32 +48,71 @@ static void v9fs_begin_writeback(struct netfs_io_request *wreq)
wreq->io_streams[0].avail = true;
}
+/*
+ * Estimate how much data should be accumulated before we start issuing
+ * write subrequests.
+ */
+static int v9fs_estimate_write(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate)
+{
+ struct p9_fid *fid = wreq->netfs_priv;
+ unsigned long long limit = ULLONG_MAX - stream->issue_from;
+ unsigned long long max_len = fid->clnt->msize - P9_IOHDRSZ;
+
+ estimate->issue_at = stream->issue_from + umin(max_len, limit);
+ return 0;
+}
+
/*
* Issue a subrequest to write to the server.
*/
-static void v9fs_issue_write(struct netfs_io_subrequest *subreq)
+static int v9fs_issue_write(struct netfs_io_subrequest *subreq)
{
+ struct iov_iter iter;
struct p9_fid *fid = subreq->rreq->netfs_priv;
int err, len;
- len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err);
+ subreq->len = umin(subreq->len, fid->clnt->msize - P9_IOHDRSZ);
+
+ err = netfs_prepare_write_buffer(subreq, INT_MAX);
+ if (err < 0)
+ return err;
+
+ iov_iter_bvec_queue(&iter, ITER_SOURCE, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
+ len = p9_client_write(fid, subreq->start, &iter, &err);
if (len > 0)
__set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
netfs_write_subrequest_terminated(subreq, len ?: err);
+ return err;
}
/**
* v9fs_issue_read - Issue a read from 9P
* @subreq: The read to make
+ * @rctx: Read generation context
*/
-static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
+static int v9fs_issue_read(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
+ struct iov_iter iter;
struct p9_fid *fid = rreq->netfs_priv;
unsigned long long pos = subreq->start + subreq->transferred;
int total, err;
- total = p9_client_read(fid, pos, &subreq->io_iter, &err);
+ err = netfs_prepare_read_buffer(subreq, INT_MAX);
+ if (err < 0)
+ return err;
+
+ iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
+ /* After this point, we're not allowed to return an error. */
+ netfs_mark_read_submission(subreq);
+
+ total = p9_client_read(fid, pos, &iter, &err);
/* if we just extended the file size, any portion not in
* cache won't be on server and is zeroes */
@@ -89,6 +128,7 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
subreq->error = err;
netfs_read_subreq_terminated(subreq);
+ return -EIOCBQUEUED;
}
/**
@@ -154,6 +194,7 @@ const struct netfs_request_ops v9fs_req_ops = {
.free_request = v9fs_free_request,
.issue_read = v9fs_issue_read,
.begin_writeback = v9fs_begin_writeback,
+ .estimate_write = v9fs_estimate_write,
.issue_write = v9fs_issue_write,
};
diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 6627a0d38e73..52ab84ab8c1f 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -255,7 +255,8 @@ static ssize_t afs_do_read_single(struct afs_vnode *dvnode, struct file *file)
if (dvnode->directory_size < i_size) {
size_t cur_size = dvnode->directory_size;
- ret = bvecq_expand_buffer(&dvnode->directory, &cur_size, i_size,
+ ret = bvecq_expand_buffer(&dvnode->directory, &cur_size,
+ round_up(i_size, PAGE_SIZE),
mapping_gfp_mask(dvnode->netfs.inode.i_mapping));
dvnode->directory_size = cur_size;
if (ret < 0)
@@ -2210,9 +2211,10 @@ int afs_single_writepages(struct address_space *mapping,
if (is_dir ?
test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) :
atomic64_read(&dvnode->cb_expires_at) != AFS_NO_CB_PROMISE) {
+ size_t len = i_size_read(&dvnode->netfs.inode);
iov_iter_bvec_queue(&iter, ITER_SOURCE, dvnode->directory, 0, 0,
- i_size_read(&dvnode->netfs.inode));
- ret = netfs_writeback_single(mapping, wbc, &iter);
+ round_up(len, PAGE_SIZE));
+ ret = netfs_writeback_single(mapping, wbc, &iter, len);
}
up_read(&dvnode->validate_lock);
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 424e0c98d67f..42131fe450af 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -329,11 +329,12 @@ void afs_fetch_data_immediate_cancel(struct afs_call *call)
/*
* Fetch file data from the volume.
*/
-static void afs_issue_read(struct netfs_io_subrequest *subreq)
+static int afs_issue_read(struct netfs_io_subrequest *subreq)
{
struct afs_operation *op;
struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
struct key *key = subreq->rreq->netfs_priv;
+ int ret;
_enter("%s{%llx:%llu.%u},%x,,,",
vnode->volume->name,
@@ -342,19 +343,21 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
vnode->fid.unique,
key_serial(key));
+ ret = netfs_prepare_read_buffer(subreq, INT_MAX);
+ if (ret < 0)
+ return ret;
+
op = afs_alloc_operation(key, vnode->volume);
- if (IS_ERR(op)) {
- subreq->error = PTR_ERR(op);
- netfs_read_subreq_terminated(subreq);
- return;
- }
+ if (IS_ERR(op))
+ return PTR_ERR(op);
afs_op_set_vnode(op, 0, vnode);
op->fetch.subreq = subreq;
op->ops = &afs_fetch_data_operation;
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ /* After this point, we're not allowed to return an error. */
+ netfs_mark_read_submission(subreq);
if (subreq->rreq->origin == NETFS_READAHEAD ||
subreq->rreq->iocb) {
@@ -363,18 +366,19 @@ static void afs_issue_read(struct netfs_io_subrequest *subreq)
if (!afs_begin_vnode_operation(op)) {
subreq->error = afs_put_operation(op);
netfs_read_subreq_terminated(subreq);
- return;
+ return -EIOCBQUEUED;
}
if (!afs_select_fileserver(op)) {
- afs_end_read(op);
- return;
+ afs_end_read(op); /* Error recorded here. */
+ return -EIOCBQUEUED;
}
afs_issue_read_call(op);
} else {
afs_do_sync_operation(op);
}
+ return -EIOCBQUEUED;
}
static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
@@ -453,7 +457,7 @@ const struct netfs_request_ops afs_req_ops = {
.update_i_size = afs_update_i_size,
.invalidate_cache = afs_netfs_invalidate_cache,
.begin_writeback = afs_begin_writeback,
- .prepare_write = afs_prepare_write,
+ .estimate_write = afs_estimate_write,
.issue_write = afs_issue_write,
.retry_request = afs_retry_request,
};
diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
index 95494d5f2b8a..f59a9db4bb0e 100644
--- a/fs/afs/fsclient.c
+++ b/fs/afs/fsclient.c
@@ -339,7 +339,9 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
if (call->remaining == 0)
goto no_more_data;
- call->iter = &subreq->io_iter;
+ iov_iter_bvec_queue(&call->def_iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
call->iov_len = umin(call->remaining, subreq->len - subreq->transferred);
call->unmarshall++;
fallthrough;
@@ -1085,7 +1087,7 @@ static void afs_fs_store_data64(struct afs_operation *op)
if (!call)
return afs_op_nomem(op);
- call->write_iter = op->store.write_iter;
+ call->write_iter = &op->store.write_iter;
/* marshall the parameters */
bp = call->request;
@@ -1139,7 +1141,7 @@ void afs_fs_store_data(struct afs_operation *op)
if (!call)
return afs_op_nomem(op);
- call->write_iter = op->store.write_iter;
+ call->write_iter = &op->store.write_iter;
/* marshall the parameters */
bp = call->request;
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 9bf5d2f1dbc4..a60df9357a4f 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -906,7 +906,7 @@ struct afs_operation {
afs_lock_type_t type;
} lock;
struct {
- struct iov_iter *write_iter;
+ struct iov_iter write_iter;
loff_t pos;
loff_t size;
loff_t i_size;
@@ -1680,8 +1680,10 @@ extern int afs_check_volume_status(struct afs_volume *, struct afs_operation *);
/*
* write.c
*/
-void afs_prepare_write(struct netfs_io_subrequest *subreq);
-void afs_issue_write(struct netfs_io_subrequest *subreq);
+int afs_estimate_write(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate);
+int afs_issue_write(struct netfs_io_subrequest *subreq);
void afs_begin_writeback(struct netfs_io_request *wreq);
void afs_retry_request(struct netfs_io_request *wreq, struct netfs_io_stream *stream);
extern int afs_writepages(struct address_space *, struct writeback_control *);
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 93ad86ff3345..1f6045bfeecc 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -84,17 +84,20 @@ static const struct afs_operation_ops afs_store_data_operation = {
};
/*
- * Prepare a subrequest to write to the server. This sets the max_len
- * parameter.
+ * Estimate the maximum size of a write we can send to the server.
*/
-void afs_prepare_write(struct netfs_io_subrequest *subreq)
+int afs_estimate_write(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate)
{
- struct netfs_io_stream *stream = &subreq->rreq->io_streams[subreq->stream_nr];
+ unsigned long long limit = ULLONG_MAX - stream->issue_from;
+ unsigned long long max_len = 256 * 1024 * 1024;
//if (test_bit(NETFS_SREQ_RETRYING, &subreq->flags))
- // subreq->max_len = 512 * 1024;
- //else
- stream->sreq_max_len = 256 * 1024 * 1024;
+ // max_len = 512 * 1024;
+
+ estimate->issue_at = stream->issue_from + umin(max_len, limit);
+ return 0;
}
/*
@@ -140,12 +143,15 @@ static void afs_issue_write_worker(struct work_struct *work)
op->flags |= AFS_OPERATION_UNINTR;
op->ops = &afs_store_data_operation;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
afs_begin_vnode_operation(op);
- op->store.write_iter = &subreq->io_iter;
op->store.i_size = umax(pos + len, vnode->netfs.remote_i_size);
op->mtime = inode_get_mtime(&vnode->netfs.inode);
+ iov_iter_bvec_queue(&op->store.write_iter, ITER_SOURCE, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
afs_wait_for_operation(op);
ret = afs_put_operation(op);
switch (ret) {
@@ -169,11 +175,20 @@ static void afs_issue_write_worker(struct work_struct *work)
netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len);
}
-void afs_issue_write(struct netfs_io_subrequest *subreq)
+int afs_issue_write(struct netfs_io_subrequest *subreq)
{
+ int ret;
+
+ if (subreq->len > 256 * 1024 * 1024)
+ subreq->len = 256 * 1024 * 1024;
+ ret = netfs_prepare_write_buffer(subreq, INT_MAX);
+ if (ret < 0)
+ return ret;
+
subreq->work.func = afs_issue_write_worker;
if (!queue_work(system_dfl_wq, &subreq->work))
WARN_ON_ONCE(1);
+ return -EIOCBQUEUED;
}
/*
@@ -184,6 +199,8 @@ void afs_begin_writeback(struct netfs_io_request *wreq)
{
if (S_ISREG(wreq->inode->i_mode))
afs_get_writeback_key(wreq);
+
+ wreq->io_streams[0].avail = true;
}
/*
diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
index 24fb562ebd33..ffd1d4c87290 100644
--- a/fs/afs/yfsclient.c
+++ b/fs/afs/yfsclient.c
@@ -385,7 +385,9 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
if (call->remaining == 0)
goto no_more_data;
- call->iter = &subreq->io_iter;
+ iov_iter_bvec_queue(&call->def_iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
call->iov_len = min(call->remaining, subreq->len - subreq->transferred);
call->unmarshall++;
fallthrough;
@@ -1357,7 +1359,7 @@ void yfs_fs_store_data(struct afs_operation *op)
if (!call)
return afs_op_nomem(op);
- call->write_iter = op->store.write_iter;
+ call->write_iter = &op->store.write_iter;
/* marshall the parameters */
bp = call->request;
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 2af55a75b511..05a37b4bdf10 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -26,7 +26,10 @@ struct cachefiles_kiocb {
};
struct cachefiles_object *object;
netfs_io_terminated_t term_func;
- void *term_func_priv;
+ union {
+ struct netfs_io_subrequest *subreq;
+ void *term_func_priv;
+ };
bool was_async;
unsigned int inval_counter; /* Copy of cookie->inval_counter */
u64 b_writing;
@@ -194,6 +197,125 @@ static int cachefiles_read(struct netfs_cache_resources *cres,
return ret;
}
+/*
+ * Handle completion of a read from the cache issued by netfslib.
+ */
+static void cachefiles_issue_read_complete(struct kiocb *iocb, long ret)
+{
+ struct cachefiles_kiocb *ki = container_of(iocb, struct cachefiles_kiocb, iocb);
+ struct netfs_io_subrequest *subreq = ki->subreq;
+ struct inode *inode = file_inode(ki->iocb.ki_filp);
+
+ _enter("%ld", ret);
+
+ if (ret < 0) {
+ subreq->error = -ESTALE;
+ trace_cachefiles_io_error(ki->object, inode, ret,
+ cachefiles_trace_read_error);
+ }
+
+ if (ret >= 0) {
+ if (ki->object->cookie->inval_counter == ki->inval_counter) {
+ subreq->error = 0;
+ if (ret > 0) {
+ subreq->transferred += ret;
+ __set_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+ }
+ } else {
+ subreq->error = -ESTALE;
+ }
+ }
+
+ netfs_read_subreq_terminated(subreq);
+ cachefiles_put_kiocb(ki);
+}
+
+/*
+ * Issue a read operation to the cache.
+ */
+static int cachefiles_issue_read(struct netfs_io_subrequest *subreq)
+{
+ struct netfs_cache_resources *cres = &subreq->rreq->cache_resources;
+ struct cachefiles_object *object;
+ struct cachefiles_kiocb *ki;
+ struct iov_iter iter;
+ struct file *file;
+ unsigned int old_nofs;
+ ssize_t ret = -ENOBUFS;
+
+ if (!fscache_wait_for_operation(cres, FSCACHE_WANT_READ))
+ return -ENOBUFS;
+
+ fscache_count_read();
+ object = cachefiles_cres_object(cres);
+ file = cachefiles_cres_file(cres);
+
+ _enter("%pD,%li,%llx,%zx/%llx",
+ file, file_inode(file)->i_ino, subreq->start, subreq->len,
+ i_size_read(file_inode(file)));
+
+ if (subreq->len > MAX_RW_COUNT)
+ subreq->len = MAX_RW_COUNT;
+
+ ret = netfs_prepare_read_buffer(subreq, BIO_MAX_VECS);
+ if (ret < 0)
+ return ret;
+
+ iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
+ ki = kzalloc_obj(struct cachefiles_kiocb);
+ if (!ki)
+ return -ENOMEM;
+
+ refcount_set(&ki->ki_refcnt, 2);
+ ki->iocb.ki_filp = file;
+ ki->iocb.ki_pos = subreq->start;
+ ki->iocb.ki_flags = IOCB_DIRECT;
+ ki->iocb.ki_ioprio = get_current_ioprio();
+ ki->iocb.ki_complete = cachefiles_issue_read_complete;
+ ki->object = object;
+ ki->inval_counter = cres->inval_counter;
+ ki->subreq = subreq;
+ ki->was_async = true;
+
+ /* After this point, we're not allowed to return an error. */
+ netfs_mark_read_submission(subreq);
+
+ get_file(ki->iocb.ki_filp);
+ cachefiles_grab_object(object, cachefiles_obj_get_ioreq);
+
+ trace_cachefiles_read(object, file_inode(file), ki->iocb.ki_pos, subreq->len);
+ old_nofs = memalloc_nofs_save();
+ ret = cachefiles_inject_read_error();
+ if (ret == 0)
+ ret = vfs_iocb_iter_read(file, &ki->iocb, &iter);
+ memalloc_nofs_restore(old_nofs);
+
+ switch (ret) {
+ case -EIOCBQUEUED:
+ break;
+
+ case -ERESTARTSYS:
+ case -ERESTARTNOINTR:
+ case -ERESTARTNOHAND:
+ case -ERESTART_RESTARTBLOCK:
+ /* There's no easy way to restart the syscall since other AIO's
+ * may be already running. Just fail this IO with EINTR.
+ */
+ ret = -EINTR;
+ fallthrough;
+ default:
+ ki->was_async = false;
+ cachefiles_issue_read_complete(&ki->iocb, ret);
+ break;
+ }
+
+ cachefiles_put_kiocb(ki);
+ _leave(" = %zd", ret);
+ return -EIOCBQUEUED;
+}
+
/*
* Query the occupancy of the cache in a region, returning the extent of the
* next two chunks of cached data and the next hole.
@@ -610,104 +732,67 @@ int __cachefiles_prepare_write(struct cachefiles_object *object,
cachefiles_has_space_for_write);
}
-static int cachefiles_prepare_write(struct netfs_cache_resources *cres,
- loff_t *_start, size_t *_len, size_t upper_len,
- loff_t i_size, bool no_space_allocated_yet)
-{
- struct cachefiles_object *object = cachefiles_cres_object(cres);
- struct cachefiles_cache *cache = object->volume->cache;
- const struct cred *saved_cred;
- int ret;
-
- if (!cachefiles_cres_file(cres)) {
- if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE))
- return -ENOBUFS;
- if (!cachefiles_cres_file(cres))
- return -ENOBUFS;
- }
-
- cachefiles_begin_secure(cache, &saved_cred);
- ret = __cachefiles_prepare_write(object, cachefiles_cres_file(cres),
- _start, _len, upper_len,
- no_space_allocated_yet);
- cachefiles_end_secure(cache, saved_cred);
- return ret;
-}
-
-static void cachefiles_prepare_write_subreq(struct netfs_io_subrequest *subreq)
+static int cachefiles_estimate_write(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate)
{
- struct netfs_io_request *wreq = subreq->rreq;
- struct netfs_cache_resources *cres = &wreq->cache_resources;
- struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
-
- _enter("W=%x[%x] %llx", wreq->debug_id, subreq->debug_index, subreq->start);
-
- stream->sreq_max_len = MAX_RW_COUNT;
- stream->sreq_max_segs = BIO_MAX_VECS;
-
- if (!cachefiles_cres_file(cres)) {
- if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE))
- return netfs_prepare_write_failed(subreq);
- if (!cachefiles_cres_file(cres))
- return netfs_prepare_write_failed(subreq);
- }
+ estimate->issue_at = stream->issue_from + MAX_RW_COUNT;
+ estimate->max_segs = BIO_MAX_VECS;
+ return 0;
}
-static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)
+static int cachefiles_issue_write(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *wreq = subreq->rreq;
struct netfs_cache_resources *cres = &wreq->cache_resources;
struct cachefiles_object *object = cachefiles_cres_object(cres);
struct cachefiles_cache *cache = object->volume->cache;
+ struct iov_iter iter;
const struct cred *saved_cred;
- size_t off, pre, post, len = subreq->len;
loff_t start = subreq->start;
+ size_t len = subreq->len;
int ret;
_enter("W=%x[%x] %llx-%llx",
wreq->debug_id, subreq->debug_index, start, start + len - 1);
- /* We need to start on the cache granularity boundary */
- off = start & (cache->bsize - 1);
- if (off) {
- pre = cache->bsize - off;
- if (pre >= len) {
- fscache_count_dio_misfit();
- netfs_write_subrequest_terminated(subreq, len);
- return;
- }
- subreq->transferred += pre;
- start += pre;
- len -= pre;
- iov_iter_advance(&subreq->io_iter, pre);
- }
-
- /* We also need to end on the cache granularity boundary */
- post = len & (cache->bsize - 1);
- if (post) {
- len -= post;
- if (len == 0) {
- fscache_count_dio_misfit();
- netfs_write_subrequest_terminated(subreq, post);
- return;
- }
- iov_iter_truncate(&subreq->io_iter, len);
+ if (!cachefiles_cres_file(cres)) {
+ if (!fscache_wait_for_operation(cres, FSCACHE_WANT_WRITE))
+ return -EINVAL;
+ if (!cachefiles_cres_file(cres))
+ return -EINVAL;
+ }
+
+ ret = netfs_prepare_write_buffer(subreq, BIO_MAX_VECS);
+ if (ret < 0)
+ return ret;
+
+ /* The buffer extraction function may round out the start and end. */

+ start = subreq->start;
+ len = subreq->len;
+
+ /* We need to start and end on cache granularity boundaries. */
+ if (WARN_ON_ONCE(start & (cache->bsize - 1)) ||
+ WARN_ON_ONCE(len & (cache->bsize - 1))) {
+ fscache_count_dio_misfit();
+ return -EIO;
}
+ iov_iter_bvec_queue(&iter, ITER_SOURCE, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, len);
+
trace_netfs_sreq(subreq, netfs_sreq_trace_cache_prepare);
cachefiles_begin_secure(cache, &saved_cred);
ret = __cachefiles_prepare_write(object, cachefiles_cres_file(cres),
&start, &len, len, true);
cachefiles_end_secure(cache, saved_cred);
- if (ret < 0) {
- netfs_write_subrequest_terminated(subreq, ret);
- return;
- }
+ if (ret < 0)
+ return ret;
trace_netfs_sreq(subreq, netfs_sreq_trace_cache_write);
- cachefiles_write(&subreq->rreq->cache_resources,
- subreq->start, &subreq->io_iter,
+ cachefiles_write(&subreq->rreq->cache_resources, subreq->start, &iter,
netfs_write_subrequest_terminated, subreq);
+ return -EIOCBQUEUED;
}
/*
@@ -854,9 +939,9 @@ static const struct netfs_cache_ops cachefiles_netfs_cache_ops = {
.end_operation = cachefiles_end_operation,
.read = cachefiles_read,
.write = cachefiles_write,
+ .issue_read = cachefiles_issue_read,
.issue_write = cachefiles_issue_write,
- .prepare_write = cachefiles_prepare_write,
- .prepare_write_subreq = cachefiles_prepare_write_subreq,
+ .estimate_write = cachefiles_estimate_write,
.prepare_ondemand_read = cachefiles_prepare_ondemand_read,
.query_occupancy = cachefiles_query_occupancy,
.collect_write = cachefiles_collect_write,
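
To summarise the contract the two functions above implement: every
->issue_read()/->issue_write() now follows the same shape - prepare the buffer
slice, build an ITER_BVEC-queue iterator over it and, once submission has been
marked, return only -EIOCBQUEUED. A minimal sketch of the read side
(my_fs_send() and MY_FS_MAX_SEGS are placeholders, not real API):

	static int my_fs_issue_read(struct netfs_io_subrequest *subreq)
	{
		struct iov_iter iter;
		int ret;

		ret = netfs_prepare_read_buffer(subreq, MY_FS_MAX_SEGS);
		if (ret < 0)
			return ret; /* Errors may still be returned here */

		iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
				    subreq->content.slot,
				    subreq->content.offset, subreq->len);

		/* After this point, we're not allowed to return an error;
		 * completion comes via netfs_read_subreq_terminated().
		 */
		netfs_mark_read_submission(subreq);

		my_fs_send(subreq, &iter); /* Hypothetical transport call */
		return -EIOCBQUEUED;
	}
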
diff --git a/fs/ceph/Kconfig b/fs/ceph/Kconfig
index 3d64a316ca31..aa6ccd7794d2 100644
--- a/fs/ceph/Kconfig
+++ b/fs/ceph/Kconfig
@@ -4,6 +4,7 @@ config CEPH_FS
depends on INET
select CEPH_LIB
select NETFS_SUPPORT
+ select NETFS_PGPRIV2
select FS_ENCRYPTION_ALGS if FS_ENCRYPTION
default n
help
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e87b3bb94ee8..8aab4f7c516f 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -269,7 +269,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
ceph_dec_osd_stopping_blocker(fsc->mdsc);
}
-static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
+static int ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
struct inode *inode = rreq->inode;
@@ -278,7 +278,8 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
struct ceph_mds_request *req;
struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(inode->i_sb);
struct ceph_inode_info *ci = ceph_inode(inode);
- ssize_t err = 0;
+ struct iov_iter iter;
+ ssize_t err;
size_t len;
int mode;
@@ -287,21 +288,33 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
__clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
- if (subreq->start >= inode->i_size)
+ if (subreq->start >= inode->i_size) {
+ __set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
+ err = 0;
goto out;
+ }
+
+ err = netfs_prepare_read_buffer(subreq, INT_MAX);
+ if (err < 0)
+ return err;
+
+ iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset,
+ subreq->len);
/* We need to fetch the inline data. */
mode = ceph_try_to_choose_auth_mds(inode, CEPH_STAT_CAP_INLINE_DATA);
req = ceph_mdsc_create_request(mdsc, CEPH_MDS_OP_GETATTR, mode);
- if (IS_ERR(req)) {
- err = PTR_ERR(req);
- goto out;
- }
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
req->r_ino1 = ci->i_vino;
req->r_args.getattr.mask = cpu_to_le32(CEPH_STAT_CAP_INLINE_DATA);
req->r_num_caps = 2;
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ /* After this point, we're not allowed to return an error. */
+ netfs_mark_read_submission(subreq);
+
err = ceph_mdsc_do_request(mdsc, NULL, req);
if (err < 0)
goto out;
@@ -311,11 +324,11 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
if (iinfo->inline_version == CEPH_INLINE_NONE) {
/* The data got uninlined */
ceph_mdsc_put_request(req);
- return false;
+ return 1;
}
len = min_t(size_t, iinfo->inline_len - subreq->start, subreq->len);
- err = copy_to_iter(iinfo->inline_data + subreq->start, len, &subreq->io_iter);
+ err = copy_to_iter(iinfo->inline_data + subreq->start, len, &iter);
if (err == 0) {
err = -EFAULT;
} else {
@@ -328,26 +341,10 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
subreq->error = err;
trace_netfs_sreq(subreq, netfs_sreq_trace_io_progress);
netfs_read_subreq_terminated(subreq);
- return true;
+ return -EIOCBQUEUED;
}
-static int ceph_netfs_prepare_read(struct netfs_io_subrequest *subreq)
-{
- struct netfs_io_request *rreq = subreq->rreq;
- struct inode *inode = rreq->inode;
- struct ceph_inode_info *ci = ceph_inode(inode);
- struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
- u64 objno, objoff;
- u32 xlen;
-
- /* Truncate the extent at the end of the current block */
- ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len,
- &objno, &objoff, &xlen);
- rreq->io_streams[0].sreq_max_len = umin(xlen, fsc->mount_options->rsize);
- return 0;
-}
-
-static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
+static int ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
struct inode *inode = rreq->inode;
@@ -356,48 +353,65 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
struct ceph_client *cl = fsc->client;
struct ceph_osd_request *req = NULL;
struct ceph_vino vino = ceph_vino(inode);
+ struct iov_iter iter;
+ u64 objno, objoff, len, off = subreq->start;
+ u32 maxlen;
int err;
- u64 len;
bool sparse = IS_ENCRYPTED(inode) || ceph_test_mount_opt(fsc, SPARSEREAD);
- u64 off = subreq->start;
int extent_cnt;
- if (ceph_inode_is_shutdown(inode)) {
- err = -EIO;
- goto out;
+ if (ceph_inode_is_shutdown(inode))
+ return -EIO;
+
+ if (ceph_has_inline_data(ci)) {
+ err = ceph_netfs_issue_op_inline(subreq);
+ if (err != 1)
+ return err;
}
- if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq))
- return;
+ /* Truncate the extent at the end of the current block */
+ ceph_calc_file_object_mapping(&ci->i_layout, subreq->start, subreq->len,
+ &objno, &objoff, &maxlen);
+ maxlen = umin(maxlen, fsc->mount_options->rsize);
+ len = umin(subreq->len, maxlen);
+ subreq->len = len;
// TODO: This rounding here is slightly dodgy. It *should* work, for
// now, as the cache only deals in blocks that are a multiple of
// PAGE_SIZE and fscrypt blocks are at most PAGE_SIZE. What needs to
// happen is for the fscrypt handling to be moved into netfslib and the
// data in the cache also to be stored encrypted.
- len = subreq->len;
ceph_fscrypt_adjust_off_and_len(inode, &off, &len);
req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino,
off, &len, 0, 1, sparse ? CEPH_OSD_OP_SPARSE_READ : CEPH_OSD_OP_READ,
CEPH_OSD_FLAG_READ, NULL, ci->i_truncate_seq,
ci->i_truncate_size, false);
- if (IS_ERR(req)) {
- err = PTR_ERR(req);
- req = NULL;
- goto out;
- }
+ if (IS_ERR(req))
+ return PTR_ERR(req);
if (sparse) {
extent_cnt = __ceph_sparse_read_ext_count(inode, len);
err = ceph_alloc_sparse_ext_map(&req->r_ops[0], extent_cnt);
- if (err)
- goto out;
+ if (err) {
+ ceph_osdc_put_request(req);
+ return err;
+ }
}
doutc(cl, "%llx.%llx pos=%llu orig_len=%zu len=%llu\n",
ceph_vinop(inode), subreq->start, subreq->len, len);
+ err = netfs_prepare_read_buffer(subreq, INT_MAX);
+ if (err < 0) {
+ ceph_osdc_put_request(req);
+ return err;
+ }
+
+ iov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset,
+ subreq->len);
+
/*
* FIXME: For now, use CEPH_OSD_DATA_TYPE_PAGES instead of _ITER for
* encrypted inodes. We'd need infrastructure that handles an iov_iter
@@ -416,13 +430,12 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
* ceph_msg_data_cursor_init() triggers BUG_ON() in the case
* if msg->sparse_read_total > msg->data_length.
*/
- subreq->io_iter.count = len;
-
- err = iov_iter_get_pages_alloc2(&subreq->io_iter, &pages, len, &page_off);
+ err = iov_iter_get_pages_alloc2(&iter, &pages, len, &page_off);
if (err < 0) {
doutc(cl, "%llx.%llx failed to allocate pages, %d\n",
ceph_vinop(inode), err);
- goto out;
+ ceph_osdc_put_request(req);
+ return -EIO;
}
/* should always give us a page-aligned read */
@@ -433,32 +446,28 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
osd_req_op_extent_osd_data_pages(req, 0, pages, len, 0, false,
false);
} else {
- osd_req_op_extent_osd_iter(req, 0, &subreq->io_iter);
+ osd_req_op_extent_osd_iter(req, 0, &iter);
}
if (!ceph_inc_osd_stopping_blocker(fsc->mdsc)) {
- err = -EIO;
- goto out;
+ ceph_osdc_put_request(req);
+ return -EIO;
}
req->r_callback = finish_netfs_read;
req->r_priv = subreq;
req->r_inode = inode;
ihold(inode);
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ /* After this point, we're not allowed to return an error. */
+ netfs_mark_read_submission(subreq);
ceph_osdc_start_request(req->r_osdc, req);
-out:
ceph_osdc_put_request(req);
- if (err) {
- subreq->error = err;
- netfs_read_subreq_terminated(subreq);
- }
- doutc(cl, "%llx.%llx result %d\n", ceph_vinop(inode), err);
+ doutc(cl, "%llx.%llx result -EIOCBQUEUED\n", ceph_vinop(inode));
+ return -EIOCBQUEUED;
}
static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
{
struct inode *inode = rreq->inode;
- struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
struct ceph_client *cl = ceph_inode_to_client(inode);
int got = 0, want = CEPH_CAP_FILE_CACHE;
struct ceph_netfs_request_data *priv;
@@ -510,7 +519,6 @@ static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
priv->caps = got;
rreq->netfs_priv = priv;
- rreq->io_streams[0].sreq_max_len = fsc->mount_options->rsize;
out:
if (ret < 0) {
@@ -538,7 +546,6 @@ static void ceph_netfs_free_request(struct netfs_io_request *rreq)
const struct netfs_request_ops ceph_netfs_ops = {
.init_request = ceph_init_request,
.free_request = ceph_netfs_free_request,
- .prepare_read = ceph_netfs_prepare_read,
.issue_read = ceph_netfs_issue_read,
.expand_readahead = ceph_netfs_expand_readahead,
.check_write_begin = ceph_netfs_check_write_begin,
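
One subtlety in the converted ceph code: ceph_netfs_issue_op_inline() now
returns a tristate rather than a bool, and the caller in
ceph_netfs_issue_read() depends on it. Spelled out (condensed from the code
above):

	err = ceph_netfs_issue_op_inline(subreq);
	/*
	 * -EIOCBQUEUED - the inline data satisfied (or terminated) the read
	 *  1           - the data got uninlined; fall through to an OSD read
	 *  < 0         - hard error; propagate to the dispatcher
	 */
	if (err != 1)
		return err;
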
diff --git a/fs/netfs/Kconfig b/fs/netfs/Kconfig
index 7701c037c328..d0e7b0971fa3 100644
--- a/fs/netfs/Kconfig
+++ b/fs/netfs/Kconfig
@@ -22,6 +22,9 @@ config NETFS_STATS
between CPUs. On the other hand, the stats are very useful for
debugging purposes. Saying 'Y' here is recommended.
+config NETFS_PGPRIV2
+ bool
+
config NETFS_DEBUG
bool "Enable dynamic debugging netfslib and FS-Cache"
depends on NETFS_SUPPORT
diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index 0621e6870cbd..421dd0be413b 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -12,13 +12,13 @@ netfs-y := \
misc.o \
objects.o \
read_collect.o \
- read_pgpriv2.o \
read_retry.o \
read_single.o \
write_collect.o \
write_issue.o \
write_retry.o
+netfs-$(CONFIG_NETFS_PGPRIV2) += read_pgpriv2.o
netfs-$(CONFIG_NETFS_STATS) += stats.o
netfs-$(CONFIG_FSCACHE) += \
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 2cfd33abfb80..81aa99910e5d 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -98,51 +98,68 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in
}
/*
- * netfs_prepare_read_iterator - Prepare the subreq iterator for I/O
- * @subreq: The subrequest to be set up
- *
- * Prepare the I/O iterator representing the read buffer on a subrequest for
- * the filesystem to use for I/O (it can be passed directly to a socket). This
- * is intended to be called from the ->issue_read() method once the filesystem
- * has trimmed the request to the size it wants.
- *
- * Returns the limited size if successful and -ENOMEM if insufficient memory
- * available.
+ * Prepare the I/O buffer on a buffered read subrequest for the filesystem to
+ * use as a bvec queue.
*/
-static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq)
+static int netfs_prepare_buffered_read_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
{
struct netfs_io_request *rreq = subreq->rreq;
struct netfs_io_stream *stream = &rreq->io_streams[0];
ssize_t extracted;
- size_t rsize = subreq->len;
- if (subreq->source == NETFS_DOWNLOAD_FROM_SERVER)
- rsize = umin(rsize, stream->sreq_max_len);
+ _enter("R=%08x[%x] l=%zx s=%u",
+ rreq->debug_id, subreq->debug_index, subreq->len, max_segs);
- bvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);
- extracted = bvecq_slice(&rreq->dispatch_cursor, subreq->len,
- stream->sreq_max_segs, &subreq->nr_segs);
- if (extracted < rsize) {
+ bvecq_pos_set(&subreq->dispatch_pos, &stream->dispatch_cursor);
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+ extracted = bvecq_slice(&stream->dispatch_cursor, subreq->len,
+ max_segs, &subreq->nr_segs);
+
+ stream->buffered -= extracted;
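+ /* Note that stream->issue_from isn't advanced here; for buffered reads
+ * that happens when the subrequest is queued by
+ * netfs_mark_read_submission().
+ */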
+ if (extracted < subreq->len) {
subreq->len = extracted;
trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
}
- return subreq->len;
+ return 0;
}
-/*
- * Issue a read against the cache.
- * - Eats the caller's ref on subreq.
+/**
+ * netfs_prepare_read_buffer - Get the buffer for a subrequest
+ * @subreq: The subrequest to get the buffer for
+ * @max_segs: Maximum number of segments in buffer (or INT_MAX)
+ *
+ * Extract a slice of buffer from the stream and attach it to the subrequest as
+ * a bio_vec queue. The maximum amount of data attached is set by
+ * @subreq->len, but this may be shortened if @max_segs would be exceeded.
+ *
+ * [!] NOTE: This must be run in the same thread that ->issue_read() was
+ * called in, as we may access the readahead_control struct if there is one.
*/
-static void netfs_read_cache_to_pagecache(struct netfs_io_request *rreq,
- struct netfs_io_subrequest *subreq)
+int netfs_prepare_read_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
{
- struct netfs_cache_resources *cres = &rreq->cache_resources;
-
- netfs_stat(&netfs_n_rh_read);
- cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_IGNORE,
- netfs_cache_read_terminated, subreq);
+ switch (subreq->rreq->origin) {
+ case NETFS_READAHEAD:
+ case NETFS_READPAGE:
+ case NETFS_READ_FOR_WRITE:
+ if (subreq->retry_count)
+ return netfs_prepare_buffered_read_retry_buffer(subreq, max_segs);
+ return netfs_prepare_buffered_read_buffer(subreq, max_segs);
+
+ case NETFS_UNBUFFERED_READ:
+ case NETFS_DIO_READ:
+ case NETFS_READ_GAPS:
+ return netfs_prepare_unbuffered_read_buffer(subreq, max_segs);
+ case NETFS_READ_SINGLE:
+ return netfs_prepare_read_single_buffer(subreq, max_segs);
+ default:
+ WARN_ON_ONCE(1);
+ return -EIO;
+ }
}
+EXPORT_SYMBOL(netfs_prepare_read_buffer);
int netfs_read_query_cache(struct netfs_io_request *rreq, struct fscache_occupancy *occ)
{
@@ -157,12 +174,22 @@ int netfs_read_query_cache(struct netfs_io_request *rreq, struct fscache_occupan
return cres->ops->query_occupancy(cres, occ);
}
-static void netfs_queue_read(struct netfs_io_request *rreq,
- struct netfs_io_subrequest *subreq,
- bool last_subreq)
+/**
+ * netfs_mark_read_submission - Mark a read subrequest as being ready for submission
+ * @subreq: The subrequest to be marked
+ *
+ * Calling this marks a read subrequest as being ready for submission and makes
+ * it available to the collection thread. After calling this, the filesystem's
+ * ->issue_read() method may no longer return an error: it must ensure that
+ * netfs_read_subreq_terminated() is eventually called to end the subrequest
+ * and must itself return -EIOCBQUEUED.
+ */
+void netfs_mark_read_submission(struct netfs_io_subrequest *subreq)
{
+ struct netfs_io_request *rreq = subreq->rreq;
struct netfs_io_stream *stream = &rreq->io_streams[0];
+ _enter("R=%08x[%x]", rreq->debug_id, subreq->debug_index);
+
__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
/* We add to the end of the list whilst the collector may be walking
@@ -170,45 +197,57 @@ static void netfs_queue_read(struct netfs_io_request *rreq,
* remove entries off of the front.
*/
spin_lock(&rreq->lock);
- list_add_tail(&subreq->rreq_link, &stream->subrequests);
- if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
- if (!stream->active) {
- stream->collected_to = subreq->start;
- /* Store list pointers before active flag */
- smp_store_release(&stream->active, true);
+ if (list_empty(&subreq->rreq_link)) {
+ list_add_tail(&subreq->rreq_link, &stream->subrequests);
+ if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
+ if (!stream->active) {
+ stream->collected_to = subreq->start;
+ /* Store list pointers before active flag */
+ smp_store_release(&stream->active, true);
+ }
}
}
- if (last_subreq) {
+ rreq->submitted += subreq->len;
+ stream->issue_from = subreq->start + subreq->len;
+ if (!stream->buffered) {
smp_wmb(); /* Write lists before ALL_QUEUED. */
set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+ trace_netfs_rreq(rreq, netfs_rreq_trace_all_queued);
}
spin_unlock(&rreq->lock);
+
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
}
+EXPORT_SYMBOL(netfs_mark_read_submission);
-static void netfs_issue_read(struct netfs_io_request *rreq,
- struct netfs_io_subrequest *subreq)
+static int netfs_issue_read(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq)
{
- bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
- iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
- subreq->content.slot, subreq->content.offset, subreq->len);
+ struct netfs_io_stream *stream = &rreq->io_streams[0];
+
+ _enter("R=%08x[%x]", rreq->debug_id, subreq->debug_index);
switch (subreq->source) {
case NETFS_DOWNLOAD_FROM_SERVER:
- rreq->netfs_ops->issue_read(subreq);
- break;
- case NETFS_READ_FROM_CACHE:
- netfs_read_cache_to_pagecache(rreq, subreq);
- break;
+ return rreq->netfs_ops->issue_read(subreq);
+ case NETFS_READ_FROM_CACHE: {
+ struct netfs_cache_resources *cres = &rreq->cache_resources;
+
+ netfs_stat(&netfs_n_rh_read);
+ cres->ops->issue_read(subreq);
+ return -EIOCBQUEUED;
+ }
default:
- bvecq_zero(&rreq->dispatch_cursor, subreq->len);
+ stream->issue_from = subreq->start + subreq->len;
+ stream->buffered = 0;
+ netfs_mark_read_submission(subreq);
+ bvecq_zero(&stream->dispatch_cursor, subreq->len);
subreq->transferred = subreq->len;
subreq->error = 0;
- iov_iter_zero(subreq->len, &subreq->io_iter);
- subreq->transferred = subreq->len;
netfs_read_subreq_terminated(subreq);
- break;
+ return 0;
}
}
@@ -228,21 +267,18 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
.cached_to[1] = ULLONG_MAX,
};
struct fscache_occupancy *occ = &_occ;
+ struct netfs_io_stream *stream = &rreq->io_streams[0];
struct netfs_inode *ictx = netfs_inode(rreq->inode);
- unsigned long long start = rreq->start;
- ssize_t size = rreq->len;
int ret = 0;
_enter("R=%08x", rreq->debug_id);
- bvecq_pos_set(&rreq->dispatch_cursor, &rreq->load_cursor);
- bvecq_pos_set(&rreq->collect_cursor, &rreq->dispatch_cursor);
+ bvecq_pos_set(&stream->dispatch_cursor, &rreq->load_cursor);
+ bvecq_pos_set(&rreq->collect_cursor, &rreq->load_cursor);
do {
- int (*prepare_read)(struct netfs_io_subrequest *subreq) = NULL;
struct netfs_io_subrequest *subreq;
- unsigned long long hole_to, cache_to;
- ssize_t slice;
+ unsigned long long hole_to, cache_to, stop;
/* If we don't have any, find out the next couple of data
* extents from the cache, containing or following the
@@ -251,7 +287,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
*/
hole_to = occ->cached_from[0];
cache_to = occ->cached_to[0];
- if (start >= cache_to) {
+ if (stream->issue_from >= cache_to) {
/* Extent exhausted; shuffle down. */
int i;
@@ -279,36 +315,33 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
break;
}
- subreq->start = start;
- subreq->len = size;
+ subreq->start = stream->issue_from;
+ stop = stream->issue_from + stream->buffered;
_debug("rsub %llx %llx-%llx", subreq->start, hole_to, cache_to);
- if (start >= hole_to && start < cache_to) {
+ if (stream->issue_from >= hole_to && stream->issue_from < cache_to) {
/* Overlap with a cached region, where the cache may
* record a block of zeroes.
*/
- _debug("cached s=%llx c=%llx l=%zx", start, cache_to, size);
- subreq->len = umin(cache_to - start, size);
+ _debug("cached s=%llx c=%llx l=%zx",
+ stream->issue_from, cache_to, stream->buffered);
+ subreq->len = umin(cache_to - stream->issue_from, stream->buffered);
subreq->len = round_up(subreq->len, occ->granularity);
if (occ->cached_type[0] == FSCACHE_EXTENT_ZERO) {
subreq->source = NETFS_FILL_WITH_ZEROES;
netfs_stat(&netfs_n_rh_zero);
} else {
subreq->source = NETFS_READ_FROM_CACHE;
- prepare_read = rreq->cache_resources.ops->prepare_read;
}
-
- trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
-
} else if ((subreq->start >= ictx->zero_point ||
subreq->start >= rreq->i_size) &&
- size > 0) {
+ subreq->start < stop) {
/* If this range lies beyond the zero-point, that part
* can just be cleared locally.
*/
- _debug("zero %llx-%llx", start, start + size);
- subreq->len = size;
+ _debug("zero %llx-%llx", subreq->start, stop);
+ subreq->len = stream->buffered;
subreq->source = NETFS_FILL_WITH_ZEROES;
if (rreq->cache_resources.ops)
__set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
@@ -319,10 +352,10 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
* that part can just be cleared locally.
*/
unsigned long long zlimit = umin(rreq->i_size, ictx->zero_point);
- unsigned long long limit = min3(zlimit, start + size, hole_to);
+ unsigned long long limit = min3(zlimit, stop, hole_to);
_debug("limit %llx %llx", rreq->i_size, ictx->zero_point);
- _debug("download %llx-%llx", start, start + size);
+ _debug("download %llx-%llx", subreq->start, stop);
subreq->len = umin(limit - subreq->start, ULONG_MAX);
subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
if (rreq->cache_resources.ops)
@@ -330,10 +363,10 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
netfs_stat(&netfs_n_rh_download);
}
- if (size == 0) {
+ if (subreq->len == 0) {
pr_err("ZERO-LEN READ: R=%08x[%x] l=%zx/%zx s=%llx z=%llx i=%llx",
rreq->debug_id, subreq->debug_index,
- subreq->len, size,
+ subreq->len, stream->buffered,
subreq->start, ictx->zero_point, rreq->i_size);
trace_netfs_sreq(subreq, netfs_sreq_trace_cancel);
/* Not queued - release both refs. */
@@ -342,24 +375,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
break;
}
- rreq->io_streams[0].sreq_max_len = MAX_RW_COUNT;
- rreq->io_streams[0].sreq_max_segs = INT_MAX;
-
- if (prepare_read) {
- ret = prepare_read(subreq);
- if (ret < 0) {
- subreq->error = ret;
- /* Not queued - release both refs. */
- netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
- netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
- break;
- }
- trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
- }
-
- slice = netfs_prepare_read_iterator(subreq);
- if (slice < 0) {
- ret = slice;
+ ret = netfs_issue_read(rreq, subreq);
+ if (ret != 0 && ret != -EIOCBQUEUED) {
subreq->error = ret;
trace_netfs_sreq(subreq, netfs_sreq_trace_cancel);
/* Not queued - release both refs. */
@@ -367,18 +384,12 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
break;
}
- size -= slice;
- start += slice;
+ ret = 0;
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
-
- netfs_queue_read(rreq, subreq, size <= 0);
- netfs_issue_read(rreq, subreq);
- netfs_maybe_bulk_drop_ra_refs(rreq);
cond_resched();
- } while (size > 0);
+ } while (stream->buffered > 0);
- if (unlikely(size > 0)) {
+ if (unlikely(stream->buffered > 0)) {
smp_wmb(); /* Write lists before ALL_QUEUED. */
set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
netfs_wake_collector(rreq);
@@ -388,7 +399,7 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
cmpxchg(&rreq->error, 0, ret);
bvecq_pos_unset(&rreq->load_cursor);
- bvecq_pos_unset(&rreq->dispatch_cursor);
+ bvecq_pos_unset(&stream->dispatch_cursor);
}
/**
@@ -409,17 +420,22 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
void netfs_readahead(struct readahead_control *ractl)
{
struct netfs_io_request *rreq;
+ struct netfs_io_stream *stream;
struct netfs_inode *ictx = netfs_inode(ractl->mapping->host);
unsigned long long start = readahead_pos(ractl);
ssize_t added;
size_t size = readahead_length(ractl);
int ret;
+ _enter("");
+
rreq = netfs_alloc_request(ractl->mapping, ractl->file, start, size,
NETFS_READAHEAD);
if (IS_ERR(rreq))
return;
+ stream = &rreq->io_streams[0];
+
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &rreq->flags);
ret = netfs_begin_cache_read(rreq, ictx);
@@ -446,6 +462,8 @@ void netfs_readahead(struct readahead_control *ractl)
rreq->submitted = rreq->start + added;
rreq->cleaned_to = rreq->start;
rreq->front_folio_order = get_order(rreq->load_cursor.bvecq->bv[0].bv_len);
+ stream->issue_from = rreq->start;
+ stream->buffered = added;
netfs_read_to_pagecache(rreq);
netfs_maybe_bulk_drop_ra_refs(rreq);
@@ -461,6 +479,7 @@ EXPORT_SYMBOL(netfs_readahead);
*/
static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio)
{
+ struct netfs_io_stream *stream = &rreq->io_streams[0];
struct bvecq *bq;
size_t fsize = folio_size(folio);
@@ -470,6 +489,8 @@ static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct fo
bq = rreq->load_cursor.bvecq;
bvec_set_folio(&bq->bv[bq->nr_slots++], folio, fsize, 0);
rreq->submitted = rreq->start + fsize;
+ stream->issue_from = rreq->start;
+ stream->buffered = fsize;
return 0;
}
@@ -479,6 +500,7 @@ static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct fo
static int netfs_read_gaps(struct file *file, struct folio *folio)
{
struct netfs_io_request *rreq;
+ struct netfs_io_stream *stream;
struct address_space *mapping = folio->mapping;
struct netfs_folio *finfo = netfs_folio_info(folio);
struct netfs_inode *ctx = netfs_inode(mapping->host);
@@ -499,6 +521,7 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
ret = PTR_ERR(rreq);
goto alloc_error;
}
+ stream = &rreq->io_streams[0];
ret = netfs_begin_cache_read(rreq, ctx);
if (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)
@@ -546,6 +569,8 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)
}
rreq->submitted = rreq->start + flen;
+ stream->issue_from = rreq->start;
+ stream->buffered = flen;
netfs_read_to_pagecache(rreq);
@@ -618,6 +643,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)
goto discard;
netfs_read_to_pagecache(rreq);
+
ret = netfs_wait_for_read(rreq);
netfs_put_request(rreq, netfs_rreq_trace_put_return);
return ret < 0 ? ret : 0;
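
A note on the bookkeeping above, as the same pattern recurs in the DIO and
pgpriv2 paths: each stream now carries an issue window of
[issue_from, issue_from + buffered) covering the file range still to be
dispatched, and each buffer-preparation helper shrinks it by the amount
sliced off for the subrequest, so the dispatch loops simply run until
stream->buffered reaches zero. In outline (illustrative, not a verbatim
extract):

	/* Set up the window before dispatching... */
	stream->issue_from = rreq->start;
	stream->buffered = added;	/* Bytes loaded into the buffer */

	/* ...then each subrequest consumes the front of it. */
	subreq->start = stream->issue_from;
	len = bvecq_slice(&stream->dispatch_cursor, subreq->len, max_segs,
			  &subreq->nr_segs);
	stream->issue_from += len;
	stream->buffered -= len;
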
diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index bce3e7109ec1..855c14118c53 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -114,8 +114,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
.range_start = iocb->ki_pos,
.range_end = iocb->ki_pos + iter->count,
};
- struct netfs_io_request *wreq = NULL;
- struct folio *folio = NULL, *writethrough = NULL;
+ struct netfs_writethrough *writethrough = NULL;
+ struct folio *folio = NULL;
unsigned int bdp_flags = (iocb->ki_flags & IOCB_NOWAIT) ? BDP_ASYNC : 0;
ssize_t written = 0, ret, ret2;
loff_t pos = iocb->ki_pos;
@@ -132,15 +132,13 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
goto out;
}
- wreq = netfs_begin_writethrough(iocb, iter->count);
- if (IS_ERR(wreq)) {
+ writethrough = netfs_begin_writethrough(iocb, iter->count);
+ if (IS_ERR(writethrough)) {
wbc_detach_inode(&wbc);
- ret = PTR_ERR(wreq);
- wreq = NULL;
+ ret = PTR_ERR(writethrough);
+ writethrough = NULL;
goto out;
}
- if (!is_sync_kiocb(iocb))
- wreq->iocb = iocb;
netfs_stat(&netfs_n_wh_writethrough);
} else {
netfs_stat(&netfs_n_wh_buffered_write);
@@ -264,7 +262,7 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
* a file that's open for reading as ->read_folio() then has to
* be able to flush it.
*/
- if ((file->f_mode & FMODE_READ) ||
-     netfs_is_cache_enabled(ctx)) {
+ if (netfs_is_cache_enabled(ctx)) {
if (finfo) {
netfs_stat(&netfs_n_wh_wstream_conflict);
@@ -355,13 +353,12 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
pos += copied;
written += copied;
- if (likely(!wreq)) {
+ if (likely(!writethrough)) {
folio_mark_dirty(folio);
folio_unlock(folio);
} else {
- netfs_advance_writethrough(wreq, &wbc, folio, copied,
- offset + copied == flen,
- &writethrough);
+ netfs_advance_writethrough(writethrough, &wbc, folio, copied,
+ offset + copied == flen);
/* Folio unlocked */
}
retry:
@@ -385,8 +382,8 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
ctx->ops->post_modify(inode);
}
- if (unlikely(wreq)) {
- ret2 = netfs_end_writethrough(wreq, &wbc, writethrough);
+ if (unlikely(writethrough)) {
+ ret2 = netfs_end_writethrough(writethrough, &wbc);
wbc_detach_inode(&wbc);
if (ret2 == -EIOCBQUEUED)
return ret2;
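
The writethrough interface now passes around an opaque struct
netfs_writethrough handle instead of exposing the netfs_io_request and the
cached folio to the caller. From the caller's side the sequence reduces to
(a sketch of the code above):

	struct netfs_writethrough *wt;

	wt = netfs_begin_writethrough(iocb, iov_iter_count(iter));
	if (IS_ERR(wt))
		return PTR_ERR(wt);

	/* For each folio that gets data copied into it: */
	netfs_advance_writethrough(wt, &wbc, folio, copied,
				   offset + copied == flen);

	ret = netfs_end_writethrough(wt, &wbc); /* May return -EIOCBQUEUED */
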
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 05d09ba3d0d0..872e44227368 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -16,6 +16,28 @@
#include <linux/netfs.h>
#include "internal.h"
+int netfs_prepare_unbuffered_read_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ struct netfs_io_request *rreq = subreq->rreq;
+ struct netfs_io_stream *stream = &rreq->io_streams[0];
+ size_t len;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &stream->dispatch_cursor);
+ bvecq_pos_set(&subreq->content, &stream->dispatch_cursor);
+ len = bvecq_slice(&stream->dispatch_cursor, subreq->len, max_segs,
+ &subreq->nr_segs);
+
+ if (len < subreq->len) {
+ subreq->len = len;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+ }
+
+ stream->buffered -= subreq->len;
+ stream->issue_from += subreq->len;
+ return 0;
+}
+
/*
* Perform a read to a buffer from the server, slicing up the region to be read
* according to the network rsize.
@@ -23,11 +45,9 @@
static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
{
struct netfs_io_stream *stream = &rreq->io_streams[0];
- unsigned long long start = rreq->start;
- ssize_t size = rreq->len;
int ret = 0;
- bvecq_pos_set(&rreq->dispatch_cursor, &rreq->load_cursor);
+ bvecq_pos_transfer(&stream->dispatch_cursor, &rreq->load_cursor);
do {
struct netfs_io_subrequest *subreq;
@@ -39,66 +59,36 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
}
subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
- subreq->start = start;
- subreq->len = size;
-
- __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
-
- spin_lock(&rreq->lock);
- list_add_tail(&subreq->rreq_link, &stream->subrequests);
- if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
- if (!stream->active) {
- stream->collected_to = subreq->start;
- /* Store list pointers before active flag */
- smp_store_release(&stream->active, true);
- }
- }
- trace_netfs_sreq(subreq, netfs_sreq_trace_added);
- spin_unlock(&rreq->lock);
+ subreq->start = stream->issue_from;
+ subreq->len = stream->buffered;
netfs_stat(&netfs_n_rh_download);
- if (rreq->netfs_ops->prepare_read) {
- ret = rreq->netfs_ops->prepare_read(subreq);
- if (ret < 0) {
- netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
- break;
- }
- }
- bvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);
- bvecq_pos_set(&subreq->content, &rreq->dispatch_cursor);
- subreq->len = bvecq_slice(&rreq->dispatch_cursor,
- umin(size, stream->sreq_max_len),
- stream->sreq_max_segs,
- &subreq->nr_segs);
-
- size -= subreq->len;
- start += subreq->len;
- rreq->submitted += subreq->len;
- if (size <= 0) {
- smp_wmb(); /* Write lists before ALL_QUEUED. */
- set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+ ret = rreq->netfs_ops->issue_read(subreq);
+ if (ret != 0 && ret != -EIOCBQUEUED) {
+ subreq->error = ret;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_cancel);
+ /* Not queued - release both refs. */
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
+ break;
}
- iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
- subreq->content.slot, subreq->content.offset, subreq->len);
-
- rreq->netfs_ops->issue_read(subreq);
-
+ ret = 0;
if (test_bit(NETFS_RREQ_PAUSE, &rreq->flags))
netfs_wait_for_paused_read(rreq);
if (test_bit(NETFS_RREQ_FAILED, &rreq->flags))
break;
cond_resched();
- } while (size > 0);
+ } while (stream->buffered > 0);
- if (unlikely(size > 0)) {
+ if (unlikely(stream->buffered > 0)) {
smp_wmb(); /* Write lists before ALL_QUEUED. */
set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
netfs_wake_collector(rreq);
}
- bvecq_pos_unset(&rreq->dispatch_cursor);
+ bvecq_pos_unset(&stream->dispatch_cursor);
return ret;
}
@@ -154,6 +144,7 @@ static ssize_t netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *iter)
{
struct netfs_io_request *rreq;
+ struct netfs_io_stream *stream;
ssize_t ret;
size_t orig_count = iov_iter_count(iter);
bool sync = is_sync_kiocb(iocb);
@@ -178,6 +169,8 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
netfs_stat(&netfs_n_rh_dio_read);
trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_dio_read);
+ stream = &rreq->io_streams[0];
+
/* If this is an async op, we have to keep track of the destination
* buffer for ourselves as the caller's iterator will be trashed when
* we return.
@@ -192,6 +185,10 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i
if (ret < 0)
goto error_put;
+ rreq->len = ret;
+ stream->buffered = ret;
+ stream->issue_from = rreq->start;
+
// TODO: Set up bounce buffer if needed
if (!sync) {
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index a61c6d6fd17f..b04b16d35c38 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -9,6 +9,32 @@
#include <linux/uio.h>
#include "internal.h"
+/*
+ * Prepare the buffer for an unbuffered/DIO write.
+ */
+int netfs_prepare_unbuffered_write_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ struct netfs_io_stream *stream = &subreq->rreq->io_streams[subreq->stream_nr];
+ size_t len;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &stream->dispatch_cursor);
+ bvecq_pos_set(&subreq->content, &stream->dispatch_cursor);
+ len = bvecq_slice(&stream->dispatch_cursor, subreq->len, max_segs,
+ &subreq->nr_segs);
+
+ if (len < subreq->len) {
+ subreq->len = len;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+ }
+
+ // TODO: Wait here for completion of prev subreq
+
+ stream->issue_from += subreq->len;
+ stream->buffered -= subreq->len;
+ return 0;
+}
+
/*
* Perform the cleanup rituals after an unbuffered write is complete.
*/
@@ -74,9 +100,9 @@ static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,
wreq->transferred += subreq->transferred;
if (subreq->transferred < subreq->len) {
- bvecq_pos_unset(&wreq->dispatch_cursor);
- bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos);
- bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred);
+ bvecq_pos_unset(&stream->dispatch_cursor);
+ bvecq_pos_transfer(&stream->dispatch_cursor, &subreq->dispatch_pos);
+ bvecq_pos_advance(&stream->dispatch_cursor, subreq->transferred);
}
stream->collected_to = subreq->start + subreq->transferred;
@@ -85,6 +111,7 @@ static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,
trace_netfs_collect_stream(wreq, stream);
trace_netfs_collect_state(wreq, wreq->collected_to, 0);
+ /* TODO: Progressively clean up wreq->direct_bq */
}
/*
@@ -103,60 +130,60 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
_enter("%llx", wreq->len);
- bvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);
- bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+ stream->issue_from = wreq->start;
+ stream->buffered = wreq->len;
+ bvecq_pos_set(&stream->dispatch_cursor, &wreq->load_cursor);
+ bvecq_pos_set(&wreq->collect_cursor, &stream->dispatch_cursor);
if (wreq->origin == NETFS_DIO_WRITE)
inode_dio_begin(wreq->inode);
- stream->collected_to = wreq->start;
-
for (;;) {
bool retry = false;
if (!subreq) {
- netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred);
- subreq = stream->construct;
- stream->construct = NULL;
- } else {
- bvecq_pos_set(&subreq->dispatch_pos, &wreq->dispatch_cursor);
+ subreq = netfs_alloc_write_subreq(wreq, stream);
+ if (!subreq)
+ return -ENOMEM;
}
- /* Check if (re-)preparation failed. */
- if (unlikely(test_bit(NETFS_SREQ_FAILED, &subreq->flags))) {
- netfs_write_subrequest_terminated(subreq, subreq->error);
- wreq->error = subreq->error;
+ ret = stream->issue_write(subreq);
+ switch (ret) {
+ case 0:
+ /* Already completed synchronously. */
break;
- }
-
- subreq->len = bvecq_slice(&wreq->dispatch_cursor, stream->sreq_max_len,
- stream->sreq_max_segs, &subreq->nr_segs);
- bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
-
- iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
- subreq->content.bvecq, subreq->content.slot,
- subreq->content.offset,
- subreq->len);
-
- if (!iov_iter_count(&subreq->io_iter))
+ case -EIOCBQUEUED:
+ /* Async, need to wait. */
+ ret = netfs_wait_for_in_progress_subreq(wreq, subreq);
+ if (ret < 0) {
+ if (ret == -EAGAIN) {
+ retry = true;
+ break;
+ }
+
+ list_del_init(&subreq->rreq_link);
+ ret = subreq->error;
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_failed);
+ subreq = NULL;
+ goto failed;
+ }
break;
-
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
- stream->issue_write(subreq);
-
- /* Async, need to wait. */
- netfs_wait_for_in_progress_stream(wreq, stream);
-
- if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
+ case -EAGAIN:
+ /* Need to retry. */
+ __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
retry = true;
- } else if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) {
- ret = subreq->error;
+ break;
+ default:
+ /* Probably failed before dispatch. */
+ subreq->error = ret;
wreq->error = ret;
- netfs_see_subrequest(subreq, netfs_sreq_trace_see_failed);
+ __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ trace_netfs_sreq(subreq, netfs_sreq_trace_cancel);
+ list_del_init(&subreq->rreq_link);
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
subreq = NULL;
- break;
+ goto failed;
}
- ret = 0;
if (!retry) {
netfs_unbuffered_write_collect(wreq, stream, subreq);
@@ -171,20 +198,21 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
continue;
}
- /* We need to retry the last subrequest, so first reset the
- * iterator, taking into account what, if anything, we managed
- * to transfer.
+ /* We need to retry the last subrequest, so first wind back the
+ * buffer position.
*/
subreq->error = -EAGAIN;
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
bvecq_pos_unset(&subreq->content);
- bvecq_pos_unset(&wreq->dispatch_cursor);
- bvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos);
+ bvecq_pos_unset(&stream->dispatch_cursor);
+ bvecq_pos_transfer(&stream->dispatch_cursor, &subreq->dispatch_pos);
if (subreq->transferred > 0) {
- wreq->transferred += subreq->transferred;
- bvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred);
+ wreq->transferred += subreq->transferred;
+ stream->issue_from -= subreq->len - subreq->transferred;
+ stream->buffered += subreq->len - subreq->transferred;
+ bvecq_pos_advance(&stream->dispatch_cursor, subreq->transferred);
}
if (stream->source == NETFS_UPLOAD_TO_SERVER &&
@@ -192,25 +220,21 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)
wreq->netfs_ops->retry_request(wreq, stream);
__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
- __clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
- subreq->start = wreq->start + wreq->transferred;
- subreq->len = wreq->len - wreq->transferred;
+ __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+ subreq->start = stream->issue_from;
+ subreq->len = stream->buffered;
subreq->transferred = 0;
subreq->retry_count += 1;
- stream->sreq_max_len = UINT_MAX;
- stream->sreq_max_segs = INT_MAX;
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- if (stream->prepare_write)
- stream->prepare_write(subreq);
__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
netfs_stat(&netfs_n_wh_retry_write_subreq);
}
- bvecq_pos_unset(&wreq->dispatch_cursor);
- bvecq_pos_unset(&wreq->load_cursor);
+failed:
+ bvecq_pos_unset(&stream->dispatch_cursor);
netfs_unbuffered_write_done(wreq);
_leave(" = %d", ret);
return ret;
@@ -254,6 +278,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
if (IS_ERR(wreq))
return PTR_ERR(wreq);
+ wreq->len = iov_iter_count(iter);
wreq->io_streams[0].avail = true;
trace_netfs_write(wreq, (iocb->ki_flags & IOCB_DIRECT ?
netfs_write_trace_dio_write :
@@ -264,9 +289,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
* we have to save the source buffer as the iterator is only
* good until we return. In such a case, extract an iterator
* to represent as much of the output buffer as we can
- * manage. Note that the extraction might not be able to
- * allocate a sufficiently large bvec array and may shorten the
- * request.
+ * manage. Note that the extraction may shorten the request.
*/
ssize_t n = netfs_extract_iter(iter, len, INT_MAX, iocb->ki_pos,
&wreq->load_cursor.bvecq, 0);
@@ -281,8 +304,6 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
wreq->load_cursor.bvecq->max_slots);
}
- __set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
-
/* Copy the data into the bounce buffer and encrypt it. */
// TODO
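
For reference, the issue_write() return-code contract that the rewritten loop
in netfs_unbuffered_write() relies on, collected in one place (my reading of
the switch arms above):

	/*
	 *  0            - completed synchronously; collect and move on
	 *  -EIOCBQUEUED - in flight; wait with
	 *                 netfs_wait_for_in_progress_subreq()
	 *  -EAGAIN      - couldn't be issued now; set NEED_RETRY and rewind
	 *  other < 0    - hard failure before dispatch; cancel the subreq
	 */
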
diff --git a/fs/netfs/fscache_io.c b/fs/netfs/fscache_io.c
index 37f05b4d3469..70b10ac23a27 100644
--- a/fs/netfs/fscache_io.c
+++ b/fs/netfs/fscache_io.c
@@ -239,10 +239,6 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
fscache_access_io_write) < 0)
goto abandon_free;
- ret = cres->ops->prepare_write(cres, &start, &len, len, i_size, false);
- if (ret < 0)
- goto abandon_end;
-
/* TODO: Consider clearing page bits now for space the write isn't
* covering. This is more complicated than it appears when THPs are
* taken into account.
@@ -252,8 +248,6 @@ void __fscache_write_to_cache(struct fscache_cookie *cookie,
fscache_write(cres, start, &iter, fscache_wreq_done, wreq);
return;
-abandon_end:
- return fscache_wreq_done(wreq, ret);
abandon_free:
kfree(wreq);
abandon:
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index ddae82f94ce0..ecf7cd5b5ca1 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -34,6 +34,18 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
loff_t pos, size_t copied);
+/*
+ * direct_read.c
+ */
+int netfs_prepare_unbuffered_read_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs);
+
+/*
+ * direct_write.c
+ */
+int netfs_prepare_unbuffered_write_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs);
+
/*
* main.c
*/
@@ -70,6 +82,8 @@ struct bvecq *netfs_buffer_make_space(struct netfs_io_request *rreq,
enum netfs_bvecq_trace trace);
void netfs_wake_collector(struct netfs_io_request *rreq);
void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq);
+int netfs_wait_for_in_progress_subreq(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq);
void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,
struct netfs_io_stream *stream);
ssize_t netfs_wait_for_read(struct netfs_io_request *rreq);
@@ -113,16 +127,53 @@ void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error);
/*
* read_pgpriv2.c
*/
+#ifdef CONFIG_NETFS_PGPRIV2
+int netfs_prepare_pgpriv2_write_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs);
void netfs_pgpriv2_copy_to_cache(struct netfs_io_request *rreq, struct folio *folio);
void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq);
bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq);
+static inline bool netfs_using_pgpriv2(const struct netfs_io_request *rreq)
+{
+ return test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags);
+}
+#else
+static inline int netfs_prepare_pgpriv2_write_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ return -EIO;
+}
+static inline void netfs_pgpriv2_copy_to_cache(struct netfs_io_request *rreq, struct folio *folio)
+{
+}
+static inline void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
+{
+}
+static inline bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *wreq)
+{
+ return true;
+}
+static inline bool netfs_using_pgpriv2(const struct netfs_io_request *rreq)
+{
+ return false;
+}
+#endif
/*
* read_retry.c
*/
+int netfs_prepare_buffered_read_retry_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs);
+int netfs_reset_for_read_retry(struct netfs_io_subrequest *subreq);
void netfs_retry_reads(struct netfs_io_request *rreq);
void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq);
+/*
+ * read_single.c
+ */
+int netfs_prepare_read_single_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs);
+
/*
* stats.c
*/
@@ -194,30 +245,25 @@ void netfs_write_collection_worker(struct work_struct *work);
/*
* write_issue.c
*/
+struct netfs_writethrough;
struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
struct file *file,
loff_t start,
enum netfs_io_origin origin);
-void netfs_prepare_write(struct netfs_io_request *wreq,
- struct netfs_io_stream *stream,
- loff_t start);
-void netfs_reissue_write(struct netfs_io_stream *stream,
- struct netfs_io_subrequest *subreq);
-void netfs_issue_write(struct netfs_io_request *wreq,
- struct netfs_io_stream *stream);
-size_t netfs_advance_write(struct netfs_io_request *wreq,
- struct netfs_io_stream *stream,
- loff_t start, size_t len, bool to_eof);
-struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len);
-int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
- struct folio *folio, size_t copied, bool to_page_end,
- struct folio **writethrough_cache);
-ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
- struct folio *writethrough_cache);
+struct netfs_io_subrequest *netfs_alloc_write_subreq(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream);
+struct netfs_writethrough *netfs_begin_writethrough(struct kiocb *iocb, size_t len);
+int netfs_advance_writethrough(struct netfs_writethrough *wthru,
+ struct writeback_control *wbc,
+ struct folio *folio, size_t copied, bool to_page_end);
+ssize_t netfs_end_writethrough(struct netfs_writethrough *wthru,
+ struct writeback_control *wbc);
/*
* write_retry.c
*/
+int netfs_prepare_write_retry_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs);
void netfs_retry_writes(struct netfs_io_request *wreq);
/*
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 7969c0b1f9a9..69164e8b8e57 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -102,14 +102,14 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
}
if (got == 0) {
- pr_err("extract_pages gave nothing from %zu, %zu\n",
+ pr_err("extract_pages gave nothing from %zx, %zx\n",
extracted, orig_len);
ret = -EIO;
goto out;
}
- if (got > orig_len - extracted) {
- pr_err("extract_pages rc=%zd more than %zu\n",
+ if (got > orig_len) {
+ pr_err("extract_pages rc=%zx more than %zx\n",
got, orig_len);
goto out;
}
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index a19724389147..796dc227c2b2 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -232,6 +232,37 @@ void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq)
netfs_wake_collector(rreq);
}
+/*
+ * Wait for a subrequest to come to completion.
+ */
+int netfs_wait_for_in_progress_subreq(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq)
+{
+ if (netfs_check_subreq_in_progress(subreq)) {
+ DEFINE_WAIT(myself);
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_wait_quiesce);
+ for (;;) {
+ prepare_to_wait(&rreq->waitq, &myself, TASK_UNINTERRUPTIBLE);
+
+ if (!netfs_check_subreq_in_progress(subreq))
+ break;
+
+ trace_netfs_sreq(subreq, netfs_sreq_trace_wait_for);
+ schedule();
+ }
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_waited_quiesce);
+ finish_wait(&rreq->waitq, &myself);
+ }
+
+ if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
+ return -EAGAIN;
+ if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
+ return subreq->error;
+ return 0;
+}
+
/*
* Wait for all outstanding I/O in a stream to quiesce.
*/
@@ -361,7 +392,7 @@ static ssize_t netfs_wait_for_in_progress(struct netfs_io_request *rreq,
case NETFS_UNBUFFERED_WRITE:
break;
default:
- if (rreq->submitted < rreq->len) {
+ if (rreq->transferred < rreq->len) {
trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
ret = -EIO;
}
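
netfs_wait_for_in_progress_subreq() folds the wait and the outcome check into
a single call; the unbuffered write path uses it like this (condensed from
direct_write.c above):

	ret = stream->issue_write(subreq);
	if (ret == -EIOCBQUEUED) {
		ret = netfs_wait_for_in_progress_subreq(wreq, subreq);
		if (ret == -EAGAIN)
			retry = true;	/* NETFS_SREQ_NEED_RETRY was set */
		else if (ret < 0)
			goto failed;	/* subreq->error is propagated */
	}
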
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index eff431cd7d6a..3db79943762d 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -46,8 +46,6 @@ struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
rreq->i_size = i_size_read(inode);
rreq->debug_id = atomic_inc_return(&debug_ids);
rreq->wsize = INT_MAX;
- rreq->io_streams[0].sreq_max_len = ULONG_MAX;
- rreq->io_streams[0].sreq_max_segs = 0;
spin_lock_init(&rreq->lock);
INIT_LIST_HEAD(&rreq->io_streams[0].subrequests);
INIT_LIST_HEAD(&rreq->io_streams[1].subrequests);
@@ -134,8 +132,10 @@ static void netfs_deinit_request(struct netfs_io_request *rreq)
if (rreq->cache_resources.ops)
rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
bvecq_pos_unset(&rreq->load_cursor);
- bvecq_pos_unset(&rreq->dispatch_cursor);
bvecq_pos_unset(&rreq->collect_cursor);
+ bvecq_pos_unset(&rreq->retry_cursor);
+ for (int i = 0; i < NR_IO_STREAMS; i++)
+ bvecq_pos_unset(&rreq->io_streams[i].dispatch_cursor);
if (atomic_dec_and_test(&ictx->io_count))
wake_up_var(&ictx->io_count);
@@ -226,6 +226,7 @@ static void netfs_free_subrequest(struct netfs_io_subrequest *subreq)
struct netfs_io_request *rreq = subreq->rreq;
trace_netfs_sreq(subreq, netfs_sreq_trace_free);
+ WARN_ON_ONCE(!list_empty(&subreq->rreq_link));
if (rreq->netfs_ops->free_subrequest)
rreq->netfs_ops->free_subrequest(subreq);
bvecq_pos_unset(&subreq->dispatch_pos);
diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
index 6d49f9a6b1f0..fbb0425ecb89 100644
--- a/fs/netfs/read_collect.c
+++ b/fs/netfs/read_collect.c
@@ -36,6 +36,7 @@ static void netfs_clear_unread(struct netfs_io_subrequest *subreq)
if (subreq->start + subreq->transferred >= subreq->rreq->i_size)
__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);
+ trace_netfs_rreq(subreq->rreq, netfs_rreq_trace_zero_unread);
}
/*
@@ -58,7 +59,7 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,
flush_dcache_folio(folio);
folio_mark_uptodate(folio);
- if (!test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) {
+ if (!netfs_using_pgpriv2(rreq)) {
finfo = netfs_folio_info(folio);
if (finfo) {
trace_netfs_folio(folio, netfs_folio_trace_filled_gaps);
@@ -264,8 +265,7 @@ static void netfs_collect_read_results(struct netfs_io_request *rreq)
transferred = front->len;
trace_netfs_rreq(rreq, netfs_rreq_trace_set_abandon);
}
- if (front->start + transferred >= rreq->cleaned_to + fsize ||
- test_bit(NETFS_SREQ_HIT_EOF, &front->flags))
+ if (front->start + transferred >= rreq->cleaned_to + fsize)
netfs_read_unlock_folios(rreq, ¬es);
} else {
stream->collected_to = front->start + transferred;
@@ -381,31 +381,6 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)
inode_dio_end(rreq->inode);
}
-/*
- * Do processing after reading a monolithic single object.
- */
-static void netfs_rreq_assess_single(struct netfs_io_request *rreq)
-{
- struct netfs_io_stream *stream = &rreq->io_streams[0];
-
- if (!rreq->error && stream->source == NETFS_DOWNLOAD_FROM_SERVER &&
- fscache_resources_valid(&rreq->cache_resources)) {
- trace_netfs_rreq(rreq, netfs_rreq_trace_dirty);
- netfs_single_mark_inode_dirty(rreq->inode);
- }
-
- if (rreq->iocb) {
- rreq->iocb->ki_pos += rreq->transferred;
- if (rreq->iocb->ki_complete) {
- trace_netfs_rreq(rreq, netfs_rreq_trace_ki_complete);
- rreq->iocb->ki_complete(
- rreq->iocb, rreq->error ? rreq->error : rreq->transferred);
- }
- }
- if (rreq->netfs_ops->done)
- rreq->netfs_ops->done(rreq);
-}
-
/*
* Perform the collection of subrequests and folios.
*
@@ -441,7 +416,7 @@ bool netfs_read_collection(struct netfs_io_request *rreq)
netfs_rreq_assess_dio(rreq);
break;
case NETFS_READ_SINGLE:
- netfs_rreq_assess_single(rreq);
+ WARN_ON_ONCE(1);
break;
default:
break;
diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
index fb783318318e..5f4d1a21afc5 100644
--- a/fs/netfs/read_pgpriv2.c
+++ b/fs/netfs/read_pgpriv2.c
@@ -13,8 +13,39 @@
#include <linux/task_io_accounting_ops.h>
#include "internal.h"
+int netfs_prepare_pgpriv2_write_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ struct netfs_io_request *creq = subreq->rreq;
+ struct netfs_io_stream *stream = &creq->io_streams[1];
+ size_t len;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &stream->dispatch_cursor);
+ bvecq_pos_set(&subreq->content, &stream->dispatch_cursor);
+ len = bvecq_slice(&stream->dispatch_cursor, subreq->len, max_segs,
+ &subreq->nr_segs);
+
+ if (len < subreq->len) {
+ subreq->len = len;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+ }
+
+ // TODO: Wait here for completion of prev subreq
+
+ stream->issue_from += subreq->len;
+ stream->buffered -= subreq->len;
+ if (stream->buffered == 0) {
+ smp_wmb(); /* Write lists before ALL_QUEUED. */
+ set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
+ }
+ return 0;
+}
+
/*
- * [DEPRECATED] Copy a folio to the cache with PG_private_2 set.
+ * [DEPRECATED] Copy a folio to the cache with PG_private_2 set. Note that the
+ * folio won't necessarily be contiguous with the previous one as there might
+ * be a mixture of folios read from the cache and downloaded from the server
+ * (or just zeroed).
*/
static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio *folio)
{
@@ -24,7 +55,6 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio
size_t dio_size = PAGE_SIZE;
size_t fsize = folio_size(folio), flen = fsize;
loff_t fpos = folio_pos(folio), i_size;
- bool to_eof = false;
_enter("");
@@ -44,12 +74,8 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio
if (fpos + fsize > creq->i_size)
creq->i_size = i_size;
- if (flen > i_size - fpos) {
+ if (flen > i_size - fpos)
flen = i_size - fpos;
- to_eof = true;
- } else if (flen == i_size - fpos) {
- to_eof = true;
- }
flen = round_up(flen, dio_size);
@@ -57,7 +83,6 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio
trace_netfs_folio(folio, netfs_folio_trace_store_copy);
-
/* Institute a new bvec queue segment if the current one is full or if
* we encounter a discontiguity. The discontiguity break is important
* when it comes to bulk unlocking folios by file range.
@@ -79,40 +104,13 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio
/* Attach the folio to the rolling buffer. */
slot = queue->nr_slots;
bvec_set_folio(&queue->bv[slot], folio, fsize, 0);
- /* Order incrementing the slot counter after the slot is filled. */
- smp_store_release(&queue->nr_slots, slot + 1);
+ queue->nr_slots = slot + 1;
creq->load_cursor.slot = slot + 1;
creq->load_cursor.offset = 0;
trace_netfs_bv_slot(queue, slot);
+ trace_netfs_wback(creq, folio, 0);
- cache->submit_off = 0;
- cache->submit_len = flen;
-
- /* Attach the folio to one or more subrequests. For a big folio, we
- * could end up with thousands of subrequests if the wsize is small -
- * but we might need to wait during the creation of subrequests for
- * network resources (eg. SMB credits).
- */
- do {
- ssize_t part;
-
- creq->dispatch_cursor.offset = cache->submit_off;
-
- atomic64_set(&creq->issued_to, fpos + cache->submit_off);
- part = netfs_advance_write(creq, cache, fpos + cache->submit_off,
- cache->submit_len, to_eof);
- cache->submit_off += part;
- if (part > cache->submit_len)
- cache->submit_len = 0;
- else
- cache->submit_len -= part;
- } while (cache->submit_len > 0);
-
- bvecq_pos_step(&creq->dispatch_cursor);
- atomic64_set(&creq->issued_to, fpos + fsize);
-
- if (flen < fsize)
- netfs_issue_write(creq, cache);
+ cache->buffered += flen;
}
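To illustrate the discontiguity break mentioned above (invented numbers): copying a 16KiB folio at file position 0 followed by one at 32KiB produces two segments:

	bvecq #0: fpos=0x0000,                 bv[0] = {folio0, 16KiB}
	bvecq #1: fpos=0x8000, discontig=true, bv[0] = {folio1, 16KiB}

The fpos/discontig pair is what lets the collector unlock folios in bulk by file range without walking every bio_vec.
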
/*
@@ -122,6 +120,7 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
struct netfs_io_request *rreq, struct folio *folio)
{
struct netfs_io_request *creq;
+ struct netfs_io_stream *cache;
if (!fscache_resources_valid(&rreq->cache_resources))
goto cancel;
@@ -131,12 +130,15 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(
if (IS_ERR(creq))
goto cancel;
- if (!creq->io_streams[1].avail)
+ cache = &creq->io_streams[1];
+ if (!cache->avail)
+ goto cancel_put;
+
+ if (bvecq_buffer_init(&creq->load_cursor, GFP_KERNEL) < 0)
goto cancel_put;
- bvecq_buffer_init(&creq->load_cursor, GFP_KERNEL);
- bvecq_pos_set(&creq->dispatch_cursor, &creq->load_cursor);
- bvecq_pos_set(&creq->collect_cursor, &creq->dispatch_cursor);
+ bvecq_pos_set(&cache->dispatch_cursor, &creq->load_cursor);
+ bvecq_pos_set(&creq->collect_cursor, &creq->load_cursor);
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags);
trace_netfs_copy2cache(rreq, creq);
@@ -171,19 +173,43 @@ void netfs_pgpriv2_copy_to_cache(struct netfs_io_request *rreq, struct folio *fo
netfs_pgpriv2_copy_folio(creq, folio);
}
+/*
+ * Issue all pending writes on the cache stream.
+ */
+static int netfs_pgpriv2_issue_stream(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream)
+{
+ int ret;
+
+ atomic64_set_release(&stream->issued_to, wreq->start);
+
+ do {
+ struct netfs_io_subrequest *subreq;
+
+ subreq = netfs_alloc_write_subreq(wreq, stream);
+ if (!subreq)
+ return -ENOMEM;
+
+ ret = stream->issue_write(subreq);
+ if (ret < 0 && ret != -EIOCBQUEUED)
+ break;
+ } while (stream->buffered > 0);
+
+ return ret;
+}
+
/*
* [DEPRECATED] End writing to the cache, flushing out any outstanding writes.
*/
void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)
{
struct netfs_io_request *creq = rreq->copy_to_cache;
+ struct netfs_io_stream *stream = &creq->io_streams[1];
if (IS_ERR_OR_NULL(creq))
return;
- netfs_issue_write(creq, &creq->io_streams[1]);
- smp_wmb(); /* Write lists before ALL_QUEUED. */
- set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
+ netfs_pgpriv2_issue_stream(creq, stream);
trace_netfs_rreq(rreq, netfs_rreq_trace_end_copy_to_cache);
if (list_empty_careful(&creq->io_streams[1].subrequests))
netfs_wake_collector(creq);
diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
index 6f2eb14aac72..b3bc924ffe8e 100644
--- a/fs/netfs/read_retry.c
+++ b/fs/netfs/read_retry.c
@@ -9,19 +9,55 @@
#include <linux/slab.h>
#include "internal.h"
-static void netfs_reissue_read(struct netfs_io_request *rreq,
- struct netfs_io_subrequest *subreq)
+/*
+ * Prepare the I/O buffer on a buffered read subrequest for the filesystem to
+ * use as a bvec queue.
+ */
+int netfs_prepare_buffered_read_retry_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
{
+ struct netfs_io_request *rreq = subreq->rreq;
+ size_t len;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &rreq->retry_cursor);
bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
- iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
- subreq->content.slot, subreq->content.offset, subreq->len);
- iov_iter_advance(&subreq->io_iter, subreq->transferred);
+ len = bvecq_slice(&rreq->retry_cursor, subreq->len, max_segs,
+ &subreq->nr_segs);
+ if (len < subreq->len) {
+ subreq->len = len;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+ }
+ rreq->retry_buffered -= subreq->len;
+ rreq->retry_start += subreq->len;
+ return 0;
+}
- subreq->error = 0;
+/*
+ * Reset the state of the subrequest and discard any buffering so that we can
+ * retry (where this may include sending it to the server instead of the
+ * cache).
+ */
+int netfs_reset_for_read_retry(struct netfs_io_subrequest *subreq)
+{
+ trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+
+ if (subreq->retry_count > 3) {
+ trace_netfs_sreq(subreq, netfs_sreq_trace_too_many_retries);
+ return subreq->error;
+ }
+
+ subreq->retry_count++;
__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+ __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+ __clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
- netfs_stat(&netfs_n_rh_retry_read_subreq);
- subreq->rreq->netfs_ops->issue_read(subreq);
+ bvecq_pos_unset(&subreq->content);
+ bvecq_pos_unset(&subreq->dispatch_pos);
+ subreq->error = 0;
+ subreq->transferred = 0;
+ netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+	netfs_stat(&netfs_n_rh_retry_read_subreq);
+ return 0;
}
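The reset helper pairs with the issue loops below; condensed, the contract they follow is (sketch of the pattern used later in this file):

	ret = rreq->netfs_ops->issue_read(subreq);
	if (ret < 0 && ret != -EIOCBQUEUED) {
		subreq->error = ret;
		if (ret == -EAGAIN) {
			/* Let the collector requeue it. */
			__set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
			netfs_read_subreq_terminated(subreq);
		} else {
			__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
		}
	}
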
/*
@@ -32,8 +68,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
{
struct netfs_io_subrequest *subreq;
struct netfs_io_stream *stream = &rreq->io_streams[0];
- struct bvecq_pos dispatch_cursor = {};
struct list_head *next;
+ int ret;
_enter("R=%x", rreq->debug_id);
@@ -43,47 +79,19 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
if (rreq->netfs_ops->retry_request)
rreq->netfs_ops->retry_request(rreq, NULL);
- /* If there's no renegotiation to do, just resend each retryable subreq
- * up to the first permanently failed one.
- */
- if (!rreq->netfs_ops->prepare_read &&
- !rreq->cache_resources.ops) {
- list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
- if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
- break;
- if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
- __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
- subreq->retry_count++;
- netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- netfs_reissue_read(rreq, subreq);
- }
- }
- return;
- }
-
/* Okay, we need to renegotiate all the download requests and flip any
* failed cache reads over to being download requests and negotiate
- * those also. All fully successful subreqs have been removed from the
- * list and any spare data from those has been donated.
- *
- * What we do is decant the list and rebuild it one subreq at a time so
- * that we don't end up with donations jumping over a gap we're busy
- * populating with smaller subrequests. In the event that the subreq
- * we just launched finishes before we insert the next subreq, it'll
- * fill in rreq->prev_donated instead.
- *
- * Note: Alternatively, we could split the tail subrequest right before
- * we reissue it and fix up the donations under lock.
+ * those also.
*/
next = stream->subrequests.next;
do {
struct netfs_io_subrequest *from, *to, *tmp;
- unsigned long long start, len;
- size_t part;
- bool boundary = false, subreq_superfluous = false;
+ unsigned long long start;
+ size_t len;
+ bool subreq_superfluous = false;
- bvecq_pos_unset(&dispatch_cursor);
+ bvecq_pos_unset(&rreq->retry_cursor);
/* Go through the subreqs and find the next span of contiguous
* buffer that we then rejig (cifs, for example, needs the
@@ -98,8 +106,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
rreq->debug_id, from->debug_index,
from->start, from->transferred, from->len);
- if (test_bit(NETFS_SREQ_FAILED, &from->flags) ||
- !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) {
+ if (!test_bit(NETFS_SREQ_NEED_RETRY, &from->flags)) {
subreq = from;
goto abandon;
}
@@ -107,68 +114,53 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
list_for_each_continue(next, &stream->subrequests) {
subreq = list_entry(next, struct netfs_io_subrequest, rreq_link);
if (subreq->start + subreq->transferred != start + len ||
- test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) ||
!test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
break;
to = subreq;
len += to->len;
}
- _debug(" - range: %llx-%llx %llx", start, start + len - 1, len);
+ _debug(" - range: %llx-%llx %zx", start, start + len - 1, len);
/* Determine the set of buffers we're going to use. Each
- * subreq gets a subset of a single overall contiguous buffer.
+ * subreq takes a subset of a single overall contiguous buffer.
*/
- bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);
- bvecq_pos_advance(&dispatch_cursor, from->transferred);
+ bvecq_pos_transfer(&rreq->retry_cursor, &from->dispatch_pos);
+ bvecq_pos_advance(&rreq->retry_cursor, from->transferred);
+ rreq->retry_start = start;
+ rreq->retry_buffered = len;
/* Work through the sublist. */
subreq = from;
list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
- if (!len) {
+ if (rreq->retry_buffered == 0) {
subreq_superfluous = true;
break;
}
subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
- subreq->start = start - subreq->transferred;
- subreq->len = len + subreq->transferred;
- __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
- __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
- subreq->retry_count++;
+ subreq->start = rreq->retry_start;
+ subreq->len = rreq->retry_buffered;
- bvecq_pos_unset(&subreq->dispatch_pos);
- bvecq_pos_unset(&subreq->content);
-
- trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-
- /* Renegotiate max_len (rsize) */
- stream->sreq_max_len = subreq->len;
- stream->sreq_max_segs = INT_MAX;
- if (rreq->netfs_ops->prepare_read &&
- rreq->netfs_ops->prepare_read(subreq) < 0) {
- trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
+ ret = netfs_reset_for_read_retry(subreq);
+ if (ret < 0) {
__set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ rreq->error = ret;
goto abandon;
}
- bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
- part = bvecq_slice(&dispatch_cursor,
- umin(len, stream->sreq_max_len),
- stream->sreq_max_segs,
- &subreq->nr_segs);
- subreq->len = subreq->transferred + part;
-
- len -= part;
- start += part;
- if (!len) {
- if (boundary)
- __set_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
- } else {
- __clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
+ netfs_stat(&netfs_n_rh_download);
+ ret = rreq->netfs_ops->issue_read(subreq);
+ if (ret < 0 && ret != -EIOCBQUEUED) {
+ if (ret == -ENOMEM)
+ goto abandon;
+ subreq->error = ret;
+ if (ret != -EAGAIN) {
+ __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ goto abandon_after;
+ }
+ __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+ netfs_read_subreq_terminated(subreq);
}
-
- netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- netfs_reissue_read(rreq, subreq);
if (subreq == to) {
subreq_superfluous = false;
break;
@@ -178,7 +170,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
/* If we managed to use fewer subreqs, we can discard the
* excess; if we used the same number, then we're done.
*/
- if (!len) {
+ if (rreq->retry_buffered == 0) {
if (!subreq_superfluous)
continue;
list_for_each_entry_safe_from(subreq, tmp,
@@ -194,7 +186,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
}
/* We ran out of subrequests, so we need to allocate some more
- * and insert them after.
+	 * and insert them after.  They must start out marked for retry so
+	 * that they pick up the retry cursor.
*/
do {
subreq = netfs_alloc_subrequest(rreq);
@@ -203,8 +196,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
goto abandon_after;
}
subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
- subreq->start = start;
- subreq->len = len;
+ subreq->start = rreq->retry_start;
+ subreq->len = rreq->retry_buffered;
subreq->stream_nr = stream->stream_nr;
subreq->retry_count = 1;
@@ -216,37 +209,26 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
to = list_next_entry(to, rreq_link);
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
- stream->sreq_max_len = umin(len, rreq->rsize);
- stream->sreq_max_segs = INT_MAX;
-
netfs_stat(&netfs_n_rh_download);
- if (rreq->netfs_ops->prepare_read(subreq) < 0) {
- trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
- __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
- goto abandon;
+ ret = rreq->netfs_ops->issue_read(subreq);
+ if (ret < 0 && ret != -EIOCBQUEUED) {
+ if (ret == -ENOMEM)
+ goto abandon;
+ subreq->error = ret;
+ if (ret != -EAGAIN) {
+ __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ goto abandon_after;
+ }
+ __set_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+ netfs_read_subreq_terminated(subreq);
}
- bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
- part = bvecq_slice(&dispatch_cursor,
- umin(len, stream->sreq_max_len),
- stream->sreq_max_segs,
- &subreq->nr_segs);
- subreq->len = subreq->transferred + part;
-
- len -= part;
- start += part;
- if (!len && boundary) {
- __set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
- boundary = false;
- }
-
- netfs_reissue_read(rreq, subreq);
- } while (len);
+ } while (rreq->retry_buffered > 0);
} while (!list_is_head(next, &stream->subrequests));
out:
- bvecq_pos_unset(&dispatch_cursor);
+ bvecq_pos_unset(&rreq->retry_cursor);
return;
/* If we hit an error, fail all remaining incomplete subrequests */
@@ -295,8 +277,6 @@ void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq)
struct bvecq *p;
for (p = rreq->collect_cursor.bvecq; p; p = p->next) {
- if (!p->free)
- continue;
for (int slot = 0; slot < p->nr_slots; slot++) {
if (!p->bv[slot].bv_page)
continue;
@@ -310,6 +290,7 @@ void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq)
}
trace_netfs_folio(folio, netfs_folio_trace_abandon);
folio_unlock(folio);
+ p->bv[slot].bv_page = NULL;
}
}
}
diff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c
index b386cae77ece..52b9e12a820a 100644
--- a/fs/netfs/read_single.c
+++ b/fs/netfs/read_single.c
@@ -16,6 +16,19 @@
#include <linux/netfs.h>
#include "internal.h"
+int netfs_prepare_read_single_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ struct netfs_io_request *rreq = subreq->rreq;
+ struct netfs_io_stream *stream = &rreq->io_streams[0];
+
+ bvecq_pos_set(&subreq->dispatch_pos, &rreq->load_cursor);
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+
+ stream->issue_from += subreq->len;
+ return 0;
+}
+
/**
* netfs_single_mark_inode_dirty - Mark a single, monolithic object inode dirty
* @inode: The inode to mark
@@ -58,24 +71,12 @@ static int netfs_single_begin_cache_read(struct netfs_io_request *rreq, struct n
return fscache_begin_read_operation(&rreq->cache_resources, netfs_i_cookie(ctx));
}
-static void netfs_single_read_cache(struct netfs_io_request *rreq,
- struct netfs_io_subrequest *subreq)
-{
- struct netfs_cache_resources *cres = &rreq->cache_resources;
-
- _enter("R=%08x[%x]", rreq->debug_id, subreq->debug_index);
- netfs_stat(&netfs_n_rh_read);
- cres->ops->read(cres, subreq->start, &subreq->io_iter, NETFS_READ_HOLE_FAIL,
- netfs_cache_read_terminated, subreq);
-}
-
/*
* Perform a read to a buffer from the cache or the server. Only a single
* subreq is permitted as the object must be fetched in a single transaction.
*/
static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
{
- struct netfs_io_stream *stream = &rreq->io_streams[0];
struct fscache_occupancy occ = {
.query_from = 0,
.query_to = rreq->len,
@@ -85,76 +86,79 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)
.cached_to[1] = ULLONG_MAX,
};
struct netfs_io_subrequest *subreq;
- int ret = 0;
+ int ret;
+
+ ret = netfs_read_query_cache(rreq, &occ);
+ if (ret < 0)
+ return ret;
subreq = netfs_alloc_subrequest(rreq);
if (!subreq)
return -ENOMEM;
- subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
subreq->start = 0;
subreq->len = rreq->len;
- bvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);
- bvecq_pos_set(&subreq->content, &rreq->dispatch_cursor);
-
- iov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,
- subreq->content.slot, subreq->content.offset, subreq->len);
+ trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
/* Try to use the cache if the cache content matches the size of the
* remote file.
*/
- netfs_read_query_cache(rreq, &occ);
if (occ.cached_from[0] == 0 &&
- occ.cached_to[0] == rreq->len)
- subreq->source = NETFS_READ_FROM_CACHE;
+ occ.cached_to[0] == rreq->len) {
+ struct netfs_cache_resources *cres = &rreq->cache_resources;
- __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+ subreq->source = NETFS_READ_FROM_CACHE;
+ netfs_stat(&netfs_n_rh_read);
+ ret = cres->ops->issue_read(subreq);
+ if (ret == -EIOCBQUEUED)
+ ret = netfs_wait_for_in_progress_subreq(rreq, subreq);
+ if (ret == -ENOMEM)
+ goto cancel;
+ if (ret == 0)
+ goto success;
+
+ /* Didn't manage to retrieve from the cache, so toss it to the
+ * server instead.
+ */
+ if (netfs_reset_for_read_retry(subreq) < 0)
+ goto cancel;
+ }
- spin_lock(&rreq->lock);
- list_add_tail(&subreq->rreq_link, &stream->subrequests);
- trace_netfs_sreq(subreq, netfs_sreq_trace_added);
- /* Store list pointers before active flag */
- smp_store_release(&stream->active, true);
- spin_unlock(&rreq->lock);
+ __set_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);
- switch (subreq->source) {
- case NETFS_DOWNLOAD_FROM_SERVER:
+	/* Download from the server instead; the data will then be copied to
+	 * the cache.
+	 */
+ for (;;) {
+ subreq->source = NETFS_DOWNLOAD_FROM_SERVER;
netfs_stat(&netfs_n_rh_download);
- if (rreq->netfs_ops->prepare_read) {
- ret = rreq->netfs_ops->prepare_read(subreq);
- if (ret < 0)
- goto cancel;
- }
-
- rreq->netfs_ops->issue_read(subreq);
- rreq->submitted += subreq->len;
- break;
- case NETFS_READ_FROM_CACHE:
- if (rreq->cache_resources.ops->prepare_read) {
- ret = rreq->cache_resources.ops->prepare_read(subreq);
- if (ret < 0)
- goto cancel;
- }
-
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
- netfs_single_read_cache(rreq, subreq);
- rreq->submitted += subreq->len;
- ret = 0;
- break;
- default:
- pr_warn("Unexpected single-read source %u\n", subreq->source);
- WARN_ON_ONCE(true);
- ret = -EIO;
- break;
+ ret = rreq->netfs_ops->issue_read(subreq);
+ if (ret == -EIOCBQUEUED)
+ ret = netfs_wait_for_in_progress_subreq(rreq, subreq);
+ if (ret == 0)
+ goto success;
+ if (ret == -ENOMEM)
+ goto cancel;
+ if (ret != -EAGAIN)
+ goto failed;
+ if (netfs_reset_for_read_retry(subreq) < 0)
+ goto cancel;
}
- smp_wmb(); /* Write lists before ALL_QUEUED. */
- set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
- return ret;
+success:
+ rreq->transferred = subreq->transferred;
+ list_del_init(&subreq->rreq_link);
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_consumed);
+ return 0;
cancel:
+ rreq->error = ret;
+ list_del_init(&subreq->rreq_link);
netfs_put_subrequest(subreq, netfs_sreq_trace_put_cancel);
return ret;
+failed:
+ rreq->error = ret;
+ list_del_init(&subreq->rreq_link);
+ netfs_put_subrequest(subreq, netfs_sreq_trace_put_failed);
+ return ret;
}
/**
@@ -185,7 +189,7 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
if (IS_ERR(rreq))
return PTR_ERR(rreq);
- ret = netfs_extract_iter(iter, rreq->len, INT_MAX, 0, &rreq->dispatch_cursor.bvecq, 0);
+ ret = netfs_extract_iter(iter, rreq->len, INT_MAX, 0, &rreq->load_cursor.bvecq, 0);
if (ret < 0)
goto cleanup_free;
@@ -196,9 +200,29 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite
netfs_stat(&netfs_n_rh_read_single);
trace_netfs_read(rreq, 0, rreq->len, netfs_read_trace_read_single);
- netfs_single_dispatch_read(rreq);
+ ret = netfs_single_dispatch_read(rreq);
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_complete);
+ if (ret == 0) {
+ task_io_account_read(rreq->transferred);
+
+ if (test_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags) &&
+ fscache_resources_valid(&rreq->cache_resources)) {
+ trace_netfs_rreq(rreq, netfs_rreq_trace_dirty);
+ netfs_single_mark_inode_dirty(rreq->inode);
+ }
+ ret = rreq->transferred;
+ }
+
+ if (rreq->netfs_ops->done)
+ rreq->netfs_ops->done(rreq);
+
+ netfs_wake_rreq_flag(rreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
+ /* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+ netfs_put_request(rreq, netfs_rreq_trace_put_work_ip);
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_done);
- ret = netfs_wait_for_read(rreq);
netfs_put_request(rreq, netfs_rreq_trace_put_return);
return ret;
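For reference, a minimal usage sketch of the above (hypothetical caller; the destination buffer is supplied as an iov_iter and extracted into the load cursor as shown):

	struct folio *folio = ...;	/* preallocated destination */
	size_t size = ...;		/* size of the monolithic object */
	struct iov_iter iter;
	struct bio_vec bv;
	ssize_t ret;

	bvec_set_folio(&bv, folio, size, 0);
	iov_iter_bvec(&iter, ITER_DEST, &bv, 1, size);
	ret = netfs_read_single(inode, file, &iter);
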
diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index fb8daf50c86d..bfca6d48361f 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -28,8 +28,8 @@ static void netfs_dump_request(const struct netfs_io_request *rreq)
rreq->origin, rreq->error);
pr_err(" st=%llx tsl=%zx/%llx/%llx\n",
rreq->start, rreq->transferred, rreq->submitted, rreq->len);
- pr_err(" cci=%llx/%llx/%llx\n",
- rreq->cleaned_to, rreq->collected_to, atomic64_read(&rreq->issued_to));
+	pr_err(" cc=%llx/%llx\n",
+	       rreq->cleaned_to, rreq->collected_to);
pr_err(" iw=%pSR\n", rreq->netfs_ops->issue_write);
for (int i = 0; i < NR_IO_STREAMS; i++) {
const struct netfs_io_subrequest *sreq;
@@ -38,8 +38,9 @@ static void netfs_dump_request(const struct netfs_io_request *rreq)
pr_err(" str[%x] s=%x e=%d acnf=%u,%u,%u,%u\n",
s->stream_nr, s->source, s->error,
s->avail, s->active, s->need_retry, s->failed);
- pr_err(" str[%x] ct=%llx t=%zx\n",
- s->stream_nr, s->collected_to, s->transferred);
+ pr_err(" str[%x] it=%llx ct=%llx t=%zx\n",
+ s->stream_nr, atomic64_read(&s->issued_to),
+ s->collected_to, s->transferred);
list_for_each_entry(sreq, &s->subrequests, rreq_link) {
pr_err(" sreq[%x:%x] sc=%u s=%llx t=%zx/%zx r=%d f=%lx\n",
sreq->stream_nr, sreq->debug_index, sreq->source,
@@ -56,7 +57,7 @@ static void netfs_dump_request(const struct netfs_io_request *rreq)
*/
int netfs_folio_written_back(struct folio *folio)
{
- enum netfs_folio_trace why = netfs_folio_trace_clear;
+ enum netfs_folio_trace why = netfs_folio_trace_endwb;
struct netfs_inode *ictx = netfs_inode(folio->mapping->host);
struct netfs_folio *finfo;
struct netfs_group *group = NULL;
@@ -76,13 +77,13 @@ int netfs_folio_written_back(struct folio *folio)
group = finfo->netfs_group;
gcount++;
kfree(finfo);
- why = netfs_folio_trace_clear_s;
+ why = netfs_folio_trace_endwb_s;
goto end_wb;
}
if ((group = netfs_folio_group(folio))) {
if (group == NETFS_FOLIO_COPY_TO_CACHE) {
- why = netfs_folio_trace_clear_cc;
+ why = netfs_folio_trace_endwb_cc;
folio_detach_private(folio);
goto end_wb;
}
@@ -95,7 +96,7 @@ int netfs_folio_written_back(struct folio *folio)
if (!folio_test_dirty(folio)) {
folio_detach_private(folio);
gcount++;
- why = netfs_folio_trace_clear_g;
+ why = netfs_folio_trace_endwb_g;
}
}
@@ -222,9 +223,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
trace_netfs_rreq(wreq, netfs_rreq_trace_collect);
reassess_streams:
- /* Order reading the issued_to point before reading the queue it refers to. */
- issued_to = atomic64_read_acquire(&wreq->issued_to);
- smp_rmb();
+ issued_to = ULLONG_MAX;
collected_to = ULLONG_MAX;
if (wreq->origin == NETFS_WRITEBACK ||
wreq->origin == NETFS_WRITETHROUGH ||
@@ -239,14 +238,26 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
* to the tail whilst we're doing this.
*/
for (s = 0; s < NR_IO_STREAMS; s++) {
+ unsigned long long s_issued_to;
+
stream = &wreq->io_streams[s];
- /* Read active flag before list pointers */
+ /* Read active flag before issued_to */
if (!smp_load_acquire(&stream->active))
continue;
- front = list_first_entry_or_null(&stream->subrequests,
- struct netfs_io_subrequest, rreq_link);
- while (front) {
+ for (;;) {
+ /* Order reading the issued_to point before reading the
+ * queue it refers to.
+ */
+ s_issued_to = atomic64_read_acquire(&stream->issued_to);
+ if (s_issued_to < issued_to)
+ issued_to = s_issued_to;
+
+ front = list_first_entry_or_null(&stream->subrequests,
+ struct netfs_io_subrequest, rreq_link);
+ if (!front)
+ break;
+
trace_netfs_collect_sreq(wreq, front);
//_debug("sreq [%x] %llx %zx/%zx",
// front->debug_index, front->start, front->transferred, front->len);
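Note that each stream's issued_to now forms a release/acquire pair with the issuing side in write_issue.c: the issuer publishes after loading the queue with

	/* Order loading the queue before updating the issue point. */
	atomic64_set_release(&stream->issued_to, stream->issue_from);

and the collector picks it up with the atomic64_read_acquire() above, so the subrequest list it then walks is guaranteed to be populated up to that point.
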
diff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c
index d4c4bee4299e..ec84d2bcabeb 100644
--- a/fs/netfs/write_issue.c
+++ b/fs/netfs/write_issue.c
@@ -36,6 +36,39 @@
#include <linux/pagemap.h>
#include "internal.h"
+#define NOTE_UPLOAD_AVAIL 0x001 /* Upload is available */
+#define NOTE_CACHE_AVAIL 0x002 /* Local cache is available */
+#define NOTE_CACHE_COPY 0x004 /* Copy folio to cache */
+#define NOTE_UPLOAD 0x008 /* Upload folio to server */
+#define NOTE_UPLOAD_STARTED 0x010 /* Upload started */
+#define NOTE_STREAMW 0x020 /* Folio is from a streaming write */
+#define NOTE_DISCONTIG_BEFORE 0x040 /* Folio discontiguous with the previous folio */
+#define NOTE_DISCONTIG_AFTER 0x080 /* Folio discontiguous with the next folio */
+#define NOTE_TO_EOF 0x100 /* Data in folio ends at EOF */
+#define NOTE_FLUSH_ANYWAY	0x200	/* Flush data even if the estimated limit isn't hit */
+
+#define NOTES__KEEP_MASK (NOTE_UPLOAD_AVAIL | NOTE_CACHE_AVAIL | NOTE_UPLOAD_STARTED)
+
+struct netfs_wb_params {
+ unsigned long long last_end; /* End file pos of previous folio */
+ unsigned long long folio_start; /* File pos of folio */
+ unsigned int folio_len; /* Length of folio */
+ unsigned int dirty_offset; /* Offset of dirty region in folio */
+ unsigned int dirty_len; /* Length of dirty region in folio */
+ unsigned int notes; /* Notes on applicability */
+	struct bvecq_pos dispatch_cursor;	/* Bvec queue anchor for the issue point */
+ struct netfs_write_estimate estimates[2];
+};
+
+struct netfs_writethrough {
+ struct netfs_wb_params params;
+ struct netfs_io_request *wreq;
+ struct folio *in_progress;
+};
+
+static int netfs_prepare_write_single_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs);
+
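The notes word is rebuilt for each folio: only the bits in NOTES__KEEP_MASK are sticky across folios. The per-folio cycle in netfs_writepages() then boils down to:

	params.notes &= NOTES__KEEP_MASK;	/* keep *_AVAIL and UPLOAD_STARTED */
	error = netfs_queue_wb_folio(wreq, wbc, folio, &params);
	if (error == 0)
		error = netfs_issue_streams(wreq, &params);
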
/*
* Kill all dirty folios in the event of an unrecoverable error, starting with
* a locked folio we've already obtained from writeback_iter().
@@ -115,65 +148,48 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,
wreq->io_streams[0].stream_nr = 0;
wreq->io_streams[0].source = NETFS_UPLOAD_TO_SERVER;
- wreq->io_streams[0].prepare_write = ictx->ops->prepare_write;
+ wreq->io_streams[0].applicable = NOTE_UPLOAD;
+ wreq->io_streams[0].estimate_write = ictx->ops->estimate_write;
wreq->io_streams[0].issue_write = ictx->ops->issue_write;
wreq->io_streams[0].collected_to = start;
wreq->io_streams[0].transferred = 0;
wreq->io_streams[1].stream_nr = 1;
wreq->io_streams[1].source = NETFS_WRITE_TO_CACHE;
+ wreq->io_streams[1].applicable = NOTE_CACHE_COPY;
wreq->io_streams[1].collected_to = start;
wreq->io_streams[1].transferred = 0;
if (fscache_resources_valid(&wreq->cache_resources)) {
wreq->io_streams[1].avail = true;
wreq->io_streams[1].active = true;
- wreq->io_streams[1].prepare_write = wreq->cache_resources.ops->prepare_write_subreq;
+ wreq->io_streams[1].estimate_write = wreq->cache_resources.ops->estimate_write;
wreq->io_streams[1].issue_write = wreq->cache_resources.ops->issue_write;
}
return wreq;
}
-/**
- * netfs_prepare_write_failed - Note write preparation failed
- * @subreq: The subrequest to mark
- *
- * Mark a subrequest to note that preparation for write failed.
- */
-void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq)
-{
- __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
- trace_netfs_sreq(subreq, netfs_sreq_trace_prep_failed);
-}
-EXPORT_SYMBOL(netfs_prepare_write_failed);
-
/*
- * Prepare a write subrequest. We need to allocate a new subrequest
- * if we don't have one.
+ * Allocate and prepare a write subrequest.
*/
-void netfs_prepare_write(struct netfs_io_request *wreq,
- struct netfs_io_stream *stream,
- loff_t start)
+struct netfs_io_subrequest *netfs_alloc_write_subreq(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream)
{
struct netfs_io_subrequest *subreq;
subreq = netfs_alloc_subrequest(wreq);
subreq->source = stream->source;
- subreq->start = start;
+ subreq->start = stream->issue_from;
+ subreq->len = stream->buffered;
subreq->stream_nr = stream->stream_nr;
- bvecq_pos_set(&subreq->dispatch_pos, &wreq->dispatch_cursor);
-
_enter("R=%x[%x]", wreq->debug_id, subreq->debug_index);
trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
- stream->sreq_max_len = UINT_MAX;
- stream->sreq_max_segs = INT_MAX;
switch (stream->source) {
case NETFS_UPLOAD_TO_SERVER:
netfs_stat(&netfs_n_wh_upload);
- stream->sreq_max_len = wreq->wsize;
break;
case NETFS_WRITE_TO_CACHE:
netfs_stat(&netfs_n_wh_write);
@@ -183,9 +199,6 @@ void netfs_prepare_write(struct netfs_io_request *wreq,
break;
}
- if (stream->prepare_write)
- stream->prepare_write(subreq);
-
__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
/* We add to the end of the list whilst the collector may be walking
@@ -194,84 +207,46 @@ void netfs_prepare_write(struct netfs_io_request *wreq,
*/
spin_lock(&wreq->lock);
list_add_tail(&subreq->rreq_link, &stream->subrequests);
- if (list_is_first(&subreq->rreq_link, &stream->subrequests)) {
- if (!stream->active) {
- stream->collected_to = subreq->start;
- /* Write list pointers before active flag */
- smp_store_release(&stream->active, true);
- }
- }
+ if (list_is_first(&subreq->rreq_link, &stream->subrequests) &&
+ stream->collected_to == 0)
+ stream->collected_to = subreq->start;
spin_unlock(&wreq->lock);
-
- stream->construct = subreq;
+ return subreq;
}
/*
- * Set the I/O iterator for the filesystem/cache to use and dispatch the I/O
- * operation. The operation may be asynchronous and should call
- * netfs_write_subrequest_terminated() when complete.
+ * Prepare the buffer for a buffered write.
*/
-static void netfs_do_issue_write(struct netfs_io_stream *stream,
- struct netfs_io_subrequest *subreq)
+static int netfs_prepare_buffered_write_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
{
struct netfs_io_request *wreq = subreq->rreq;
+ struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
+ ssize_t len;
- _enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len);
-
- if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
- return netfs_write_subrequest_terminated(subreq, subreq->error);
-
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
- stream->issue_write(subreq);
-}
-
-void netfs_reissue_write(struct netfs_io_stream *stream,
- struct netfs_io_subrequest *subreq)
-{
- // TODO: Use encrypted buffer
- bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
- iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
- subreq->content.bvecq, subreq->content.slot,
- subreq->content.offset,
- subreq->len);
- iov_iter_advance(&subreq->io_iter, subreq->transferred);
-
- subreq->retry_count++;
- subreq->error = 0;
- __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
- __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
- netfs_stat(&netfs_n_wh_retry_write_subreq);
- netfs_do_issue_write(stream, subreq);
-}
-
-void netfs_issue_write(struct netfs_io_request *wreq,
- struct netfs_io_stream *stream)
-{
- struct netfs_io_subrequest *subreq = stream->construct;
+ _enter("%zx,{,%u,%u},%u",
+ subreq->len, stream->dispatch_cursor.slot, stream->dispatch_cursor.offset, max_segs);
- if (!subreq)
- return;
+ bvecq_pos_set(&subreq->dispatch_pos, &stream->dispatch_cursor);
/* If we have a write to the cache, we need to round out the first and
 * last entries (only those, as the intervening data lies on virtually
 * contiguous folios) to cache DIO boundaries.
*/
if (subreq->source == NETFS_WRITE_TO_CACHE) {
- struct bvecq_pos tmp_pos;
struct bio_vec *bv;
struct bvecq *bq;
size_t dio_size = wreq->cache_resources.dio_size;
- size_t disp, len;
- int ret;
+ size_t disp, dlen;
- bvecq_pos_set(&tmp_pos, &subreq->dispatch_pos);
- ret = bvecq_extract(&tmp_pos, subreq->len, INT_MAX, &subreq->content.bvecq);
- bvecq_pos_unset(&tmp_pos);
- if (ret < 0) {
- netfs_write_subrequest_terminated(subreq, -ENOMEM);
- return;
- }
+ len = bvecq_extract(&stream->dispatch_cursor, subreq->len, max_segs,
+ &subreq->content.bvecq);
+ if (len < 0)
+ return -ENOMEM;
+
+ _debug("extract %zx/%zx", len, subreq->len);
+ subreq->len = len;
/* Round the first entry down. */
bq = subreq->content.bvecq;
@@ -289,96 +264,276 @@ void netfs_issue_write(struct netfs_io_request *wreq,
while (bq->next)
bq = bq->next;
bv = &bq->bv[bq->nr_slots - 1];
- len = round_up(bv->bv_len, dio_size);
- if (len > bv->bv_len) {
- subreq->len += len - bv->bv_len;
- bv->bv_len = len;
+ dlen = round_up(bv->bv_len, dio_size);
+ if (dlen > bv->bv_len) {
+ subreq->len += dlen - bv->bv_len;
+ bv->bv_len = dlen;
}
} else {
- bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+ bvecq_pos_set(&subreq->content, &stream->dispatch_cursor);
+ len = bvecq_slice(&stream->dispatch_cursor, subreq->len, max_segs,
+ &subreq->nr_segs);
+
+ if (len < subreq->len) {
+ subreq->len = len;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+ }
}
- iov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,
- subreq->content.bvecq, subreq->content.slot,
- subreq->content.offset,
- subreq->len);
+ stream->issue_from += len;
+ stream->buffered -= len;
+ if (stream->buffered == 0) {
+ stream->buffering = false;
+ bvecq_pos_unset(&stream->dispatch_cursor);
+ }
+ /* Order loading the queue before updating the issue_to point */
+ atomic64_set_release(&stream->issued_to, stream->issue_from);
+ return 0;
+}
+
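To make the rounding in the cache branch above concrete, here's the last-entry case with invented numbers:

	/* dio_size = 512, last bv_len = 300:
	 *   dlen = round_up(300, 512) = 512
	 *   subreq->len += 512 - 300 (= 212)
	 *   bv->bv_len = 512
	 * The first entry's offset is rounded down analogously.
	 */
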
+/**
+ * netfs_prepare_write_buffer - Get the buffer for a subrequest
+ * @subreq: The subrequest to get the buffer for
+ * @max_segs: Maximum number of segments in buffer (or INT_MAX)
+ *
+ * Extract a slice of buffer from the stream and attach it to the subrequest as
+ * a bio_vec queue. The maximum amount of data attached is set by
+ * @subreq->len, but this may be shortened if @max_segs would be exceeded.
+ */
+int netfs_prepare_write_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ struct netfs_io_request *rreq = subreq->rreq;
+
+ switch (rreq->origin) {
+ case NETFS_WRITEBACK:
+ case NETFS_WRITETHROUGH:
+ if (test_bit(NETFS_RREQ_RETRYING, &rreq->flags))
+ return netfs_prepare_write_retry_buffer(subreq, max_segs);
+ return netfs_prepare_buffered_write_buffer(subreq, max_segs);
+
+ case NETFS_UNBUFFERED_WRITE:
+ case NETFS_DIO_WRITE:
+ return netfs_prepare_unbuffered_write_buffer(subreq, max_segs);
- stream->construct = NULL;
- netfs_do_issue_write(stream, subreq);
+ case NETFS_WRITEBACK_SINGLE:
+ return netfs_prepare_write_single_buffer(subreq, max_segs);
+
+ case NETFS_PGPRIV2_COPY_TO_CACHE:
+ return netfs_prepare_pgpriv2_write_buffer(subreq, max_segs);
+
+ default:
+ WARN_ON_ONCE(1);
+ return -EIO;
+ }
}
+EXPORT_SYMBOL(netfs_prepare_write_buffer);
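As a usage sketch from the filesystem side (hypothetical myfs_* names; the clamp-then-prepare order follows the doc comment above):

	static int myfs_issue_write(struct netfs_io_subrequest *subreq)
	{
		int ret;

		/* Clamp to what one RPC can carry, then take the buffer slice. */
		subreq->len = umin(subreq->len, MYFS_WSIZE);
		ret = netfs_prepare_write_buffer(subreq, MYFS_MAX_SEGS);
		if (ret < 0)
			return ret;

		return myfs_send_write_rpc(subreq);	/* may return -EIOCBQUEUED */
	}
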
/*
- * Add data to the write subrequest, dispatching each as we fill it up or if it
- * is discontiguous with the previous. We only fill one part at a time so that
- * we can avoid overrunning the credits obtained (cifs) and try to parallelise
- * content-crypto preparation with network writes.
+ * Issue writes for a stream.
*/
-size_t netfs_advance_write(struct netfs_io_request *wreq,
- struct netfs_io_stream *stream,
- loff_t start, size_t len, bool to_eof)
+static int netfs_issue_writes(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_wb_params *params)
{
- struct netfs_io_subrequest *subreq = stream->construct;
- size_t part;
+ struct netfs_write_estimate *estimate = ¶ms->estimates[stream->stream_nr];
+
+ for (;;) {
+ struct netfs_io_subrequest *subreq;
+ int ret;
+
+ subreq = netfs_alloc_write_subreq(wreq, stream);
+ if (!subreq)
+ return -ENOMEM;
- if (!stream->avail) {
- _leave("no write");
- return len;
+ ret = stream->issue_write(subreq);
+ if (ret < 0 && ret != -EIOCBQUEUED)
+ return ret;
+
+ if (stream->buffered == 0) {
+ if (stream->stream_nr == 0)
+ params->notes &= ~NOTE_UPLOAD_STARTED;
+ return 0;
+ }
+
+ if (!(params->notes & NOTE_FLUSH_ANYWAY)) {
+ estimate->issue_at = ULLONG_MAX;
+ estimate->max_segs = INT_MAX;
+ stream->estimate_write(wreq, stream, estimate);
+ if (stream->issue_from + stream->buffered < estimate->issue_at &&
+ estimate->max_segs > 0)
+ return 0;
+ }
+ }
+}
+
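The estimate callback drives when netfs_issue_writes() stops accumulating; a hypothetical implementation (myfs_* names invented) just sets the next issue point and a segment budget:

	static void myfs_estimate_write(struct netfs_io_request *wreq,
					struct netfs_io_stream *stream,
					struct netfs_write_estimate *estimate)
	{
		/* Issue once a full wsize RPC has been buffered... */
		estimate->issue_at = stream->issue_from + wreq->wsize;
		/* ...or when the transport's segment limit would be hit. */
		estimate->max_segs = MYFS_MAX_SEGS;
	}
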
+/*
+ * Issue pending writes on a stream.
+ */
+static int netfs_issue_stream(struct netfs_io_request *wreq,
+ struct netfs_wb_params *params, int s)
+{
+ struct netfs_write_estimate *estimate = ¶ms->estimates[s];
+ struct netfs_io_stream *stream = &wreq->io_streams[s];
+ unsigned long long dirty_start;
+ bool discontig_before = params->notes & NOTE_DISCONTIG_BEFORE;
+ int ret;
+
+ _enter("%x", params->notes);
+
+	/* If the current folio doesn't contribute to this stream, see if we
+	 * need to flush the data the stream has already buffered.
+	 */
+ if (!(params->notes & stream->applicable)) {
+ if (!stream->buffering) {
+ atomic64_set_release(&stream->issued_to,
+ params->folio_start + params->folio_len);
+ return 0;
+ }
+ discontig_before = true;
+ }
+
+ /* Issue writes if we meet a discontiguity before the current folio.
+ * Even if the filesystem can do sparse/vectored writes, we still
+ * generate a subreq per contiguous region rather than generating
+ * separate extent lists.
+ */
+ if (stream->buffering && discontig_before) {
+ params->notes |= NOTE_FLUSH_ANYWAY;
+ ret = netfs_issue_writes(wreq, stream, params);
+ if (ret < 0)
+ return ret;
+ stream->buffering = false;
+ params->notes &= ~NOTE_FLUSH_ANYWAY;
+ }
+
+ if (!(params->notes & stream->applicable)) {
+ atomic64_set_release(&stream->issued_to,
+ params->folio_start + params->folio_len);
+ return 0;
+ }
+
+ /* If we're not currently buffering on this stream, we need to get an
+ * estimate of when we need to issue a write. It might be within the
+ * starting folio.
+ */
+ dirty_start = params->folio_start + params->dirty_offset;
+ if (!stream->buffering) {
+ stream->buffering = true;
+ stream->issue_from = dirty_start;
+ bvecq_pos_set(&stream->dispatch_cursor, ¶ms->dispatch_cursor);
+ estimate->issue_at = ULLONG_MAX;
+ estimate->max_segs = INT_MAX;
+ stream->estimate_write(wreq, stream, estimate);
+ }
+
+ stream->buffered += params->dirty_len;
+ estimate->max_segs--;
+
+ /* Poke the filesystem to issue writes when we hit the limit it set or
+ * if the data ends before the end of the page.
+ */
+ if (params->notes & NOTE_DISCONTIG_AFTER)
+ params->notes |= NOTE_FLUSH_ANYWAY;
+ _debug("[%u] %llx + %zx >= %llx, %u %x",
+ s, stream->issue_from, stream->buffered, estimate->issue_at,
+ estimate->max_segs, params->notes);
+ if (stream->issue_from + stream->buffered >= estimate->issue_at ||
+ estimate->max_segs <= 0 ||
+ (params->notes & NOTE_FLUSH_ANYWAY)) {
+ ret = netfs_issue_writes(wreq, stream, params);
+ if (ret < 0)
+ return ret;
}
- _enter("R=%x[%x]", wreq->debug_id, subreq ? subreq->debug_index : 0);
+ return 0;
+}
+
+/*
+ * See which streams need writes issuing and issue them.
+ */
+static int netfs_issue_streams(struct netfs_io_request *wreq,
+ struct netfs_wb_params *params)
+{
+ int ret = 0, ret2;
+
+ _enter("%x", params->notes);
- if (subreq && start != subreq->start + subreq->len) {
- netfs_issue_write(wreq, stream);
- subreq = NULL;
+ for (int s = 0; s < NR_IO_STREAMS; s++) {
+ ret2 = netfs_issue_stream(wreq, params, s);
+ if (ret2 < 0)
+ ret = ret2;
}
+ return ret;
+}
- if (!stream->construct)
- netfs_prepare_write(wreq, stream, start);
- subreq = stream->construct;
+/*
+ * End the issuing of writes, let the collector know we're done.
+ */
+static void netfs_end_issue_write(struct netfs_io_request *wreq,
+ struct netfs_wb_params *params)
+{
+ bool needs_poke = true;
- part = umin(stream->sreq_max_len - subreq->len, len);
- _debug("part %zx/%zx %zx/%zx", subreq->len, stream->sreq_max_len, part, len);
- subreq->len += part;
- subreq->nr_segs++;
+ params->notes |= NOTE_FLUSH_ANYWAY;
+
+ for (int s = 0; s < NR_IO_STREAMS; s++) {
+ struct netfs_io_stream *stream = &wreq->io_streams[s];
+ int ret;
+
+ if (stream->buffering) {
+ ret = netfs_issue_writes(wreq, stream, params);
+ if (ret < 0) {
+ /* Leave the error somewhere the completion
+ * path can pick it up if there isn't already
+ * another error logged.
+ */
+ cmpxchg(&wreq->error, 0, ret);
+ }
+ stream->buffering = false;
+ }
+ }
+
+ smp_wmb(); /* Write subreq lists before ALL_QUEUED. */
+ set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+
+ for (int s = 0; s < NR_IO_STREAMS; s++) {
+ struct netfs_io_stream *stream = &wreq->io_streams[s];
- if (subreq->len >= stream->sreq_max_len ||
- subreq->nr_segs >= stream->sreq_max_segs ||
- to_eof) {
- netfs_issue_write(wreq, stream);
- subreq = NULL;
+ if (!stream->active)
+ continue;
+ if (!list_empty(&stream->subrequests))
+ needs_poke = false;
}
- return part;
+ if (needs_poke)
+ netfs_wake_collector(wreq);
}
/*
- * Write some of a pending folio data back to the server.
+ * Queue a folio for writeback.
*/
-static int netfs_write_folio(struct netfs_io_request *wreq,
- struct writeback_control *wbc,
- struct folio *folio)
+static int netfs_queue_wb_folio(struct netfs_io_request *wreq,
+ struct writeback_control *wbc,
+ struct folio *folio,
+ struct netfs_wb_params *params)
{
- struct netfs_io_stream *upload = &wreq->io_streams[0];
- struct netfs_io_stream *cache = &wreq->io_streams[1];
- struct netfs_io_stream *stream;
struct netfs_group *fgroup; /* TODO: Use this with ceph */
struct netfs_folio *finfo;
struct bvecq *queue = wreq->load_cursor.bvecq;
unsigned int slot;
size_t fsize = folio_size(folio), flen = fsize, foff = 0;
loff_t fpos = folio_pos(folio), i_size;
- bool to_eof = false, streamw = false;
- bool debug = false;
int ret;
- _enter("");
+ _enter("%x", params->notes);
/* Institute a new bvec queue segment if the current one is full or if
* we encounter a discontiguity. The discontiguity break is important
* when it comes to bulk unlocking folios by file range.
*/
if (bvecq_is_full(queue) ||
- (fpos != wreq->last_end && wreq->last_end > 0)) {
+ (fpos != params->last_end && params->last_end > 0)) {
ret = bvecq_buffer_make_space(&wreq->load_cursor, GFP_NOFS);
if (ret < 0) {
folio_unlock(folio);
@@ -387,10 +542,10 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
queue = wreq->load_cursor.bvecq;
queue->fpos = fpos;
- if (fpos != wreq->last_end)
+ if (fpos != params->last_end)
queue->discontig = true;
- bvecq_pos_move(&wreq->dispatch_cursor, queue);
- wreq->dispatch_cursor.slot = 0;
+ bvecq_pos_move(¶ms->dispatch_cursor, queue);
+ params->dispatch_cursor.slot = 0;
}
/* netfs_perform_write() may shift i_size around the page or from out
@@ -418,23 +573,36 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
if (finfo) {
foff = finfo->dirty_offset;
flen = foff + finfo->dirty_len;
- streamw = true;
+ params->notes |= NOTE_STREAMW;
+ if (foff > 0)
+ params->notes |= NOTE_DISCONTIG_BEFORE;
+ if (flen < fsize)
+ params->notes |= NOTE_DISCONTIG_AFTER;
}
+ if (params->last_end && fpos != params->last_end)
+ params->notes |= NOTE_DISCONTIG_BEFORE;
+ params->last_end = fpos + fsize;
+
if (wreq->origin == NETFS_WRITETHROUGH) {
- to_eof = false;
if (flen > i_size - fpos)
flen = i_size - fpos;
+ /* EOF may be changing. */
} else if (flen > i_size - fpos) {
flen = i_size - fpos;
- if (!streamw)
+ if (!(params->notes & NOTE_STREAMW))
folio_zero_segment(folio, flen, fsize);
- to_eof = true;
+ params->notes |= NOTE_TO_EOF;
} else if (flen == i_size - fpos) {
- to_eof = true;
+ params->notes |= NOTE_TO_EOF;
}
flen -= foff;
+ params->folio_start = fpos;
+ params->folio_len = fsize;
+ params->dirty_offset = foff;
+ params->dirty_len = flen;
+
_debug("folio %zx %zx %zx", foff, flen, fsize);
/* Deal with discontinuities in the stream of dirty pages. These can
@@ -454,22 +622,31 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
* write-back group.
*/
if (fgroup == NETFS_FOLIO_COPY_TO_CACHE) {
- netfs_issue_write(wreq, upload);
+ if (!(params->notes & NOTE_CACHE_AVAIL)) {
+ trace_netfs_folio(folio, netfs_folio_trace_cancel_copy);
+ goto cancel_folio;
+ }
+ params->notes |= NOTE_CACHE_COPY;
+ trace_netfs_folio(folio, netfs_folio_trace_store_copy);
} else if (fgroup != wreq->group) {
/* We can't write this page to the server yet. */
kdebug("wrong group");
- folio_redirty_for_writepage(wbc, folio);
- folio_unlock(folio);
- netfs_issue_write(wreq, upload);
- netfs_issue_write(wreq, cache);
- return 0;
+ goto skip_folio;
+ } else if (!(params->notes & (NOTE_UPLOAD_AVAIL | NOTE_CACHE_AVAIL))) {
+ trace_netfs_folio(folio, netfs_folio_trace_cancel_store);
+ goto cancel_folio_discard;
+ } else {
+ if (params->notes & NOTE_UPLOAD_STARTED) {
+ params->notes |= NOTE_UPLOAD;
+ trace_netfs_folio(folio, netfs_folio_trace_store_plus);
+ } else {
+ params->notes |= NOTE_UPLOAD | NOTE_UPLOAD_STARTED;
+ trace_netfs_folio(folio, netfs_folio_trace_store);
+ }
+ if (params->notes & NOTE_CACHE_AVAIL)
+ params->notes |= NOTE_CACHE_COPY;
}
- if (foff > 0)
- netfs_issue_write(wreq, upload);
- if (streamw)
- netfs_issue_write(wreq, cache);
-
/* Flip the page to the writeback state and unlock. If we're called
* from write-through, then the page has already been put into the wb
* state.
@@ -478,129 +655,37 @@ static int netfs_write_folio(struct netfs_io_request *wreq,
folio_start_writeback(folio);
folio_unlock(folio);
- if (fgroup == NETFS_FOLIO_COPY_TO_CACHE) {
- if (!cache->avail) {
- trace_netfs_folio(folio, netfs_folio_trace_cancel_copy);
- netfs_issue_write(wreq, upload);
- netfs_folio_written_back(folio);
- return 0;
- }
- trace_netfs_folio(folio, netfs_folio_trace_store_copy);
- } else if (!upload->avail && !cache->avail) {
- trace_netfs_folio(folio, netfs_folio_trace_cancel_store);
- netfs_folio_written_back(folio);
- return 0;
- } else if (!upload->construct) {
- trace_netfs_folio(folio, netfs_folio_trace_store);
- } else {
- trace_netfs_folio(folio, netfs_folio_trace_store_plus);
- }
-
/* Attach the folio to the rolling buffer. */
slot = queue->nr_slots;
- bvec_set_folio(&queue->bv[slot], folio, flen, 0);
+ bvec_set_folio(&queue->bv[slot], folio, flen, foff);
queue->nr_slots = slot + 1;
wreq->load_cursor.slot = slot + 1;
wreq->load_cursor.offset = 0;
- wreq->last_end = fpos + foff + flen;
trace_netfs_bv_slot(queue, slot);
+ trace_netfs_wback(wreq, folio, params->notes);
- /* Move the submission point forward to allow for write-streaming data
- * not starting at the front of the page. We don't do write-streaming
- * with the cache as the cache requires DIO alignment.
- *
- * Also skip uploading for data that's been read and just needs copying
- * to the cache.
- */
- for (int s = 0; s < NR_IO_STREAMS; s++) {
- stream = &wreq->io_streams[s];
- stream->submit_off = 0;
- stream->submit_len = flen;
- if (!stream->avail ||
- (stream->source == NETFS_WRITE_TO_CACHE && streamw) ||
- (stream->source == NETFS_UPLOAD_TO_SERVER &&
- fgroup == NETFS_FOLIO_COPY_TO_CACHE)) {
- stream->submit_off = UINT_MAX;
- stream->submit_len = 0;
- }
- }
-
- /* Attach the folio to one or more subrequests. For a big folio, we
- * could end up with thousands of subrequests if the wsize is small -
- * but we might need to wait during the creation of subrequests for
- * network resources (eg. SMB credits).
- */
- for (;;) {
- ssize_t part;
- size_t lowest_off = ULONG_MAX;
- int choose_s = -1;
-
- /* Always add to the lowest-submitted stream first. */
- for (int s = 0; s < NR_IO_STREAMS; s++) {
- stream = &wreq->io_streams[s];
- if (stream->submit_len > 0 &&
- stream->submit_off < lowest_off) {
- lowest_off = stream->submit_off;
- choose_s = s;
- }
- }
-
- if (choose_s < 0)
- break;
- stream = &wreq->io_streams[choose_s];
-
- /* Advance the cursor. */
- wreq->dispatch_cursor.offset = stream->submit_off;
-
- atomic64_set(&wreq->issued_to, fpos + foff + stream->submit_off);
- part = netfs_advance_write(wreq, stream, fpos + foff + stream->submit_off,
- stream->submit_len, to_eof);
- stream->submit_off += part;
- if (part > stream->submit_len)
- stream->submit_len = 0;
- else
- stream->submit_len -= part;
- if (part > 0)
- debug = true;
- }
-
- bvecq_pos_step(&wreq->dispatch_cursor);
- /* Order loading the queue before updating the issue_to point */
- atomic64_set_release(&wreq->issued_to, fpos + fsize);
-
- if (!debug)
- kdebug("R=%x: No submit", wreq->debug_id);
-
- if (foff + flen < fsize)
- for (int s = 0; s < NR_IO_STREAMS; s++)
- netfs_issue_write(wreq, &wreq->io_streams[s]);
-
- _leave(" = 0");
+out:
+ _leave(" = %x", params->notes);
return 0;
-}
-/*
- * End the issuing of writes, letting the collector know we're done.
- */
-static void netfs_end_issue_write(struct netfs_io_request *wreq)
-{
- bool needs_poke = true;
-
- smp_wmb(); /* Write subreq lists before ALL_QUEUED. */
- set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
-
- for (int s = 0; s < NR_IO_STREAMS; s++) {
- struct netfs_io_stream *stream = &wreq->io_streams[s];
-
- if (!stream->active)
- continue;
- if (!list_empty(&stream->subrequests))
- needs_poke = false;
- netfs_issue_write(wreq, stream);
- }
-
- if (needs_poke)
- netfs_wake_collector(wreq);
+skip_folio:
+	folio_redirty_for_writepage(wbc, folio);
+	folio_unlock(folio);
+ params->notes |= NOTE_DISCONTIG_BEFORE;
+ goto out;
+cancel_folio_discard:
+ netfs_put_group(fgroup);
+cancel_folio:
+ folio_detach_private(folio);
+ kfree(finfo);
+ folio_unlock(folio);
+ folio_cancel_dirty(folio);
+ if (wreq->origin == NETFS_WRITETHROUGH)
+ folio_end_writeback(folio);
+ params->notes |= NOTE_DISCONTIG_BEFORE;
+ goto out;
}
/*
@@ -611,6 +696,7 @@ int netfs_writepages(struct address_space *mapping,
{
struct netfs_inode *ictx = netfs_inode(mapping->host);
struct netfs_io_request *wreq = NULL;
+ struct netfs_wb_params params = {};
struct folio *folio;
int error = 0;
@@ -636,35 +722,48 @@ int netfs_writepages(struct address_space *mapping,
if (bvecq_buffer_init(&wreq->load_cursor, GFP_NOFS) < 0)
goto nomem;
- bvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);
- bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+ bvecq_pos_set(¶ms.dispatch_cursor, &wreq->load_cursor);
+ bvecq_pos_set(&wreq->collect_cursor, &wreq->load_cursor);
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
trace_netfs_write(wreq, netfs_write_trace_writeback);
netfs_stat(&netfs_n_wh_writepages);
- do {
- _debug("wbiter %lx %llx", folio->index, atomic64_read(&wreq->issued_to));
+ if (wreq->io_streams[1].avail)
+ params.notes |= NOTE_CACHE_AVAIL;
- /* It appears we don't have to handle cyclic writeback wrapping. */
- WARN_ON_ONCE(wreq && folio_pos(folio) < atomic64_read(&wreq->issued_to));
+ do {
+ _debug("wbiter %lx", folio->index);
if (netfs_folio_group(folio) != NETFS_FOLIO_COPY_TO_CACHE &&
unlikely(!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))) {
set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
wreq->netfs_ops->begin_writeback(wreq);
+ if (wreq->io_streams[0].avail) {
+ params.notes |= NOTE_UPLOAD_AVAIL;
+ /* Order setting the active flag after other fields. */
+ smp_store_release(&wreq->io_streams[0].active, true);
+ }
}
- error = netfs_write_folio(wreq, wbc, folio);
+ params.notes &= NOTES__KEEP_MASK;
+ error = netfs_queue_wb_folio(wreq, wbc, folio, ¶ms);
+ if (error < 0)
+ break;
+ error = netfs_issue_streams(wreq, ¶ms);
if (error < 0)
break;
+
+ bvecq_pos_step(¶ms.dispatch_cursor);
} while ((folio = writeback_iter(mapping, wbc, folio, &error)));
- netfs_end_issue_write(wreq);
+ netfs_end_issue_write(wreq, ¶ms);
mutex_unlock(&ictx->wb_lock);
bvecq_pos_unset(&wreq->load_cursor);
- bvecq_pos_unset(&wreq->dispatch_cursor);
+ bvecq_pos_unset(¶ms.dispatch_cursor);
+ for (int i = 0; i < NR_IO_STREAMS; i++)
+ bvecq_pos_unset(&wreq->io_streams[i].dispatch_cursor);
netfs_wake_collector(wreq);
netfs_put_request(wreq, netfs_rreq_trace_put_return);
@@ -686,32 +785,55 @@ EXPORT_SYMBOL(netfs_writepages);
/*
* Begin a write operation for writing through the pagecache.
*/
-struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len)
+struct netfs_writethrough *netfs_begin_writethrough(struct kiocb *iocb, size_t len)
{
+ struct netfs_writethrough *wthru = NULL;
struct netfs_io_request *wreq = NULL;
struct netfs_inode *ictx = netfs_inode(file_inode(iocb->ki_filp));
+	wthru = kzalloc(sizeof(*wthru), GFP_KERNEL);
+ if (!wthru)
+ return ERR_PTR(-ENOMEM);
+
mutex_lock(&ictx->wb_lock);
wreq = netfs_create_write_req(iocb->ki_filp->f_mapping, iocb->ki_filp,
iocb->ki_pos, NETFS_WRITETHROUGH);
if (IS_ERR(wreq)) {
mutex_unlock(&ictx->wb_lock);
- return wreq;
+ kfree(wthru);
+ return ERR_CAST(wreq);
}
+ wthru->wreq = wreq;
if (bvecq_buffer_init(&wreq->load_cursor, GFP_NOFS) < 0) {
netfs_put_failed_request(wreq);
mutex_unlock(&ictx->wb_lock);
+ kfree(wthru);
return ERR_PTR(-ENOMEM);
}
- bvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);
- bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+ bvecq_pos_set(&wthru->params.dispatch_cursor, &wreq->load_cursor);
+ bvecq_pos_set(&wreq->collect_cursor, &wreq->load_cursor);
+
+ if (wreq->io_streams[1].avail)
+ wthru->params.notes |= NOTE_CACHE_AVAIL;
wreq->io_streams[0].avail = true;
trace_netfs_write(wreq, netfs_write_trace_writethrough);
- return wreq;
+ if (!is_sync_kiocb(iocb))
+ wreq->iocb = iocb;
+
+ if (unlikely(!test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))) {
+ set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
+	/* Don't call ->begin_writeback() as ->init_request() already got the file pointer. */
+ if (wreq->io_streams[0].avail) {
+ wthru->params.notes |= NOTE_UPLOAD_AVAIL;
+ /* Order setting the active flag after other fields. */
+ smp_store_release(&wreq->io_streams[0].active, true);
+ }
+ }
+ return wthru;
}
/*
@@ -720,14 +842,17 @@ struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len
* to the request. If we've added more than wsize then we need to create a new
* subrequest.
*/
-int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
- struct folio *folio, size_t copied, bool to_page_end,
- struct folio **writethrough_cache)
+int netfs_advance_writethrough(struct netfs_writethrough *wthru,
+ struct writeback_control *wbc,
+ struct folio *folio, size_t copied, bool to_page_end)
{
+ struct netfs_io_request *wreq = wthru->wreq;
+ int ret;
+
_enter("R=%x ws=%u cp=%zu tp=%u",
wreq->debug_id, wreq->wsize, copied, to_page_end);
- if (!*writethrough_cache) {
+ if (!wthru->in_progress) {
if (folio_test_dirty(folio))
/* Sigh. mmap. */
folio_clear_dirty_for_io(folio);
@@ -738,63 +863,113 @@ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_c
trace_netfs_folio(folio, netfs_folio_trace_wthru);
else
trace_netfs_folio(folio, netfs_folio_trace_wthru_plus);
- *writethrough_cache = folio;
+ wthru->in_progress = folio;
}
wreq->len += copied;
if (!to_page_end)
return 0;
- *writethrough_cache = NULL;
- return netfs_write_folio(wreq, wbc, folio);
+ wthru->in_progress = NULL;
+ wthru->params.notes &= NOTES__KEEP_MASK;
+ ret = netfs_queue_wb_folio(wreq, wbc, folio, &wthru->params);
+ if (ret < 0)
+ return ret;
+ return netfs_issue_streams(wreq, &wthru->params);
}
/*
* End a write operation used when writing through the pagecache.
*/
-ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
- struct folio *writethrough_cache)
+ssize_t netfs_end_writethrough(struct netfs_writethrough *wthru,
+ struct writeback_control *wbc)
{
+ struct netfs_io_request *wreq = wthru->wreq;
struct netfs_inode *ictx = netfs_inode(wreq->inode);
ssize_t ret;
_enter("R=%x", wreq->debug_id);
- if (writethrough_cache)
- netfs_write_folio(wreq, wbc, writethrough_cache);
+ if (wthru->in_progress) {
+ wthru->params.notes &= NOTES__KEEP_MASK;
+ ret = netfs_queue_wb_folio(wreq, wbc, wthru->in_progress, &wthru->params);
+ if (ret == 0)
+ ret = netfs_issue_streams(wreq, &wthru->params);
+ wthru->in_progress = NULL;
+ }
- netfs_end_issue_write(wreq);
+ netfs_end_issue_write(wreq, &wthru->params);
mutex_unlock(&ictx->wb_lock);
bvecq_pos_unset(&wreq->load_cursor);
- bvecq_pos_unset(&wreq->dispatch_cursor);
+ bvecq_pos_unset(&wthru->params.dispatch_cursor);
+ for (int i = 0; i < NR_IO_STREAMS; i++)
+ bvecq_pos_unset(&wreq->io_streams[i].dispatch_cursor);
if (wreq->iocb)
ret = -EIOCBQUEUED;
else
ret = netfs_wait_for_write(wreq);
netfs_put_request(wreq, netfs_rreq_trace_put_return);
+ kfree(wthru);
return ret;
}
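To illustrate how these three entry points fit together: the old
(wreq, writethrough_cache) pair is replaced by a single
netfs_writethrough handle. A rough sketch of the calling sequence
(the copy loop is a hypothetical stand-in for the caller's own loop
and error handling is elided):

	struct netfs_writethrough *wthru;
	ssize_t ret = 0;

	wthru = netfs_begin_writethrough(iocb, len);
	if (IS_ERR(wthru))
		return PTR_ERR(wthru);

	while (have_more_data()) {		/* hypothetical */
		copied = copy_into_folio(folio);	/* hypothetical */
		ret = netfs_advance_writethrough(wthru, wbc, folio,
						 copied, to_page_end);
		if (ret < 0)
			break;
	}

	/* This waits (or returns -EIOCBQUEUED for async iocbs) and
	 * frees the handle, even on the error path. */
	return netfs_end_writethrough(wthru, wbc);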
+/*
+ * Prepare a buffer for a single monolithic write.
+ */
+static int netfs_prepare_write_single_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ struct netfs_io_request *wreq = subreq->rreq;
+ struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
+ struct bio_vec *bv;
+ struct bvecq *bq;
+ size_t dio_size = wreq->cache_resources.dio_size;
+ size_t dlen;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &stream->dispatch_cursor);
+ bvecq_pos_set(&subreq->content, &subreq->dispatch_pos);
+
+ /* Round the end of the last entry up. */
+ bq = subreq->content.bvecq;
+ while (bq->next)
+ bq = bq->next;
+ bv = &bq->bv[bq->nr_segs - 1];
+ dlen = round_up(bv->bv_len, dio_size);
+ if (dlen > bv->bv_len) {
+ subreq->len += dlen - bv->bv_len;
+ bv->bv_len = dlen;
+ }
+
+ stream->buffered = 0;
+ stream->issue_from = subreq->len;
+ wreq->submitted = subreq->len;
+ return 0;
+}
+
/**
* netfs_writeback_single - Write back a monolithic payload
* @mapping: The mapping to write from
* @wbc: Hints from the VM
- * @iter: Data to write.
+ * @iter: Data to write
+ * @len: Amount of data to write
*
* Write a monolithic, non-pagecache object back to the server and/or
* the cache. There's a maximum of one subrequest per stream.
*/
int netfs_writeback_single(struct address_space *mapping,
struct writeback_control *wbc,
- struct iov_iter *iter)
+ struct iov_iter *iter,
+ size_t len)
{
struct netfs_io_request *wreq;
struct netfs_inode *ictx = netfs_inode(mapping->host);
int ret;
+ _enter("%zx,%zx", iov_iter_count(iter), len);
+
if (!mutex_trylock(&ictx->wb_lock)) {
if (wbc->sync_mode == WB_SYNC_NONE) {
netfs_stat(&netfs_n_wb_lock_skip);
@@ -809,23 +984,24 @@ int netfs_writeback_single(struct address_space *mapping,
ret = PTR_ERR(wreq);
goto couldnt_start;
}
- wreq->len = iov_iter_count(iter);
- ret = netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_cursor.bvecq, 0);
+ wreq->len = len;
+
+ ret = netfs_extract_iter(iter, len, INT_MAX, 0, &wreq->load_cursor.bvecq, 0);
if (ret < 0)
goto cleanup_free;
- if (ret < wreq->len) {
+ if (ret < len) {
ret = -EIO;
goto cleanup_free;
}
- bvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);
+ bvecq_pos_set(&wreq->collect_cursor, &wreq->load_cursor);
__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
trace_netfs_write(wreq, netfs_write_trace_writeback_single);
netfs_stat(&netfs_n_wh_writepages);
- if (__test_and_set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
+ if (test_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags))
wreq->netfs_ops->begin_writeback(wreq);
for (int s = 0; s < NR_IO_STREAMS; s++) {
@@ -835,13 +1011,22 @@ int netfs_writeback_single(struct address_space *mapping,
if (!stream->avail)
continue;
- netfs_prepare_write(wreq, stream, 0);
+ stream->issue_from = 0;
+ stream->buffered = len;
+
+ subreq = netfs_alloc_write_subreq(wreq, stream);
+ if (!subreq) {
+ ret = -ENOMEM;
+ break;
+ }
+
+ bvecq_pos_set(&stream->dispatch_cursor, &wreq->load_cursor);
- subreq = stream->construct;
- subreq->len = wreq->len;
- stream->submit_len = subreq->len;
+ ret = stream->issue_write(subreq);
+ if (ret < 0 && ret != -EIOCBQUEUED)
+ netfs_write_subrequest_terminated(subreq, ret);
- netfs_issue_write(wreq, stream);
+ bvecq_pos_unset(&stream->dispatch_cursor);
}
wreq->submitted = wreq->len;
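Note the padding step in netfs_prepare_write_single_buffer() above:
the final bio_vec is rounded out to the cache's DIO granularity so
that the tail of a monolithic object can be written with direct I/O.
A worked example with illustrative numbers (this assumes the backing
folio actually extends to the rounded length):

	size_t dio_size = 0x1000;			/* 4KiB DIO blocks */
	/* bv->bv_len == 0x2c00 on entry */
	size_t dlen = round_up(bv->bv_len, dio_size);	/* -> 0x3000 */

	if (dlen > bv->bv_len) {
		subreq->len += dlen - bv->bv_len;	/* += 0x400 */
		bv->bv_len = dlen;
	}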
diff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c
index 5df5c34d4610..096ddf7a2e5c 100644
--- a/fs/netfs/write_retry.c
+++ b/fs/netfs/write_retry.c
@@ -12,12 +12,43 @@
#include "internal.h"
/*
- * Perform retries on the streams that need it.
+ * Prepare the write buffer for a retry. We can't necessarily reuse the write
+ * buffer from the previous run of a subrequest because the filesystem is
+ * permitted to modify it (add headers/trailers, encrypt it). Further, the
+ * subrequest may now be a different size (e.g. cifs has to renegotiate the
+ * maximum transfer size). Also, we can't look at *stream as that may still
+ * refer to the source material being broken up into original subrequests.
+ */
+int netfs_prepare_write_retry_buffer(struct netfs_io_subrequest *subreq,
+ unsigned int max_segs)
+{
+ struct netfs_io_request *wreq = subreq->rreq;
+ struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
+ size_t len;
+
+ bvecq_pos_set(&subreq->dispatch_pos, &wreq->retry_cursor);
+ bvecq_pos_set(&subreq->content, &wreq->retry_cursor);
+ len = bvecq_slice(&wreq->retry_cursor, subreq->len, max_segs, &subreq->nr_segs);
+
+ if (len < subreq->len) {
+ subreq->len = len;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_limited);
+ }
+
+ stream->issue_from += len;
+ stream->buffered -= len;
+ if (stream->buffered == 0)
+ bvecq_pos_unset(&wreq->retry_cursor);
+ return 0;
+}
+
+/*
+ * Perform retries on the streams that need it. This only has to deal with
+ * buffered writes; unbuffered write retry is handled in direct_write.c.
*/
static void netfs_retry_write_stream(struct netfs_io_request *wreq,
struct netfs_io_stream *stream)
{
- struct bvecq_pos dispatch_cursor = {};
struct list_head *next;
_enter("R=%x[%x:]", wreq->debug_id, stream->stream_nr);
@@ -32,30 +63,14 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
if (unlikely(stream->failed))
return;
- /* If there's no renegotiation to do, just resend each failed subreq. */
- if (!stream->prepare_write) {
- struct netfs_io_subrequest *subreq;
-
- list_for_each_entry(subreq, &stream->subrequests, rreq_link) {
- if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
- break;
- if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
- netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- netfs_reissue_write(stream, subreq);
- }
- }
- return;
- }
-
next = stream->subrequests.next;
do {
struct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp;
unsigned long long start, len;
- size_t part;
- bool boundary = false;
+ int ret;
- bvecq_pos_unset(&dispatch_cursor);
+ bvecq_pos_unset(&wreq->retry_cursor);
/* Go through the stream and find the next span of contiguous
* data that we then rejig (cifs, for example, needs the wsize
@@ -73,7 +88,6 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
list_for_each_continue(next, &stream->subrequests) {
subreq = list_entry(next, struct netfs_io_subrequest, rreq_link);
if (subreq->start + subreq->transferred != start + len ||
- test_bit(NETFS_SREQ_BOUNDARY, &subreq->flags) ||
!test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags))
break;
to = subreq;
@@ -83,43 +97,40 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
/* Determine the set of buffers we're going to use. Each
* subreq gets a subset of a single overall contiguous buffer.
*/
- bvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);
- bvecq_pos_advance(&dispatch_cursor, from->transferred);
+ bvecq_pos_transfer(&wreq->retry_cursor, &from->dispatch_pos);
+ bvecq_pos_advance(&wreq->retry_cursor, from->transferred);
+ wreq->retry_start = start;
+ wreq->retry_buffered = len;
/* Work through the sublist. */
subreq = from;
list_for_each_entry_from(subreq, &stream->subrequests, rreq_link) {
- if (!len)
+ if (!wreq->retry_buffered)
break;
- subreq->start = start;
- subreq->len = len;
- __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
- trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
-
bvecq_pos_unset(&subreq->dispatch_pos);
bvecq_pos_unset(&subreq->content);
+ subreq->content.bvecq = NULL;
+ subreq->content.slot = 0;
+ subreq->content.offset = 0;
- /* Renegotiate max_len (wsize) */
- stream->sreq_max_len = len;
- stream->sreq_max_segs = INT_MAX;
- stream->prepare_write(subreq);
-
- bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
- part = bvecq_slice(&dispatch_cursor,
- umin(len, stream->sreq_max_len),
- stream->sreq_max_segs,
- &subreq->nr_segs);
- subreq->len = subreq->transferred + part;
- subreq->transferred = 0;
- len -= part;
- start += part;
- if (len && subreq == to &&
- __test_and_clear_bit(NETFS_SREQ_BOUNDARY, &to->flags))
- boundary = true;
-
+ __clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);
+ __clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+ __clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ __set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+ subreq->start = wreq->retry_start;
+ subreq->len = wreq->retry_buffered;
+ subreq->transferred = 0;
+ subreq->retry_count += 1;
+ subreq->error = 0;
+
+ netfs_stat(&netfs_n_wh_retry_write_subreq);
+ trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
- netfs_reissue_write(stream, subreq);
+ ret = stream->issue_write(subreq);
+ if (ret < 0 && ret != -EIOCBQUEUED)
+ netfs_write_subrequest_terminated(subreq, ret);
+
if (subreq == to)
break;
}
@@ -160,12 +171,9 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
to = list_next_entry(to, rreq_link);
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
- stream->sreq_max_len = len;
- stream->sreq_max_segs = INT_MAX;
switch (stream->source) {
case NETFS_UPLOAD_TO_SERVER:
netfs_stat(&netfs_n_wh_upload);
- stream->sreq_max_len = umin(len, wreq->wsize);
break;
case NETFS_WRITE_TO_CACHE:
netfs_stat(&netfs_n_wh_write);
@@ -174,32 +182,16 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,
WARN_ON_ONCE(1);
}
- stream->prepare_write(subreq);
-
- bvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);
- part = bvecq_slice(&dispatch_cursor,
- umin(len, stream->sreq_max_len),
- stream->sreq_max_segs,
- &subreq->nr_segs);
- subreq->len = subreq->transferred + part;
-
- len -= part;
- start += part;
- if (!len && boundary) {
- __set_bit(NETFS_SREQ_BOUNDARY, &to->flags);
- boundary = false;
- }
-
- netfs_reissue_write(stream, subreq);
- if (!len)
- break;
+ ret = stream->issue_write(subreq);
+ if (ret < 0 && ret != -EIOCBQUEUED)
+ netfs_write_subrequest_terminated(subreq, ret);
} while (len);
} while (!list_is_head(next, &stream->subrequests));
out:
- bvecq_pos_unset(&dispatch_cursor);
+ bvecq_pos_unset(&wreq->retry_cursor);
}
/*
@@ -237,4 +229,6 @@ void netfs_retry_writes(struct netfs_io_request *wreq)
netfs_retry_write_stream(wreq, stream);
}
}
}
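For reference, the slicing model used above: bvecq_slice() consumes
from a cursor, advancing it and returning the number of bytes
actually covered, clipped by the segment limit. A sketch of two
subrequests carving consecutive slices from one retry cursor (the
sizes are illustrative):

	struct bvecq_pos cursor;	/* positioned at the retry start */
	unsigned int nr_segs;
	size_t part1, part2;

	/* May return less than asked for if max_segs clips it. */
	part1 = bvecq_slice(&cursor, 256 * 1024, max_segs, &nr_segs);

	/* Continues exactly where the first slice ended because the
	 * cursor was advanced by the first call. */
	part2 = bvecq_slice(&cursor, 256 * 1024, max_segs, &nr_segs);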
diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig
index 12cb0ca738af..ae463867cf01 100644
--- a/fs/nfs/Kconfig
+++ b/fs/nfs/Kconfig
@@ -173,6 +173,7 @@ config NFS_FSCACHE
bool "Provide NFS client caching support"
depends on NFS_FS
select NETFS_SUPPORT
+ select NETFS_PGPRIV2
select FSCACHE
help
Say Y here if you want NFS data to be cached locally on disc through
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index 9b7fdad4a920..bc82821d77a3 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -273,8 +273,6 @@ static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *fi
rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id);
/* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */
__set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags);
- rreq->io_streams[0].sreq_max_len = NFS_SB(rreq->inode->i_sb)->rsize;
-
return 0;
}
@@ -296,8 +294,9 @@ static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sre
return netfs;
}
-static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
+static int nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
{
+ struct netfs_io_request *rreq = sreq->rreq;
struct nfs_netfs_io_data *netfs;
struct nfs_pageio_descriptor pgio;
struct inode *inode = sreq->rreq->inode;
@@ -307,6 +306,13 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
pgoff_t start, last;
int err;
+ if (sreq->len > NFS_SB(rreq->inode->i_sb)->rsize)
+ sreq->len = NFS_SB(rreq->inode->i_sb)->rsize;
+
+ err = netfs_prepare_read_buffer(sreq, INT_MAX);
+ if (err < 0)
+ return err;
+
start = (sreq->start + sreq->transferred) >> PAGE_SHIFT;
last = ((sreq->start + sreq->len - sreq->transferred - 1) >> PAGE_SHIFT);
@@ -314,14 +320,15 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
&nfs_async_read_completion_ops);
netfs = nfs_netfs_alloc(sreq);
- if (!netfs) {
- sreq->error = -ENOMEM;
- return netfs_read_subreq_terminated(sreq);
- }
+ if (!netfs)
+ return -ENOMEM;
+
+ /* After this point, we're not allowed to return an error. */
+ netfs_mark_read_submission(sreq);
pgio.pg_netfs = netfs; /* used in completion */
- xa_for_each_range(&sreq->rreq->mapping->i_pages, idx, page, start, last) {
+ xa_for_each_range(&rreq->mapping->i_pages, idx, page, start, last) {
/* nfs_read_add_folio() may schedule() due to pNFS layout and other RPCs */
err = nfs_read_add_folio(&pgio, ctx, page_folio(page));
if (err < 0) {
@@ -332,6 +339,7 @@ static void nfs_netfs_issue_read(struct netfs_io_subrequest *sreq)
out:
nfs_pageio_complete_read(&pgio);
nfs_netfs_put(netfs);
+ return -EIOCBQUEUED;
}
void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr)
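The nfs conversion illustrates the new ->issue_read() contract: a
negative error may be returned up until netfs_mark_read_submission()
is called; after that point, completion must be signalled through
netfs_read_subreq_terminated() and the op returns -EIOCBQUEUED. A
minimal skeleton (myfs_send_read_rpc() is a hypothetical transport
call):

	static int myfs_issue_read(struct netfs_io_subrequest *sreq)
	{
		int err;

		err = netfs_prepare_read_buffer(sreq, INT_MAX);
		if (err < 0)
			return err;	/* netfslib cleans up */

		/* Point of no return: errors must now be reported
		 * asynchronously. */
		netfs_mark_read_submission(sreq);

		err = myfs_send_read_rpc(sreq);	/* hypothetical */
		if (err < 0) {
			sreq->error = err;
			netfs_read_subreq_terminated(sreq);
		}
		return -EIOCBQUEUED;
	}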
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index 3990a9012264..dc9120802edb 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1466,8 +1466,7 @@ cifs_readv_callback(struct TCP_Server_Info *server, struct mid_q_entry *mid)
struct netfs_inode *ictx = netfs_inode(rdata->rreq->inode);
struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink);
struct smb_rqst rqst = { .rq_iov = rdata->iov,
- .rq_nvec = 1,
- .rq_iter = rdata->subreq.io_iter };
+ .rq_nvec = 1 };
struct cifs_credits credits = {
.value = 1,
.instance = 0,
@@ -1481,6 +1480,11 @@ cifs_readv_callback(struct TCP_Server_Info *server, struct mid_q_entry *mid)
__func__, mid->mid, mid->mid_state, rdata->result,
rdata->subreq.len);
+ if (rdata->got_bytes)
+ iov_iter_bvec_queue(&rqst.rq_iter, ITER_DEST,
+ rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+ rdata->subreq.content.offset, rdata->subreq.len);
+
switch (mid->mid_state) {
case MID_RESPONSE_RECEIVED:
/* result already set, check signature */
@@ -2002,7 +2006,10 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
rqst.rq_iov = iov;
rqst.rq_nvec = 1;
- rqst.rq_iter = wdata->subreq.io_iter;
+
+ iov_iter_bvec_queue(&rqst.rq_iter, ITER_SOURCE,
+ wdata->subreq.content.bvecq, wdata->subreq.content.slot,
+ wdata->subreq.content.offset, wdata->subreq.len);
cifs_dbg(FYI, "async write at %llu %zu bytes\n",
wdata->subreq.start, wdata->subreq.len);
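A recurring idiom in the cifs changes: with subreq->io_iter gone, an
iov_iter over the buffer is built on demand from the (bvecq, slot,
offset) triple in subreq->content. The iterator is disposable (the
bvecq chain holds the page references), so it can be rebuilt as often
as needed, e.g. when a message has to be resent:

	struct iov_iter iter;

	/* One-shot iterator over the subrequest's content chain. */
	iov_iter_bvec_queue(&iter, ITER_SOURCE,
			    wdata->subreq.content.bvecq,
			    wdata->subreq.content.slot,
			    wdata->subreq.content.offset,
			    wdata->subreq.len);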
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index cffcf82c1b69..a933c12b39ea 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -44,18 +44,34 @@ static int cifs_reopen_file(struct cifsFileInfo *cfile, bool can_flush);
* Prepare a subrequest to upload to the server. We need to allocate credits
* so that we know the maximum amount of data that we can include in it.
*/
-static void cifs_prepare_write(struct netfs_io_subrequest *subreq)
+static int cifs_estimate_write(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate)
+{
+ struct cifs_sb_info *cifs_sb = CIFS_SB(wreq->inode->i_sb);
+
+ estimate->issue_at = stream->issue_from + cifs_sb->ctx->wsize;
+ return 0;
+}
+
+/*
+ * Issue a subrequest to upload to the server.
+ */
+static int cifs_issue_write(struct netfs_io_subrequest *subreq)
{
struct cifs_io_subrequest *wdata =
container_of(subreq, struct cifs_io_subrequest, subreq);
struct cifs_io_request *req = wdata->req;
- struct netfs_io_stream *stream = &req->rreq.io_streams[subreq->stream_nr];
struct TCP_Server_Info *server;
struct cifsFileInfo *open_file = req->cfile;
- struct cifs_sb_info *cifs_sb = CIFS_SB(wdata->rreq->inode->i_sb);
- size_t wsize = req->rreq.wsize;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(subreq->rreq->inode->i_sb);
+ unsigned int max_segs = INT_MAX;
+ size_t len;
int rc;
+ if (cifs_forced_shutdown(cifs_sb))
+ return smb_EIO(smb_eio_trace_forced_shutdown);
+
if (!wdata->have_xid) {
wdata->xid = get_xid();
wdata->have_xid = true;
@@ -74,18 +90,16 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq)
if (rc < 0) {
if (rc == -EAGAIN)
goto retry;
- subreq->error = rc;
- return netfs_prepare_write_failed(subreq);
+ return rc;
}
}
- rc = server->ops->wait_mtu_credits(server, wsize, &stream->sreq_max_len,
- &wdata->credits);
- if (rc < 0) {
- subreq->error = rc;
- return netfs_prepare_write_failed(subreq);
- }
+ len = umin(subreq->len, cifs_sb->ctx->wsize);
+ rc = server->ops->wait_mtu_credits(server, len, &len, &wdata->credits);
+ if (rc < 0)
+ return rc;
+ subreq->len = len;
wdata->credits.rreq_debug_id = subreq->rreq->debug_id;
wdata->credits.rreq_debug_index = subreq->debug_index;
wdata->credits.in_flight_check = 1;
@@ -101,39 +115,29 @@ static void cifs_prepare_write(struct netfs_io_subrequest *subreq)
const struct smbdirect_socket_parameters *sp =
smbd_get_parameters(server->smbd_conn);
- stream->sreq_max_segs = sp->max_frmr_depth;
+ max_segs = sp->max_frmr_depth;
}
#endif
-}
-
-/*
- * Issue a subrequest to upload to the server.
- */
-static void cifs_issue_write(struct netfs_io_subrequest *subreq)
-{
- struct cifs_io_subrequest *wdata =
- container_of(subreq, struct cifs_io_subrequest, subreq);
- struct cifs_sb_info *sbi = CIFS_SB(subreq->rreq->inode->i_sb);
- int rc;
- if (cifs_forced_shutdown(sbi)) {
- rc = smb_EIO(smb_eio_trace_forced_shutdown);
- goto fail;
+ rc = netfs_prepare_write_buffer(subreq, max_segs);
+ if (rc < 0) {
+ add_credits_and_wake_if(wdata->server, &wdata->credits, 0);
+ return rc;
}
- rc = adjust_credits(wdata->server, wdata, cifs_trace_rw_credits_issue_write_adjust);
+ rc = adjust_credits(server, wdata, cifs_trace_rw_credits_issue_write_adjust);
if (rc)
- goto fail;
+ goto fail_with_credits;
rc = -EAGAIN;
if (wdata->req->cfile->invalidHandle)
- goto fail;
+ goto fail_with_credits;
wdata->server->ops->async_writev(wdata);
out:
- return;
+ return -EIOCBQUEUED;
-fail:
+fail_with_credits:
if (rc == -EAGAIN)
trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
else
@@ -149,17 +153,25 @@ static void cifs_netfs_invalidate_cache(struct netfs_io_request *wreq)
}
/*
- * Negotiate the size of a read operation on behalf of the netfs library.
+ * Issue a read operation on behalf of the netfs helper functions. We're asked
+ * to make a read of a certain size at a point in the file. We are permitted
+ * to only read a portion of that, but as long as we read something, the netfs
+ * helper will call us again so that we can issue another read.
*/
-static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
+static int cifs_issue_read(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
- struct TCP_Server_Info *server;
+ struct TCP_Server_Info *server = rdata->server;
struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
- size_t size;
- int rc = 0;
+ unsigned int max_segs = INT_MAX;
+ size_t len;
+ int rc;
+
+ cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
+ __func__, rreq->debug_id, subreq->debug_index, rreq->mapping,
+ subreq->transferred, subreq->len);
if (!rdata->have_xid) {
rdata->xid = get_xid();
@@ -173,17 +185,15 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
cifs_negotiate_rsize(server, cifs_sb->ctx,
tlink_tcon(req->cfile->tlink));
- rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
- &size, &rdata->credits);
+ len = umin(subreq->len, cifs_sb->ctx->rsize);
+ rc = server->ops->wait_mtu_credits(server, len, &len, &rdata->credits);
if (rc)
return rc;
- rreq->io_streams[0].sreq_max_len = size;
-
- rdata->credits.in_flight_check = 1;
+ subreq->len = len;
rdata->credits.rreq_debug_id = rreq->debug_id;
rdata->credits.rreq_debug_index = subreq->debug_index;
-
+ rdata->credits.in_flight_check = 1;
trace_smb3_rw_credits(rdata->rreq->debug_id,
rdata->subreq.debug_index,
rdata->credits.value,
@@ -195,33 +205,17 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
const struct smbdirect_socket_parameters *sp =
smbd_get_parameters(server->smbd_conn);
- rreq->io_streams[0].sreq_max_segs = sp->max_frmr_depth;
+ max_segs = sp->max_frmr_depth;
}
#endif
- return 0;
-}
-
-/*
- * Issue a read operation on behalf of the netfs helper functions. We're asked
- * to make a read of a certain size at a point in the file. We are permitted
- * to only read a portion of that, but as long as we read something, the netfs
- * helper will call us again so that we can issue another read.
- */
-static void cifs_issue_read(struct netfs_io_subrequest *subreq)
-{
- struct netfs_io_request *rreq = subreq->rreq;
- struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
- struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
- struct TCP_Server_Info *server = rdata->server;
- int rc = 0;
- cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
- __func__, rreq->debug_id, subreq->debug_index, rreq->mapping,
- subreq->transferred, subreq->len);
+ rc = netfs_prepare_read_buffer(subreq, max_segs);
+ if (rc < 0)
+ goto fail_with_credits;
rc = adjust_credits(server, rdata, cifs_trace_rw_credits_issue_read_adjust);
if (rc)
- goto failed;
+ goto fail_with_credits;
if (req->cfile->invalidHandle) {
do {
@@ -235,15 +229,24 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
subreq->rreq->origin != NETFS_DIO_READ)
__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
- trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ /* After this point, we're not allowed to return an error. */
+ netfs_mark_read_submission(subreq);
+
rc = rdata->server->ops->async_readv(rdata);
- if (rc)
- goto failed;
- return;
+ if (rc) {
+ subreq->error = rc;
+ netfs_read_subreq_terminated(subreq);
+ }
+ return -EIOCBQUEUED;
+fail_with_credits:
+ if (rc == -EAGAIN)
+ trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+ else
+ trace_netfs_sreq(subreq, netfs_sreq_trace_fail);
+ add_credits_and_wake_if(rdata->server, &rdata->credits, 0);
failed:
- subreq->error = rc;
- netfs_read_subreq_terminated(subreq);
+ return rc;
}
/*
@@ -353,11 +356,10 @@ const struct netfs_request_ops cifs_req_ops = {
.init_request = cifs_init_request,
.free_request = cifs_free_request,
.free_subrequest = cifs_free_subrequest,
- .prepare_read = cifs_prepare_read,
.issue_read = cifs_issue_read,
.done = cifs_rreq_done,
.begin_writeback = cifs_begin_writeback,
- .prepare_write = cifs_prepare_write,
+ .estimate_write = cifs_estimate_write,
.issue_write = cifs_issue_write,
.invalidate_cache = cifs_netfs_invalidate_cache,
};
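cifs_estimate_write() above shows the shape of the new split:
->estimate_write() is a cheap sizing hint telling netfslib how far it
may buffer before a subrequest must be cut, while ->issue_write()
does the expensive work (credits, handle checks) at submission time.
A minimal sketch for a hypothetical filesystem with a fixed wsize and
no segment cap:

	static int myfs_estimate_write(struct netfs_io_request *wreq,
				       struct netfs_io_stream *stream,
				       struct netfs_write_estimate *estimate)
	{
		/* Cut a subrequest once wsize bytes have accumulated
		 * past the current issue point. */
		estimate->issue_at = stream->issue_from + wreq->wsize;
		estimate->max_segs = INT_MAX;	/* optional; preset */
		return 0;
	}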
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index 0d19c8fc4c3d..d15f196df1e7 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -4705,6 +4705,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
unsigned int cur_page_idx;
unsigned int pad_len;
struct cifs_io_subrequest *rdata = mid->callback_data;
+ struct iov_iter iter;
struct smb2_hdr *shdr = (struct smb2_hdr *)buf;
size_t copied;
bool use_rdma_mr = false;
@@ -4777,6 +4778,10 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
pad_len = data_offset - server->vals->read_rsp_size;
+ iov_iter_bvec_queue(&iter, ITER_DEST,
+ rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+ rdata->subreq.content.offset, rdata->subreq.len);
+
if (buf_len <= data_offset) {
/* read response payload is in pages */
cur_page_idx = pad_len / PAGE_SIZE;
@@ -4806,7 +4811,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
/* Copy the data to the output I/O iterator. */
rdata->result = cifs_copy_bvecq_to_iter(buffer, buffer_len,
- cur_off, &rdata->subreq.io_iter);
+ cur_off, &iter);
if (rdata->result != 0) {
if (is_offloaded)
mid->mid_state = MID_RESPONSE_MALFORMED;
@@ -4819,7 +4824,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
} else if (buf_len >= data_offset + data_len) {
/* read response payload is in buf */
WARN_ONCE(buffer, "read data can be either in buf or in buffer");
- copied = copy_to_iter(buf + data_offset, data_len, &rdata->subreq.io_iter);
+ copied = copy_to_iter(buf + data_offset, data_len, &iter);
if (copied == 0)
return smb_EIO2(smb_eio_trace_rx_copy_to_iter, copied, data_len);
rdata->got_bytes = copied;
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index c43ca74e8704..717d65d32dd3 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -4539,9 +4539,13 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
*/
if (rdata && smb3_use_rdma_offload(io_parms)) {
struct smbdirect_buffer_descriptor_v1 *v1;
+ struct iov_iter iter;
bool need_invalidate = server->dialect == SMB30_PROT_ID;
- rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->subreq.io_iter,
+ iov_iter_bvec_queue(&iter, ITER_DEST,
+ rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+ rdata->subreq.content.offset, rdata->subreq.len);
+ rdata->mr = smbd_register_mr(server->smbd_conn, &iter,
true, need_invalidate);
if (!rdata->mr)
return -EAGAIN;
@@ -4606,9 +4610,10 @@ smb2_readv_callback(struct TCP_Server_Info *server, struct mid_q_entry *mid)
unsigned int rreq_debug_id = rdata->rreq->debug_id;
unsigned int subreq_debug_index = rdata->subreq.debug_index;
- if (rdata->got_bytes) {
- rqst.rq_iter = rdata->subreq.io_iter;
- }
+ if (rdata->got_bytes)
+ iov_iter_bvec_queue(&rqst.rq_iter, ITER_DEST,
+ rdata->subreq.content.bvecq, rdata->subreq.content.slot,
+ rdata->subreq.content.offset, rdata->subreq.len);
WARN_ONCE(rdata->server != server,
"rdata server %p != mid server %p",
@@ -5096,7 +5101,9 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
goto out;
rqst.rq_iov = iov;
- rqst.rq_iter = wdata->subreq.io_iter;
+ iov_iter_bvec_queue(&rqst.rq_iter, ITER_SOURCE,
+ wdata->subreq.content.bvecq, wdata->subreq.content.slot,
+ wdata->subreq.content.offset, wdata->subreq.len);
rqst.rq_iov[0].iov_len = total_len - 1;
rqst.rq_iov[0].iov_base = (char *)req;
@@ -5135,9 +5142,14 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
*/
if (smb3_use_rdma_offload(io_parms)) {
struct smbdirect_buffer_descriptor_v1 *v1;
+ struct iov_iter iter;
bool need_invalidate = server->dialect == SMB30_PROT_ID;
- wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->subreq.io_iter,
+ iov_iter_bvec_queue(&iter, ITER_SOURCE,
+ wdata->subreq.content.bvecq, wdata->subreq.content.slot,
+ wdata->subreq.content.offset, wdata->subreq.len);
+
+ wdata->mr = smbd_register_mr(server->smbd_conn, &iter,
false, need_invalidate);
if (!wdata->mr) {
rc = -EAGAIN;
@@ -5176,8 +5188,8 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
smb2_set_replay(server, &rqst);
}
- cifs_dbg(FYI, "async write at %llu %u bytes iter=%zx\n",
- io_parms->offset, io_parms->length, iov_iter_count(&wdata->subreq.io_iter));
+ cifs_dbg(FYI, "async write at %llu %u bytes len=%zx\n",
+ io_parms->offset, io_parms->length, wdata->subreq.len);
if (wdata->credits.value > 0) {
shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->subreq.len,
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index 05f8099047e1..dd1313736fcb 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -1264,12 +1264,19 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid)
}
#ifdef CONFIG_CIFS_SMB_DIRECT
- if (rdata->mr)
+ if (rdata->mr) {
length = data_len; /* An RDMA read is already done. */
- else
+ } else {
+#endif
+ struct iov_iter iter;
+
+ iov_iter_bvec_queue(&iter, ITER_DEST, rdata->subreq.content.bvecq,
+ rdata->subreq.content.slot, rdata->subreq.content.offset,
+ data_len);
+ length = cifs_read_iter_from_socket(server, &iter, data_len);
+#ifdef CONFIG_CIFS_SMB_DIRECT
+ }
#endif
- length = cifs_read_iter_from_socket(server, &rdata->subreq.io_iter,
- data_len);
if (length > 0)
rdata->got_bytes += length;
server->total_read += length;
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 65e39f9b0c10..51c021975f0d 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -66,7 +66,7 @@ struct netfs_inode {
#endif
struct mutex wb_lock; /* Writeback serialisation */
loff_t remote_i_size; /* Size of the remote file */
- loff_t zero_point; /* Size after which we assume there's no data
+ unsigned long long zero_point; /* Size after which we assume there's no data
* on the server */
atomic_t io_count; /* Number of outstanding reqs */
unsigned long flags;
@@ -126,25 +126,39 @@ static inline struct netfs_group *netfs_folio_group(struct folio *folio)
return priv;
}
+/*
+ * Estimate of maximum write subrequest for writeback. The filesystem is
+ * responsible for filling this in when called from ->estimate_write(), though
+ * netfslib presets effectively unlimited defaults beforehand.
+ */
+struct netfs_write_estimate {
+ unsigned long long issue_at; /* Point at which we must submit */
+ int max_segs; /* Max number of segments in a single RPC */
+};
+
/*
* Stream of I/O subrequests going to a particular destination, such as the
* server or the local cache. This is mainly intended for writing where we may
* have to write to multiple destinations concurrently.
*/
struct netfs_io_stream {
- /* Submission tracking */
- struct netfs_io_subrequest *construct; /* Op being constructed */
- size_t sreq_max_len; /* Maximum size of a subrequest */
- unsigned int sreq_max_segs; /* 0 or max number of segments in an iterator */
- unsigned int submit_off; /* Folio offset we're submitting from */
- unsigned int submit_len; /* Amount of data left to submit */
- void (*prepare_write)(struct netfs_io_subrequest *subreq);
- void (*issue_write)(struct netfs_io_subrequest *subreq);
+ /* Submission tracking (main dispatch only; not retry) */
+ struct bvecq_pos dispatch_cursor; /* Point from which buffers are dispatched */
+ unsigned long long issue_from; /* Current issue point */
+ size_t buffered; /* Amount in buffer */
+ u8 applicable; /* What sources are applicable (NOTE_* mask) */
+ bool buffering; /* T if buffering on this stream */
+ int (*estimate_write)(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate);
+ int (*issue_write)(struct netfs_io_subrequest *subreq);
+ atomic64_t issued_to; /* Point to which data can be considered issued */
+
/* Collection tracking */
struct list_head subrequests; /* Contributory I/O operations */
unsigned long long collected_to; /* Position we've collected results to */
size_t transferred; /* The amount transferred from this stream */
- unsigned short error; /* Aggregate error for the stream */
+ short error; /* Aggregate error for the stream */
enum netfs_io_source source; /* Where to read from/write to */
unsigned char stream_nr; /* Index of stream in parent table */
bool avail; /* T if stream is available */
@@ -180,14 +194,13 @@ struct netfs_io_subrequest {
struct list_head rreq_link; /* Link in rreq->subrequests */
struct bvecq_pos dispatch_pos; /* Bookmark in the combined queue of the start */
struct bvecq_pos content; /* The (copied) content of the subrequest */
- struct iov_iter io_iter; /* Iterator for this subrequest */
unsigned long long start; /* Where to start the I/O */
size_t len; /* Size of the I/O */
size_t transferred; /* Amount of data transferred */
+ unsigned int nr_segs; /* Number of segments in content */
refcount_t ref;
short error; /* 0 or error that occurred */
unsigned short debug_index; /* Index in list (for debugging output) */
- unsigned int nr_segs; /* Number of segs in io_iter */
u8 retry_count; /* The number of retries (0 on initial pass) */
enum netfs_io_source source; /* Where to read from/write to */
unsigned char stream_nr; /* I/O stream this belongs to */
@@ -196,7 +209,6 @@ struct netfs_io_subrequest {
#define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */
#define NETFS_SREQ_MADE_PROGRESS 4 /* Set if we transferred at least some data */
#define NETFS_SREQ_ONDEMAND 5 /* Set if it's from on-demand read mode */
-#define NETFS_SREQ_BOUNDARY 6 /* Set if ends on hard boundary (eg. ceph object) */
#define NETFS_SREQ_HIT_EOF 7 /* Set if short due to EOF */
#define NETFS_SREQ_IN_PROGRESS 8 /* Unlocked when the subrequest completes */
#define NETFS_SREQ_NEED_RETRY 9 /* Set if the filesystem requests a retry */
@@ -243,22 +255,25 @@ struct netfs_io_request {
struct netfs_group *group; /* Writeback group being written back */
struct bvecq_pos collect_cursor; /* Clear-up point of I/O buffer */
struct bvecq_pos load_cursor; /* Point at which new folios are loaded in */
- struct bvecq_pos dispatch_cursor; /* Point from which buffers are dispatched */
+ struct bvecq_pos retry_cursor; /* Point from which retries are dispatched */
wait_queue_head_t waitq; /* Processor waiter */
void *netfs_priv; /* Private data for the netfs */
void *netfs_priv2; /* Private data for the netfs */
- unsigned long long last_end; /* End pos of last folio submitted */
unsigned long long submitted; /* Amount submitted for I/O so far */
unsigned long long len; /* Length of the request */
size_t transferred; /* Amount to be indicated as transferred */
long error; /* 0 or error that occurred */
unsigned long long i_size; /* Size of the file */
unsigned long long start; /* Start position */
- atomic64_t issued_to; /* Write issuer folio cursor */
unsigned long long collected_to; /* Point we've collected to */
unsigned long long cache_coll_to; /* Point the cache has collected to */
unsigned long long cleaned_to; /* Position we've cleaned folios to */
unsigned long long abandon_to; /* Position to abandon folios to */
+#ifdef CONFIG_NETFS_PGPRIV2
+ unsigned long long last_end; /* End of last folio added */
+#endif
+ unsigned long long retry_start; /* Position to retry from */
+ size_t retry_buffered; /* Amount of data to retry */
pgoff_t no_unlock_folio; /* Don't unlock this folio after read */
unsigned int debug_id;
unsigned int rsize; /* Maximum read size (0 for none) */
@@ -282,8 +297,10 @@ struct netfs_io_request {
#define NETFS_RREQ_UPLOAD_TO_SERVER 11 /* Need to write to the server */
#define NETFS_RREQ_USE_IO_ITER 12 /* Use ->io_iter rather than ->i_pages */
#define NETFS_RREQ_NEED_PUT_RA_REFS 13 /* Need to put the folio refs RA gave us */
+#ifdef CONFIG_NETFS_PGPRIV2
#define NETFS_RREQ_USE_PGPRIV2 31 /* [DEPRECATED] Use PG_private_2 to mark
* write to cache on read */
+#endif
const struct netfs_request_ops *netfs_ops;
};
@@ -299,8 +316,7 @@ struct netfs_request_ops {
/* Read request handling */
void (*expand_readahead)(struct netfs_io_request *rreq);
- int (*prepare_read)(struct netfs_io_subrequest *subreq);
- void (*issue_read)(struct netfs_io_subrequest *subreq);
+ int (*issue_read)(struct netfs_io_subrequest *subreq);
bool (*is_still_valid)(struct netfs_io_request *rreq);
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio **foliop, void **_fsdata);
@@ -312,8 +328,10 @@ struct netfs_request_ops {
/* Write request handling */
void (*begin_writeback)(struct netfs_io_request *wreq);
- void (*prepare_write)(struct netfs_io_subrequest *subreq);
- void (*issue_write)(struct netfs_io_subrequest *subreq);
+ int (*estimate_write)(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate);
+ int (*issue_write)(struct netfs_io_subrequest *subreq);
void (*retry_request)(struct netfs_io_request *wreq, struct netfs_io_stream *stream);
void (*invalidate_cache)(struct netfs_io_request *wreq);
};
@@ -348,8 +366,16 @@ struct netfs_cache_ops {
netfs_io_terminated_t term_func,
void *term_func_priv);
+ /* Estimate the amount of data that can be written in an op. */
+ int (*estimate_write)(struct netfs_io_request *wreq,
+ struct netfs_io_stream *stream,
+ struct netfs_write_estimate *estimate);
+
+ /* Read data from the cache for a netfs subrequest. */
+ int (*issue_read)(struct netfs_io_subrequest *subreq);
+
/* Write data to the cache from a netfs subrequest. */
- void (*issue_write)(struct netfs_io_subrequest *subreq);
+ int (*issue_write)(struct netfs_io_subrequest *subreq);
/* Expand readahead request */
void (*expand_readahead)(struct netfs_cache_resources *cres,
@@ -357,25 +383,6 @@ struct netfs_cache_ops {
unsigned long long *_len,
unsigned long long i_size);
- /* Prepare a read operation, shortening it to a cached/uncached
- * boundary as appropriate.
- */
- int (*prepare_read)(struct netfs_io_subrequest *subreq);
-
- /* Prepare a write subrequest, working out if we're allowed to do it
- * and finding out the maximum amount of data to gather before
- * attempting to submit. If we're not permitted to do it, the
- * subrequest should be marked failed.
- */
- void (*prepare_write_subreq)(struct netfs_io_subrequest *subreq);
-
- /* Prepare a write operation, working out what part of the write we can
- * actually do.
- */
- int (*prepare_write)(struct netfs_cache_resources *cres,
- loff_t *_start, size_t *_len, size_t upper_len,
- loff_t i_size, bool no_space_allocated_yet);
-
/* Prepare an on-demand read operation, shortening it to a cached/uncached
* boundary as appropriate.
*/
@@ -418,10 +425,9 @@ void netfs_single_mark_inode_dirty(struct inode *inode);
ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_iter *iter);
int netfs_writeback_single(struct address_space *mapping,
struct writeback_control *wbc,
- struct iov_iter *iter);
+ struct iov_iter *iter, size_t len);
/* Address operations API */
-struct readahead_control;
void netfs_readahead(struct readahead_control *);
int netfs_read_folio(struct file *, struct folio *);
int netfs_write_begin(struct netfs_inode *, struct file *,
@@ -439,6 +445,7 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp);
vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
/* (Sub)request management API. */
+void netfs_mark_read_submission(struct netfs_io_subrequest *subreq);
void netfs_read_subreq_progress(struct netfs_io_subrequest *subreq);
void netfs_read_subreq_terminated(struct netfs_io_subrequest *subreq);
void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
@@ -448,9 +455,8 @@ void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_segs,
unsigned long long fpos, struct bvecq **_bvecq_head,
iov_iter_extraction_t extraction_flags);
-size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
- size_t max_size, size_t max_segs);
-void netfs_prepare_write_failed(struct netfs_io_subrequest *subreq);
+int netfs_prepare_read_buffer(struct netfs_io_subrequest *subreq, unsigned int max_segs);
+int netfs_prepare_write_buffer(struct netfs_io_subrequest *subreq, unsigned int max_segs);
void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error);
int netfs_start_io_read(struct inode *inode);
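To make the estimate/issue split in the ops tables concrete, here's a
hedged sketch of roughly how the dispatch side consults them
(simplified; the real logic also has to deal with stream
availability, buffering state and retries):

	struct netfs_write_estimate est = {
		.issue_at = ULLONG_MAX,	/* preset: effectively no limit */
		.max_segs = INT_MAX,
	};

	if (stream->estimate_write)
		stream->estimate_write(wreq, stream, &est);

	if (stream->issue_from + stream->buffered >= est.issue_at) {
		struct netfs_io_subrequest *subreq;
		int ret;

		subreq = netfs_alloc_write_subreq(wreq, stream);
		if (!subreq)
			return -ENOMEM;

		ret = stream->issue_write(subreq);
		if (ret < 0 && ret != -EIOCBQUEUED)
			netfs_write_subrequest_terminated(subreq, ret);
	}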
diff --git a/include/trace/events/cachefiles.h b/include/trace/events/cachefiles.h
index 4bba6fda1f8b..c080167451ab 100644
--- a/include/trace/events/cachefiles.h
+++ b/include/trace/events/cachefiles.h
@@ -70,6 +70,7 @@ enum cachefiles_coherency_trace {
enum cachefiles_trunc_trace {
cachefiles_trunc_clear_padding,
cachefiles_trunc_dio_adjust,
+ cachefiles_trunc_discard_tail,
cachefiles_trunc_expand_tmpfile,
cachefiles_trunc_shrink,
};
@@ -160,6 +161,7 @@ enum cachefiles_error_trace {
#define cachefiles_trunc_traces \
EM(cachefiles_trunc_clear_padding, "CLRPAD") \
EM(cachefiles_trunc_dio_adjust, "DIOADJ") \
+ EM(cachefiles_trunc_discard_tail, "DSCDTL") \
EM(cachefiles_trunc_expand_tmpfile, "EXPTMP") \
E_(cachefiles_trunc_shrink, "SHRINK")
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index eeb8386e0709..ba38cc102bd7 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -49,6 +49,7 @@
E_(NETFS_PGPRIV2_COPY_TO_CACHE, "2C")
#define netfs_rreq_traces \
+ EM(netfs_rreq_trace_all_queued, "ALL-Q ") \
EM(netfs_rreq_trace_assess, "ASSESS ") \
EM(netfs_rreq_trace_collect, "COLLECT") \
EM(netfs_rreq_trace_complete, "COMPLET") \
@@ -77,7 +78,8 @@
EM(netfs_rreq_trace_waited_quiesce, "DONE-QUIESCE") \
EM(netfs_rreq_trace_wake_ip, "WAKE-IP") \
EM(netfs_rreq_trace_wake_queue, "WAKE-Q ") \
- E_(netfs_rreq_trace_write_done, "WR-DONE")
+ EM(netfs_rreq_trace_write_done, "WR-DONE") \
+ E_(netfs_rreq_trace_zero_unread, "ZERO-UR")
#define netfs_sreq_sources \
EM(NETFS_SOURCE_UNKNOWN, "----") \
@@ -126,6 +128,7 @@
EM(netfs_sreq_trace_superfluous, "SPRFL") \
EM(netfs_sreq_trace_terminated, "TERM ") \
EM(netfs_sreq_trace_too_much, "!TOOM") \
+ EM(netfs_sreq_trace_too_many_retries, "!RETR") \
EM(netfs_sreq_trace_wait_for, "_WAIT") \
EM(netfs_sreq_trace_write, "WRITE") \
EM(netfs_sreq_trace_write_skip, "SKIP ") \
@@ -189,12 +192,12 @@
EM(netfs_folio_trace_alloc_buffer, "alloc-buf") \
EM(netfs_folio_trace_cancel_copy, "cancel-copy") \
EM(netfs_folio_trace_cancel_store, "cancel-store") \
- EM(netfs_folio_trace_clear, "clear") \
- EM(netfs_folio_trace_clear_cc, "clear-cc") \
- EM(netfs_folio_trace_clear_g, "clear-g") \
- EM(netfs_folio_trace_clear_s, "clear-s") \
EM(netfs_folio_trace_copy_to_cache, "mark-copy") \
EM(netfs_folio_trace_end_copy, "end-copy") \
+ EM(netfs_folio_trace_endwb, "endwb") \
+ EM(netfs_folio_trace_endwb_cc, "endwb-cc") \
+ EM(netfs_folio_trace_endwb_g, "endwb-g") \
+ EM(netfs_folio_trace_endwb_s, "endwb-s") \
EM(netfs_folio_trace_filled_gaps, "filled-gaps") \
EM(netfs_folio_trace_kill, "kill") \
EM(netfs_folio_trace_kill_cc, "kill-cc") \
@@ -381,10 +384,10 @@ TRACE_EVENT(netfs_sreq,
__entry->len = sreq->len;
__entry->transferred = sreq->transferred;
__entry->start = sreq->start;
- __entry->slot = sreq->dispatch_pos.slot;
+ __entry->slot = sreq->content.slot;
),
- TP_printk("R=%08x[%x] %s %s f=%03x s=%llx %zx/%zx qs=%u e=%d",
+ TP_printk("R=%08x[%x] %s %s f=%03x s=%llx %zx/%zx bv=%u e=%d",
__entry->rreq, __entry->index,
__print_symbolic(__entry->source, netfs_sreq_sources),
__print_symbolic(__entry->what, netfs_sreq_traces),
@@ -492,6 +495,7 @@ TRACE_EVENT(netfs_folio,
TP_STRUCT__entry(
__field(ino_t, ino)
__field(pgoff_t, index)
+ __field(unsigned long, pfn)
__field(unsigned int, nr)
__field(enum netfs_folio_trace, why)
),
@@ -502,13 +506,40 @@ TRACE_EVENT(netfs_folio,
__entry->why = why;
__entry->index = folio->index;
__entry->nr = folio_nr_pages(folio);
+ __entry->pfn = folio_pfn(folio);
),
- TP_printk("i=%05lx ix=%05lx-%05lx %s",
+ TP_printk("p=%lx i=%05lx ix=%05lx-%05lx %s",
+ __entry->pfn,
__entry->ino, __entry->index, __entry->index + __entry->nr - 1,
__print_symbolic(__entry->why, netfs_folio_traces))
);
+TRACE_EVENT(netfs_wback,
+ TP_PROTO(struct netfs_io_request *wreq, struct folio *folio, unsigned int notes),
+
+ TP_ARGS(wreq, folio, notes),
+
+ TP_STRUCT__entry(
+ __field(pgoff_t, index)
+ __field(unsigned int, wreq)
+ __field(unsigned int, nr)
+ __field(unsigned int, notes)
+ ),
+
+ TP_fast_assign(
+ __entry->wreq = wreq->debug_id;
+ __entry->notes = notes;
+ __entry->index = folio->index;
+ __entry->nr = folio_nr_pages(folio);
+ ),
+
+ TP_printk("R=%08x ix=%05lx-%05lx n=%02x",
+ __entry->wreq,
+ __entry->index, __entry->index + __entry->nr - 1,
+ __entry->notes)
+ );
+
TRACE_EVENT(netfs_write_iter,
TP_PROTO(const struct kiocb *iocb, const struct iov_iter *from),
@@ -751,7 +782,7 @@ TRACE_EVENT(netfs_collect_stream,
__entry->wreq = wreq->debug_id;
__entry->stream = stream->stream_nr;
__entry->collected_to = stream->collected_to;
- __entry->issued_to = atomic64_read(&wreq->issued_to);
+ __entry->issued_to = atomic64_read(&stream->issued_to);
),
TP_printk("R=%08x[%x:] cto=%llx ito=%llx",
@@ -775,7 +806,7 @@ TRACE_EVENT(netfs_bvecq,
__entry->trace = trace;
),
- TP_printk("fq=%x %s",
+ TP_printk("bq=%x %s",
__entry->id,
__print_symbolic(__entry->trace, netfs_bvecq_traces))
);
diff --git a/net/9p/client.c b/net/9p/client.c
index f0dcf252af7e..8d365c000553 100644
--- a/net/9p/client.c
+++ b/net/9p/client.c
@@ -1561,6 +1561,7 @@ void
p9_client_write_subreq(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *wreq = subreq->rreq;
+ struct iov_iter iter;
struct p9_fid *fid = wreq->netfs_priv;
struct p9_client *clnt = fid->clnt;
struct p9_req_t *req;
@@ -1571,14 +1572,17 @@ p9_client_write_subreq(struct netfs_io_subrequest *subreq)
p9_debug(P9_DEBUG_9P, ">>> TWRITE fid %d offset %llu len %d\n",
fid->fid, start, len);
+ iov_iter_bvec_queue(&iter, ITER_SOURCE, subreq->content.bvecq,
+ subreq->content.slot, subreq->content.offset, subreq->len);
+
/* Don't bother zerocopy for small IO (< 1024) */
if (clnt->trans_mod->zc_request && len > 1024) {
- req = p9_client_zc_rpc(clnt, P9_TWRITE, NULL, &subreq->io_iter,
+ req = p9_client_zc_rpc(clnt, P9_TWRITE, NULL, &iter,
0, wreq->len, P9_ZC_HDR_SZ, "dqd",
fid->fid, start, len);
} else {
req = p9_client_rpc(clnt, P9_TWRITE, "dqV", fid->fid,
- start, len, &subreq->io_iter);
+ start, len, &iter);
}
if (IS_ERR(req)) {
netfs_write_subrequest_terminated(subreq, PTR_ERR(req));
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH 07/26] cachefiles: Fix excess dput() after end_removing()
[not found] ` <CA+yaA_=gpTnueByzFNYrqNL_qSC2rE4iGDjLHtJap-=_rhE3HQ@mail.gmail.com>
@ 2026-03-26 11:10 ` David Howells
0 siblings, 0 replies; 29+ messages in thread
From: David Howells @ 2026-03-26 11:10 UTC (permalink / raw)
To: Aditya
Cc: dhowells, Christian Brauner, Matthew Wilcox, Christoph Hellwig,
Paulo Alcantara, Jens Axboe, Leon Romanovsky, Steve French,
ChenXiaoSong, Marc Dionne, Eric Van Hensbergen,
Dominique Martinet, Ilya Dryomov, Trond Myklebust, netfs,
linux-afs, linux-cifs, linux-nfs, ceph-devel, v9fs, linux-erofs,
linux-fsdevel, linux-kernel, NeilBrown, Paulo Alcantara
Aditya <dev@adityakammati.me> wrote:
> Why are this bulk fixes
I'm not sure what you're asking. Are you wanting to know why the series
begins with a collection of fix patches? If so, it's because the main patches
(08-26) are dependent on them being applied first.
David
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 25/26] netfs: Limit the the minimum trigger for progress reporting
2026-03-26 10:45 ` [PATCH 25/26] netfs: Limit the the minimum trigger for progress reporting David Howells
@ 2026-03-26 14:19 ` ChenXiaoSong
0 siblings, 0 replies; 29+ messages in thread
From: ChenXiaoSong @ 2026-03-26 14:19 UTC (permalink / raw)
To: David Howells, Christian Brauner, Matthew Wilcox,
Christoph Hellwig
Cc: Paulo Alcantara, Jens Axboe, Leon Romanovsky, Steve French,
Marc Dionne, Eric Van Hensbergen, Dominique Martinet,
Ilya Dryomov, Trond Myklebust, netfs, linux-afs, linux-cifs,
linux-nfs, ceph-devel, v9fs, linux-erofs, linux-fsdevel,
linux-kernel, Paulo Alcantara
There are two "the"s in the subject.
On 3/26/26 18:45, David Howells wrote:
> [PATCH 25/26] netfs: Limit the the ...
^ permalink raw reply [flat|nested] 29+ messages in thread
end of thread, other threads:[~2026-03-26 14:20 UTC | newest]
Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-26 10:45 [PATCH 00/26] netfs: Keep track of folios in a segmented bio_vec[] chain David Howells
2026-03-26 10:45 ` [PATCH 01/26] netfs: Fix NULL pointer dereference in netfs_unbuffered_write() on retry David Howells
2026-03-26 10:45 ` [PATCH 02/26] netfs: Fix kernel BUG in netfs_limit_iter() for ITER_KVEC iterators David Howells
2026-03-26 10:45 ` [PATCH 03/26] netfs: fix VM_BUG_ON_FOLIO() issue in netfs_write_begin() call David Howells
2026-03-26 10:45 ` [PATCH 04/26] netfs: fix error handling in netfs_extract_user_iter() David Howells
2026-03-26 10:45 ` [PATCH 05/26] netfs: Fix read abandonment during retry David Howells
2026-03-26 10:45 ` [PATCH 06/26] netfs: Fix the handling of stream->front by removing it David Howells
2026-03-26 10:45 ` [PATCH 07/26] cachefiles: Fix excess dput() after end_removing() David Howells
[not found] ` <CA+yaA_=gpTnueByzFNYrqNL_qSC2rE4iGDjLHtJap-=_rhE3HQ@mail.gmail.com>
2026-03-26 11:10 ` David Howells
2026-03-26 10:45 ` [PATCH 08/26] cachefiles: Don't rely on backing fs storage map for most use cases David Howells
2026-03-26 10:45 ` [PATCH 09/26] mm: Make readahead store folio count in readahead_control David Howells
2026-03-26 10:45 ` [PATCH 10/26] netfs: Bulk load the readahead-provided folios up front David Howells
2026-03-26 10:45 ` [PATCH 11/26] Add a function to kmap one page of a multipage bio_vec David Howells
2026-03-26 10:45 ` [PATCH 12/26] iov_iter: Add a segmented queue of bio_vec[] David Howells
2026-03-26 10:45 ` [PATCH 13/26] netfs: Add some tools for managing bvecq chains David Howells
2026-03-26 10:45 ` [PATCH 14/26] netfs: Add a function to extract from an iter into a bvecq David Howells
2026-03-26 10:45 ` [PATCH 15/26] afs: Use a bvecq to hold dir content rather than folioq David Howells
2026-03-26 10:45 ` [PATCH 16/26] cifs: Use a bvecq for buffering instead of a folioq David Howells
2026-03-26 10:45 ` [PATCH 17/26] cifs: Support ITER_BVECQ in smb_extract_iter_to_rdma() David Howells
2026-03-26 10:45 ` [PATCH 18/26] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer David Howells
2026-03-26 10:45 ` [PATCH 19/26] cifs: Remove support for ITER_KVEC/BVEC/FOLIOQ from smb_extract_iter_to_rdma() David Howells
2026-03-26 10:45 ` [PATCH 20/26] netfs: Remove netfs_alloc/free_folioq_buffer() David Howells
2026-03-26 10:45 ` [PATCH 21/26] netfs: Remove netfs_extract_user_iter() David Howells
2026-03-26 10:45 ` [PATCH 22/26] iov_iter: Remove ITER_FOLIOQ David Howells
2026-03-26 10:45 ` [PATCH 23/26] netfs: Remove folio_queue and rolling_buffer David Howells
2026-03-26 10:45 ` [PATCH 24/26] netfs: Check for too much data being read David Howells
2026-03-26 10:45 ` [PATCH 25/26] netfs: Limit the the minimum trigger for progress reporting David Howells
2026-03-26 14:19 ` ChenXiaoSong
2026-03-26 10:45 ` [PATCH 26/26] netfs: Combine prepare and issue ops and grab the buffers on request David Howells
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox