From: David Howells <dhowells@redhat.com>
To: Matthew Wilcox <willy@infradead.org>,
	Christoph Hellwig <hch@infradead.org>,
	Jens Axboe <axboe@kernel.dk>, Leon Romanovsky <leon@kernel.org>
Cc: David Howells <dhowells@redhat.com>,
	Christian Brauner <christian@brauner.io>,
	Paulo Alcantara <pc@manguebit.com>,
	netfs@lists.linux.dev, linux-afs@lists.infradead.org,
	linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Paulo Alcantara <pc@manguebit.org>,
	Steve French <sfrench@samba.org>
Subject: [RFC PATCH 15/17] netfs: Remove folio_queue and rolling_buffer
Date: Wed,  4 Mar 2026 14:03:22 +0000	[thread overview]
Message-ID: <20260304140328.112636-16-dhowells@redhat.com> (raw)
In-Reply-To: <20260304140328.112636-1-dhowells@redhat.com>

Remove folio_queue and rolling_buffer as they're no longer used.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Matthew Wilcox <willy@infradead.org>
cc: Christoph Hellwig <hch@infradead.org>
cc: Steve French <sfrench@samba.org>
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 Documentation/core-api/folio_queue.rst | 209 ------------------
 Documentation/core-api/index.rst       |   1 -
 fs/netfs/iterator.c                    | 149 -------------
 fs/netfs/rolling_buffer.c              | 222 -------------------
 include/linux/folio_queue.h            | 282 -------------------------
 include/linux/rolling_buffer.h         |  61 ------
 6 files changed, 924 deletions(-)
 delete mode 100644 Documentation/core-api/folio_queue.rst
 delete mode 100644 fs/netfs/rolling_buffer.c
 delete mode 100644 include/linux/folio_queue.h
 delete mode 100644 include/linux/rolling_buffer.h

diff --git a/Documentation/core-api/folio_queue.rst b/Documentation/core-api/folio_queue.rst
deleted file mode 100644
index b7628896d2b6..000000000000
--- a/Documentation/core-api/folio_queue.rst
+++ /dev/null
@@ -1,209 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0+
-
-===========
-Folio Queue
-===========
-
-:Author: David Howells <dhowells@redhat.com>
-
-.. Contents:
-
- * Overview
- * Initialisation
- * Adding and removing folios
- * Querying information about a folio
- * Querying information about a folio_queue
- * Folio queue iteration
- * Folio marks
- * Lockless simultaneous production/consumption issues
-
-
-Overview
-========
-
-The folio_queue struct forms a single segment in a segmented list of folios
-that can be used to form an I/O buffer.  As such, the list can be iterated over
-using the ITER_FOLIOQ iov_iter type.
-
-The publicly accessible members of the structure are::
-
-	struct folio_queue {
-		struct folio_queue *next;
-		struct folio_queue *prev;
-		...
-	};
-
-A pair of pointers are provided, ``next`` and ``prev``, that point to the
-segments on either side of the segment being accessed.  Whilst this is a
-doubly-linked list, it is intentionally not a circular list; the outward
-sibling pointers in terminal segments should be NULL.
-
-Each segment in the list also stores:
-
- * an ordered sequence of folio pointers,
- * the size of each folio and
 * two 1-bit marks per folio,
-
-but these should not be accessed directly, as the underlying data structure
-may change; the access functions outlined below should be used instead.
-
-The facility can be made accessible by::
-
-	#include <linux/folio_queue.h>
-
-and to use the iterator::
-
-	#include <linux/uio.h>
-
-
-Initialisation
-==============
-
-A segment should be initialised by calling::
-
-	void folioq_init(struct folio_queue *folioq);
-
-with a pointer to the segment to be initialised.  Note that this will not
-necessarily initialise all the folio pointers, so care must be taken to check
-the number of folios added.
-
-
-Adding and removing folios
-==========================
-
-Folios can be set in the next unused slot in a segment struct by calling one
-of::
-
-	unsigned int folioq_append(struct folio_queue *folioq,
-				   struct folio *folio);
-
-	unsigned int folioq_append_mark(struct folio_queue *folioq,
-					struct folio *folio);
-
-Both functions update the stored folio count, store the folio and note its
-size.  The second function also sets the first mark for the folio added.  Both
-functions return the number of the slot used.  [!] Note that no attempt is
-made to check that the capacity is not overrun, and the list will not be
-extended automatically.
-
-A folio can be excised by calling::
-
-	void folioq_clear(struct folio_queue *folioq, unsigned int slot);
-
-This clears the slot in the array and also clears all the marks for that folio,
-but doesn't change the folio count - so future accesses of that slot must check
-if the slot is occupied.
-
-
-Querying information about a folio
-==================================
-
-Information about the folio in a particular slot may be queried by the
-following function::
-
-	struct folio *folioq_folio(const struct folio_queue *folioq,
-				   unsigned int slot);
-
-If a folio has not yet been set in that slot, this may yield an undefined
-pointer.  The size of the folio in a slot may be queried with either of::
-
-	unsigned int folioq_folio_order(const struct folio_queue *folioq,
-					unsigned int slot);
-
-	size_t folioq_folio_size(const struct folio_queue *folioq,
-				 unsigned int slot);
-
-The first function returns the size as an order and the second as a number of
-bytes.
-
-
-Querying information about a folio_queue
-========================================
-
-Information may be retrieved about a particular segment with the following
-functions::
-
-	unsigned int folioq_nr_slots(const struct folio_queue *folioq);
-
-	unsigned int folioq_count(struct folio_queue *folioq);
-
-	bool folioq_full(struct folio_queue *folioq);
-
-The first function returns the maximum capacity of a segment.  It must not be
-assumed that this won't vary between segments.  The second returns the number
-of folios added to a segment and the third is a shorthand to indicate if the
-segment has been filled to capacity.
-
-Note that the count and fullness are not affected by clearing folios from the
-segment.  These are more about indicating how many slots in the array have
-been initialised; it is assumed that slots won't get reused, but rather that
-the segment will get discarded as the queue is consumed.
-
-
-Folio marks
-===========
-
-Folios within a queue can also have marks assigned to them.  These marks can
-be used to note information such as whether folio_put() needs to be called on
-a folio.  There are two marks available to be set for each folio.
-
-The marks can be set by::
-
-	void folioq_mark(struct folio_queue *folioq, unsigned int slot);
-	void folioq_mark2(struct folio_queue *folioq, unsigned int slot);
-
-Cleared by::
-
-	void folioq_unmark(struct folio_queue *folioq, unsigned int slot);
-	void folioq_unmark2(struct folio_queue *folioq, unsigned int slot);
-
-And the marks can be queried by::
-
-	bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot);
-	bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot);
-
-The marks can be used for any purpose and are not interpreted by this API.
-
-
-Folio queue iteration
-=====================
-
-A list of segments may be iterated over with the I/O iterator facility, using
-an ``iov_iter`` iterator of ``ITER_FOLIOQ`` type.  The iterator may be
-initialised with::
-
-	void iov_iter_folio_queue(struct iov_iter *i, unsigned int direction,
-				  const struct folio_queue *folioq,
-				  unsigned int first_slot, unsigned int offset,
-				  size_t count);
-
-This may be told to start at a particular segment, slot and offset within a
-queue.  The iov_iter functions will follow the next pointers when advancing
-and the prev pointers when reverting, as needed.
-
-
-Lockless simultaneous production/consumption issues
-===================================================
-
-If properly managed, the list can be extended by the producer at the head end
-and shortened by the consumer at the tail end simultaneously without the need
-to take locks.  The ITER_FOLIOQ iterator inserts appropriate barriers to aid
-with this.
-
-Care must be taken when simultaneously producing and consuming a list.  If the
-last segment is reached and the folios it refers to are entirely consumed by
-the IOV iterators, an iov_iter struct will be left pointing to the last segment
-with a slot number equal to the capacity of that segment.  The iterator will
-try to continue on from this if there's another segment available when it is
-used again, but care must be taken in case the segment has been removed and
-freed by the consumer before the iterator is advanced.
-
-It is recommended that the queue always contain at least one segment, even if
-that segment has never been filled or is entirely spent.  This prevents the
-head and tail pointers from collapsing.
-
-
-API Function Reference
-======================
-
-.. kernel-doc:: include/linux/folio_queue.h
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 13769d5c40bf..16c529a33ac4 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -39,7 +39,6 @@ Library functionality that is used throughout the kernel.
    kref
    cleanup
    assoc_array
-   folio_queue
    xarray
    maple_tree
    idr
diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 5ae9279a2dfb..eda6e2ca02e7 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -134,152 +134,3 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se
 	return extracted ?: ret;
 }
 EXPORT_SYMBOL_GPL(netfs_extract_iter);
-
-#if 0
-/*
- * Select the span of a bvec iterator we're going to use.  Limit it by both maximum
- * size and maximum number of segments.  Returns the size of the span in bytes.
- */
-static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
-			       size_t max_size, size_t max_segs)
-{
-	const struct bio_vec *bvecs = iter->bvec;
-	unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0;
-	size_t len, span = 0, n = iter->count;
-	size_t skip = iter->iov_offset + start_offset;
-
-	if (WARN_ON(!iov_iter_is_bvec(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-
-	while (n && ix < nbv && skip) {
-		len = bvecs[ix].bv_len;
-		if (skip < len)
-			break;
-		skip -= len;
-		n -= len;
-		ix++;
-	}
-
-	while (n && ix < nbv) {
-		len = min3(n, bvecs[ix].bv_len - skip, max_size);
-		span += len;
-		nsegs++;
-		ix++;
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-		skip = 0;
-		n -= len;
-	}
-
-	return min(span, max_size);
-}
-
-/*
- * Select the span of an xarray iterator we're going to use.  Limit it by both
- * maximum size and maximum number of segments.  It is assumed that segments
- * can be larger than a page in size, provided they're physically contiguous.
- * Returns the size of the span in bytes.
- */
-static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offset,
-				 size_t max_size, size_t max_segs)
-{
-	struct folio *folio;
-	unsigned int nsegs = 0;
-	loff_t pos = iter->xarray_start + iter->iov_offset;
-	pgoff_t index = pos / PAGE_SIZE;
-	size_t span = 0, n = iter->count;
-
-	XA_STATE(xas, iter->xarray, index);
-
-	if (WARN_ON(!iov_iter_is_xarray(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-	max_size = min(max_size, n - start_offset);
-
-	rcu_read_lock();
-	xas_for_each(&xas, folio, ULONG_MAX) {
-		size_t offset, flen, len;
-		if (xas_retry(&xas, folio))
-			continue;
-		if (WARN_ON(xa_is_value(folio)))
-			break;
-		if (WARN_ON(folio_test_hugetlb(folio)))
-			break;
-
-		flen = folio_size(folio);
-		offset = offset_in_folio(folio, pos);
-		len = min(max_size, flen - offset);
-		span += len;
-		nsegs++;
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-	}
-
-	rcu_read_unlock();
-	return min(span, max_size);
-}
-
-/*
- * Select the span of a folio queue iterator we're going to use.  Limit it by
- * both maximum size and maximum number of segments.  Returns the size of the
- * span in bytes.
- */
-static size_t netfs_limit_folioq(const struct iov_iter *iter, size_t start_offset,
-				 size_t max_size, size_t max_segs)
-{
-	const struct folio_queue *folioq = iter->folioq;
-	unsigned int nsegs = 0;
-	unsigned int slot = iter->folioq_slot;
-	size_t span = 0, n = iter->count;
-
-	if (WARN_ON(!iov_iter_is_folioq(iter)) ||
-	    WARN_ON(start_offset > n) ||
-	    n == 0)
-		return 0;
-	max_size = umin(max_size, n - start_offset);
-
-	if (slot >= folioq_nr_slots(folioq)) {
-		folioq = folioq->next;
-		slot = 0;
-	}
-
-	start_offset += iter->iov_offset;
-	do {
-		size_t flen = folioq_folio_size(folioq, slot);
-
-		if (start_offset < flen) {
-			span += flen - start_offset;
-			nsegs++;
-			start_offset = 0;
-		} else {
-			start_offset -= flen;
-		}
-		if (span >= max_size || nsegs >= max_segs)
-			break;
-
-		slot++;
-		if (slot >= folioq_nr_slots(folioq)) {
-			folioq = folioq->next;
-			slot = 0;
-		}
-	} while (folioq);
-
-	return umin(span, max_size);
-}
-
-size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
-			size_t max_size, size_t max_segs)
-{
-	if (iov_iter_is_folioq(iter))
-		return netfs_limit_folioq(iter, start_offset, max_size, max_segs);
-	if (iov_iter_is_bvec(iter))
-		return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
-	if (iov_iter_is_xarray(iter))
-		return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
-	BUG();
-}
-EXPORT_SYMBOL(netfs_limit_iter);
-#endif
diff --git a/fs/netfs/rolling_buffer.c b/fs/netfs/rolling_buffer.c
deleted file mode 100644
index a17fbf9853a4..000000000000
--- a/fs/netfs/rolling_buffer.c
+++ /dev/null
@@ -1,222 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/* Rolling buffer helpers
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#include <linux/bitops.h>
-#include <linux/pagemap.h>
-#include <linux/rolling_buffer.h>
-#include <linux/slab.h>
-#include "internal.h"
-
-static atomic_t debug_ids;
-
-/**
- * netfs_folioq_alloc - Allocate a folio_queue struct
- * @rreq_id: Associated debugging ID for tracing purposes
- * @gfp: Allocation constraints
- * @trace: Trace tag to indicate the purpose of the allocation
- *
- * Allocate, initialise and account the folio_queue struct and log a trace line
- * to mark the allocation.
- */
-struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,
-				       unsigned int /*enum netfs_folioq_trace*/ trace)
-{
-	struct folio_queue *fq;
-
-	fq = kmalloc_obj(*fq, gfp);
-	if (fq) {
-		netfs_stat(&netfs_n_folioq);
-		folioq_init(fq, rreq_id);
-		fq->debug_id = atomic_inc_return(&debug_ids);
-		trace_netfs_folioq(fq, trace);
-	}
-	return fq;
-}
-EXPORT_SYMBOL(netfs_folioq_alloc);
-
-/**
- * netfs_folioq_free - Free a folio_queue struct
- * @folioq: The object to free
- * @trace: Trace tag to indicate which free
- *
- * Free and unaccount the folio_queue struct.
- */
-void netfs_folioq_free(struct folio_queue *folioq,
-		       unsigned int /*enum netfs_trace_folioq*/ trace)
-{
-	trace_netfs_folioq(folioq, trace);
-	netfs_stat_d(&netfs_n_folioq);
-	kfree(folioq);
-}
-EXPORT_SYMBOL(netfs_folioq_free);
-
-/*
- * Initialise a rolling buffer.  We allocate an empty folio queue struct so
- * that the pointers can be independently driven by the producer and the
- * consumer.
- */
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
-			unsigned int direction)
-{
-	struct folio_queue *fq;
-
-	fq = netfs_folioq_alloc(rreq_id, GFP_NOFS, netfs_trace_folioq_rollbuf_init);
-	if (!fq)
-		return -ENOMEM;
-
-	roll->head = fq;
-	roll->tail = fq;
-	iov_iter_folio_queue(&roll->iter, direction, fq, 0, 0, 0);
-	return 0;
-}
-
-/*
- * Add another folio_queue to a rolling buffer if there's no space left.
- */
-int rolling_buffer_make_space(struct rolling_buffer *roll)
-{
-	struct folio_queue *fq, *head = roll->head;
-
-	if (!folioq_full(head))
-		return 0;
-
-	fq = netfs_folioq_alloc(head->rreq_id, GFP_NOFS, netfs_trace_folioq_make_space);
-	if (!fq)
-		return -ENOMEM;
-	fq->prev = head;
-
-	roll->head = fq;
-	if (folioq_full(head)) {
-		/* Make sure we don't leave the master iterator pointing to a
-		 * block that might get immediately consumed.
-		 */
-		if (roll->iter.folioq == head &&
-		    roll->iter.folioq_slot == folioq_nr_slots(head)) {
-			roll->iter.folioq = fq;
-			roll->iter.folioq_slot = 0;
-		}
-	}
-
-	/* Make sure the initialisation is stored before the next pointer.
-	 *
-	 * [!] NOTE: After we set head->next, the consumer is at liberty to
-	 * immediately delete the old head.
-	 */
-	smp_store_release(&head->next, fq);
-	return 0;
-}
-
-/*
- * Decant the list of folios to read into a rolling buffer.
- */
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
-				    struct readahead_control *ractl,
-				    struct folio_batch *put_batch)
-{
-	struct folio_queue *fq;
-	struct page **vec;
-	int nr, ix, to;
-	ssize_t size = 0;
-
-	if (rolling_buffer_make_space(roll) < 0)
-		return -ENOMEM;
-
-	fq = roll->head;
-	vec = (struct page **)fq->vec.folios;
-	nr = __readahead_batch(ractl, vec + folio_batch_count(&fq->vec),
-			       folio_batch_space(&fq->vec));
-	ix = fq->vec.nr;
-	to = ix + nr;
-	fq->vec.nr = to;
-	for (; ix < to; ix++) {
-		struct folio *folio = folioq_folio(fq, ix);
-		unsigned int order = folio_order(folio);
-
-		fq->orders[ix] = order;
-		size += PAGE_SIZE << order;
-		trace_netfs_folio(folio, netfs_folio_trace_read);
-		if (!folio_batch_add(put_batch, folio))
-			folio_batch_release(put_batch);
-	}
-	WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
-	/* Store the counter after setting the slot. */
-	smp_store_release(&roll->next_head_slot, to);
-	return size;
-}
-
-/*
- * Append a folio to the rolling buffer.
- */
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
-			      unsigned int flags)
-{
-	ssize_t size = folio_size(folio);
-	int slot;
-
-	if (rolling_buffer_make_space(roll) < 0)
-		return -ENOMEM;
-
-	slot = folioq_append(roll->head, folio);
-	if (flags & ROLLBUF_MARK_1)
-		folioq_mark(roll->head, slot);
-	if (flags & ROLLBUF_MARK_2)
-		folioq_mark2(roll->head, slot);
-
-	WRITE_ONCE(roll->iter.count, roll->iter.count + size);
-
-	/* Store the counter after setting the slot. */
-	smp_store_release(&roll->next_head_slot, slot);
-	return size;
-}
-
-/*
- * Delete a spent buffer from a rolling queue and return the next in line.  We
- * don't return the last buffer to keep the pointers independent, but return
- * NULL instead.
- */
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll)
-{
-	struct folio_queue *spent = roll->tail, *next = READ_ONCE(spent->next);
-
-	if (!next)
-		return NULL;
-	next->prev = NULL;
-	netfs_folioq_free(spent, netfs_trace_folioq_delete);
-	roll->tail = next;
-	return next;
-}
-
-/*
- * Clear out a rolling queue.  Folios that have mark 1 set are put.
- */
-void rolling_buffer_clear(struct rolling_buffer *roll)
-{
-	struct folio_batch fbatch;
-	struct folio_queue *p;
-
-	folio_batch_init(&fbatch);
-
-	while ((p = roll->tail)) {
-		roll->tail = p->next;
-		for (int slot = 0; slot < folioq_count(p); slot++) {
-			struct folio *folio = folioq_folio(p, slot);
-
-			if (!folio)
-				continue;
-			if (folioq_is_marked(p, slot)) {
-				trace_netfs_folio(folio, netfs_folio_trace_put);
-				if (!folio_batch_add(&fbatch, folio))
-					folio_batch_release(&fbatch);
-			}
-		}
-
-		netfs_folioq_free(p, netfs_trace_folioq_clear);
-	}
-
-	folio_batch_release(&fbatch);
-}
diff --git a/include/linux/folio_queue.h b/include/linux/folio_queue.h
deleted file mode 100644
index adab609c972e..000000000000
--- a/include/linux/folio_queue.h
+++ /dev/null
@@ -1,282 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Queue of folios definitions
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- *
- * See:
- *
- *	Documentation/core-api/folio_queue.rst
- *
- * for a description of the API.
- */
-
-#ifndef _LINUX_FOLIO_QUEUE_H
-#define _LINUX_FOLIO_QUEUE_H
-
-#include <linux/pagevec.h>
-#include <linux/mm.h>
-
-/*
- * Segment in a queue of running buffers.  Each segment can hold a number of
- * folios and a portion of the queue can be referenced with the ITER_FOLIOQ
- * iterator.  The possibility exists of inserting non-folio elements into the
- * queue (such as gaps).
- *
- * Explicit prev and next pointers are used instead of a list_head to make it
- * easier to add segments to tail and remove them from the head without the
- * need for a lock.
- */
-struct folio_queue {
-	struct folio_batch	vec;		/* Folios in the queue segment */
-	u8			orders[PAGEVEC_SIZE]; /* Order of each folio */
-	struct folio_queue	*next;		/* Next queue segment or NULL */
-	struct folio_queue	*prev;		/* Previous queue segment or NULL */
-	unsigned long		marks;		/* 1-bit mark per folio */
-	unsigned long		marks2;		/* Second 1-bit mark per folio */
-#if PAGEVEC_SIZE > BITS_PER_LONG
-#error marks is not big enough
-#endif
-	unsigned int		rreq_id;
-	unsigned int		debug_id;
-};
-
-/**
- * folioq_init - Initialise a folio queue segment
- * @folioq: The segment to initialise
- * @rreq_id: The request identifier to use in tracelines.
- *
- * Initialise a folio queue segment and set an identifier to be used in traces.
- *
- * Note that the folio pointers are left uninitialised.
- */
-static inline void folioq_init(struct folio_queue *folioq, unsigned int rreq_id)
-{
-	folio_batch_init(&folioq->vec);
-	folioq->next = NULL;
-	folioq->prev = NULL;
-	folioq->marks = 0;
-	folioq->marks2 = 0;
-	folioq->rreq_id = rreq_id;
-	folioq->debug_id = 0;
-}
-
-/**
- * folioq_nr_slots: Query the capacity of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that a particular folio queue segment might hold.
- * [!] NOTE: This must not be assumed to be the same for every segment!
- */
-static inline unsigned int folioq_nr_slots(const struct folio_queue *folioq)
-{
-	return PAGEVEC_SIZE;
-}
-
-/**
- * folioq_count: Query the occupancy of a folio queue segment
- * @folioq: The segment to query
- *
- * Query the number of folios that have been added to a folio queue segment.
- * Note that this is not decreased as folios are removed from a segment.
- */
-static inline unsigned int folioq_count(struct folio_queue *folioq)
-{
-	return folio_batch_count(&folioq->vec);
-}
-
-/**
- * folioq_full: Query if a folio queue segment is full
- * @folioq: The segment to query
- *
- * Query if a folio queue segment is fully occupied.  Note that this does not
- * change if folios are removed from a segment.
- */
-static inline bool folioq_full(struct folio_queue *folioq)
-{
-	//return !folio_batch_space(&folioq->vec);
-	return folioq_count(folioq) >= folioq_nr_slots(folioq);
-}
-
-/**
- * folioq_is_marked: Check first folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the first mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked(const struct folio_queue *folioq, unsigned int slot)
-{
-	return test_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_mark: Set the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark(struct folio_queue *folioq, unsigned int slot)
-{
-	set_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_unmark: Clear the first mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the first mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark(struct folio_queue *folioq, unsigned int slot)
-{
-	clear_bit(slot, &folioq->marks);
-}
-
-/**
- * folioq_is_marked2: Check second folio mark in a folio queue segment
- * @folioq: The segment to query
- * @slot: The slot number of the folio to query
- *
- * Determine if the second mark is set for the folio in the specified slot in a
- * folio queue segment.
- */
-static inline bool folioq_is_marked2(const struct folio_queue *folioq, unsigned int slot)
-{
-	return test_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_mark2: Set the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Set the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_mark2(struct folio_queue *folioq, unsigned int slot)
-{
-	set_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_unmark2: Clear the second mark on a folio in a folio queue segment
- * @folioq: The segment to modify
- * @slot: The slot number of the folio to modify
- *
- * Clear the second mark for the folio in the specified slot in a folio queue
- * segment.
- */
-static inline void folioq_unmark2(struct folio_queue *folioq, unsigned int slot)
-{
-	clear_bit(slot, &folioq->marks2);
-}
-
-/**
- * folioq_append: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue and the marks are left
- * unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append(struct folio_queue *folioq, struct folio *folio)
-{
-	unsigned int slot = folioq->vec.nr++;
-
-	folioq->vec.folios[slot] = folio;
-	folioq->orders[slot] = folio_order(folio);
-	return slot;
-}
-
-/**
- * folioq_append_mark: Add a folio to a folio queue segment
- * @folioq: The segment to add to
- * @folio: The folio to add
- *
- * Add a folio to the tail of the sequence in a folio queue segment, increasing
- * the occupancy count and returning the slot number for the folio just added.
- * The folio size is extracted and stored in the queue, the first mark is set
- * and the second mark is left unmodified.
- *
- * Note that it's left up to the caller to check that the segment capacity will
- * not be exceeded and to extend the queue.
- */
-static inline unsigned int folioq_append_mark(struct folio_queue *folioq, struct folio *folio)
-{
-	unsigned int slot = folioq->vec.nr++;
-
-	folioq->vec.folios[slot] = folio;
-	folioq->orders[slot] = folio_order(folio);
-	folioq_mark(folioq, slot);
-	return slot;
-}
-
-/**
- * folioq_folio: Get a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the folio in the specified slot from a folio queue segment.  Note
- * that no bounds check is made and if the slot hasn't been added into yet, the
- * pointer will be undefined.  If the slot has been cleared, NULL will be
- * returned.
- */
-static inline struct folio *folioq_folio(const struct folio_queue *folioq, unsigned int slot)
-{
-	return folioq->vec.folios[slot];
-}
-
-/**
- * folioq_folio_order: Get the order of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the order of the folio in the specified slot from a folio queue
- * segment.  Note that no bounds check is made and if the slot hasn't been
- * added into yet, the order returned will be 0.
- */
-static inline unsigned int folioq_folio_order(const struct folio_queue *folioq, unsigned int slot)
-{
-	return folioq->orders[slot];
-}
-
-/**
- * folioq_folio_size: Get the size of a folio from a folio queue segment
- * @folioq: The segment to access
- * @slot: The folio slot to access
- *
- * Retrieve the size of the folio in the specified slot from a folio queue
- * segment.  Note that no bounds check is made and if the slot hasn't been
- * added into yet, the size returned will be PAGE_SIZE.
- */
-static inline size_t folioq_folio_size(const struct folio_queue *folioq, unsigned int slot)
-{
-	return PAGE_SIZE << folioq_folio_order(folioq, slot);
-}
-
-/**
- * folioq_clear: Clear a folio from a folio queue segment
- * @folioq: The segment to clear
- * @slot: The folio slot to clear
- *
- * Clear a folio from a sequence in a folio queue segment and clear its marks.
- * The occupancy count is left unchanged.
- */
-static inline void folioq_clear(struct folio_queue *folioq, unsigned int slot)
-{
-	folioq->vec.folios[slot] = NULL;
-	folioq_unmark(folioq, slot);
-	folioq_unmark2(folioq, slot);
-}
-
-#endif /* _LINUX_FOLIO_QUEUE_H */
diff --git a/include/linux/rolling_buffer.h b/include/linux/rolling_buffer.h
deleted file mode 100644
index ac15b1ffdd83..000000000000
--- a/include/linux/rolling_buffer.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/* Rolling buffer of folios
- *
- * Copyright (C) 2024 Red Hat, Inc. All Rights Reserved.
- * Written by David Howells (dhowells@redhat.com)
- */
-
-#ifndef _ROLLING_BUFFER_H
-#define _ROLLING_BUFFER_H
-
-#include <linux/folio_queue.h>
-#include <linux/uio.h>
-
-/*
- * Rolling buffer.  Whilst the buffer is live and in use, folios and folio
- * queue segments can be added to one end by one thread and removed from the
- * other end by another thread.  The buffer isn't allowed to be empty; it must
- * always have at least one folio_queue in it so that neither side has to
- * modify both queue pointers.
- *
- * The iterator in the buffer is extended as buffers are inserted.  It can be
- * snapshotted to use a segment of the buffer.
- */
-struct rolling_buffer {
-	struct folio_queue	*head;		/* Producer's insertion point */
-	struct folio_queue	*tail;		/* Consumer's removal point */
-	struct iov_iter		iter;		/* Iterator tracking what's left in the buffer */
-	u8			next_head_slot;	/* Next slot in ->head */
-	u8			first_tail_slot; /* First slot in ->tail */
-};
-
-/*
- * Snapshot of a rolling buffer.
- */
-struct rolling_buffer_snapshot {
-	struct folio_queue	*curr_folioq;	/* Queue segment in which current folio resides */
-	unsigned char		curr_slot;	/* Folio currently being read */
-	unsigned char		curr_order;	/* Order of folio */
-};
-
-/* Marks to store per-folio in the internal folio_queue structs. */
-#define ROLLBUF_MARK_1	BIT(0)
-#define ROLLBUF_MARK_2	BIT(1)
-
-int rolling_buffer_init(struct rolling_buffer *roll, unsigned int rreq_id,
-			unsigned int direction);
-int rolling_buffer_make_space(struct rolling_buffer *roll);
-ssize_t rolling_buffer_load_from_ra(struct rolling_buffer *roll,
-				    struct readahead_control *ractl,
-				    struct folio_batch *put_batch);
-ssize_t rolling_buffer_append(struct rolling_buffer *roll, struct folio *folio,
-			      unsigned int flags);
-struct folio_queue *rolling_buffer_delete_spent(struct rolling_buffer *roll);
-void rolling_buffer_clear(struct rolling_buffer *roll);
-
-static inline void rolling_buffer_advance(struct rolling_buffer *roll, size_t amount)
-{
-	iov_iter_advance(&roll->iter, amount);
-}
-
-#endif /* _ROLLING_BUFFER_H */

