* [PATCH v2 0/4] sunrpc: cache infrastructure scalability improvements
From: Jeff Layton @ 2026-02-23 17:09 UTC
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
Trond Myklebust, Anna Schumaker
Cc: linux-nfs, linux-kernel, Jeff Layton, NeilBrown
The first patch fixes a pre-existing bug that Neil spotted during the
review of v1. The next two patches convert the global spinlock and
waitqueue to be per-cache_detail instead.
The last patch splits cache_detail->queue into two lists: one for
cache_readers and one for cache_requests. This simplifies the code, and
the new sequence number that readers use to track their position may
also help with implementing netlink upcalls.
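For orientation, the channel-related fields of struct cache_detail end
up looking roughly like this after the series (abbreviated from patch 4;
the comments here are mine, not from the patches):

    /* fields for communication over channel */
    struct list_head requests;      /* queued upcall requests */
    struct list_head readers;       /* open readers of the channel file */
    spinlock_t queue_lock;          /* protects both lists */
    wait_queue_head_t queue_wait;   /* woken when an upcall is queued */
    u64 next_seqno;                 /* seqno assigned to the next request */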
Please consider these for v7.1.
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
Changes in v2:
- Don't spinlock around rp->next_seqno updates
- Fix potential cache_request leak in cache_release()
- Link to v1: https://lore.kernel.org/r/20260220-sunrpc-cache-v1-0-47d04014c245@kernel.org
---
Jeff Layton (4):
sunrpc: fix cache_request leak in cache_release
sunrpc: convert queue_lock from global spinlock to per-cache-detail lock
sunrpc: convert queue_wait from global to per-cache-detail waitqueue
sunrpc: split cache_detail queue into request and reader lists
include/linux/sunrpc/cache.h | 7 +-
net/sunrpc/cache.c | 189 ++++++++++++++++++++-----------------------
2 files changed, 95 insertions(+), 101 deletions(-)
---
base-commit: 8fd7d969255c89fb28cd9f34e0d729150da79d68
change-id: 20260220-sunrpc-cache-fe4cd44413d3
Best regards,
--
Jeff Layton <jlayton@kernel.org>
* [PATCH v2 1/4] sunrpc: fix cache_request leak in cache_release
From: Jeff Layton @ 2026-02-23 17:09 UTC
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
Trond Myklebust, Anna Schumaker
Cc: linux-nfs, linux-kernel, Jeff Layton, NeilBrown
When a reader's file descriptor is closed while in the middle of reading
a cache_request (rp->offset != 0), cache_release() decrements the
request's readers count but never checks whether it should free the
request.
In cache_read(), when readers drops to 0 and CACHE_PENDING is clear, the
cache_request is removed from the queue and freed along with its buffer
and cache_head reference. cache_release() lacks this cleanup.
The only other path that frees requests with readers == 0 is
cache_dequeue(), but it runs only when CACHE_PENDING transitions from
set to clear. If that transition already happened while readers was
still non-zero, cache_dequeue() will have skipped the request, and no
subsequent call will clean it up.
Add the same cleanup logic from cache_read() to cache_release(): after
decrementing readers, check if it reached 0 with CACHE_PENDING clear,
and if so, dequeue and free the cache_request.
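In outline, the release path now performs the same check that
cache_read() does (the full diff follows below):

    cr->readers--;
    if (cr->readers == 0 &&
        !test_bit(CACHE_PENDING, &cr->item->flags)) {
        /* last reader gone and no longer pending: dequeue it */
        list_del(&cr->q.list);
        rq = cr;    /* cache_put/kfree'd after dropping the lock */
    }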
Reported-by: NeilBrown <neilb@ownmail.net>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
net/sunrpc/cache.c | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index b82f7cde0c9be6071ee4040150672872e548161d..86b3fd5a429d77f7f917f398a02cb7a5ff8dd1e0 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -1062,14 +1062,25 @@ static int cache_release(struct inode *inode, struct file *filp,
struct cache_reader *rp = filp->private_data;
if (rp) {
+ struct cache_request *rq = NULL;
+
spin_lock(&queue_lock);
if (rp->offset) {
struct cache_queue *cq;
- for (cq= &rp->q; &cq->list != &cd->queue;
- cq = list_entry(cq->list.next, struct cache_queue, list))
+ for (cq = &rp->q; &cq->list != &cd->queue;
+ cq = list_entry(cq->list.next,
+ struct cache_queue, list))
if (!cq->reader) {
- container_of(cq, struct cache_request, q)
- ->readers--;
+ struct cache_request *cr =
+ container_of(cq,
+ struct cache_request, q);
+ cr->readers--;
+ if (cr->readers == 0 &&
+ !test_bit(CACHE_PENDING,
+ &cr->item->flags)) {
+ list_del(&cr->q.list);
+ rq = cr;
+ }
break;
}
rp->offset = 0;
@@ -1077,9 +1088,14 @@ static int cache_release(struct inode *inode, struct file *filp,
list_del(&rp->q.list);
spin_unlock(&queue_lock);
+ if (rq) {
+ cache_put(rq->item, cd);
+ kfree(rq->buf);
+ kfree(rq);
+ }
+
filp->private_data = NULL;
kfree(rp);
-
}
if (filp->f_mode & FMODE_WRITE) {
atomic_dec(&cd->writers);
--
2.53.0
* [PATCH v2 2/4] sunrpc: convert queue_lock from global spinlock to per-cache-detail lock
From: Jeff Layton @ 2026-02-23 17:09 UTC
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
Trond Myklebust, Anna Schumaker
Cc: linux-nfs, linux-kernel, Jeff Layton
The global queue_lock serializes all upcall queue operations across
every cache_detail instance. Convert it to a per-cache-detail spinlock
so that different caches (e.g. auth.unix.ip vs nfsd.fh) no longer
contend with each other on queue operations.
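The conversion is mechanical; every queue operation moves from the
single file-scoped lock to the lock embedded in the cache_detail being
operated on, e.g.:

    -	spin_lock(&queue_lock);		/* shared by all caches */
    +	spin_lock(&cd->queue_lock);	/* private to this cache */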
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
include/linux/sunrpc/cache.h | 1 +
net/sunrpc/cache.c | 47 ++++++++++++++++++++++----------------------
2 files changed, 24 insertions(+), 24 deletions(-)
diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
index e783132e481ff2593fdc5d323f7b3a08f85d4cd8..3d32dd1f7b05d35562d2064fed69877b3950fb51 100644
--- a/include/linux/sunrpc/cache.h
+++ b/include/linux/sunrpc/cache.h
@@ -113,6 +113,7 @@ struct cache_detail {
/* fields for communication over channel */
struct list_head queue;
+ spinlock_t queue_lock;
atomic_t writers; /* how many time is /channel open */
time64_t last_close; /* if no writers, when did last close */
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 86b3fd5a429d77f7f917f398a02cb7a5ff8dd1e0..1cfaae488c6c67a9797511804e4bbba16bcc70ae 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -400,6 +400,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
{
spin_lock_init(&cd->hash_lock);
INIT_LIST_HEAD(&cd->queue);
+ spin_lock_init(&cd->queue_lock);
spin_lock(&cache_list_lock);
cd->nextcheck = 0;
cd->entries = 0;
@@ -803,8 +804,6 @@ void cache_clean_deferred(void *owner)
*
*/
-static DEFINE_SPINLOCK(queue_lock);
-
struct cache_queue {
struct list_head list;
int reader; /* if 0, then request */
@@ -847,7 +846,7 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
inode_lock(inode); /* protect against multiple concurrent
* readers on this file */
again:
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
/* need to find next request */
while (rp->q.list.next != &cd->queue &&
list_entry(rp->q.list.next, struct cache_queue, list)
@@ -856,7 +855,7 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
list_move(&rp->q.list, next);
}
if (rp->q.list.next == &cd->queue) {
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
inode_unlock(inode);
WARN_ON_ONCE(rp->offset);
return 0;
@@ -865,7 +864,7 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
WARN_ON_ONCE(rq->q.reader);
if (rp->offset == 0)
rq->readers++;
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
if (rq->len == 0) {
err = cache_request(cd, rq);
@@ -876,9 +875,9 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
if (rp->offset == 0 && !test_bit(CACHE_PENDING, &rq->item->flags)) {
err = -EAGAIN;
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
list_move(&rp->q.list, &rq->q.list);
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
} else {
if (rp->offset + count > rq->len)
count = rq->len - rp->offset;
@@ -888,26 +887,26 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
rp->offset += count;
if (rp->offset >= rq->len) {
rp->offset = 0;
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
list_move(&rp->q.list, &rq->q.list);
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
}
err = 0;
}
out:
if (rp->offset == 0) {
/* need to release rq */
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
rq->readers--;
if (rq->readers == 0 &&
!test_bit(CACHE_PENDING, &rq->item->flags)) {
list_del(&rq->q.list);
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
cache_put(rq->item, cd);
kfree(rq->buf);
kfree(rq);
} else
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
}
if (err == -EAGAIN)
goto again;
@@ -988,7 +987,7 @@ static __poll_t cache_poll(struct file *filp, poll_table *wait,
if (!rp)
return mask;
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
for (cq= &rp->q; &cq->list != &cd->queue;
cq = list_entry(cq->list.next, struct cache_queue, list))
@@ -996,7 +995,7 @@ static __poll_t cache_poll(struct file *filp, poll_table *wait,
mask |= EPOLLIN | EPOLLRDNORM;
break;
}
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
return mask;
}
@@ -1011,7 +1010,7 @@ static int cache_ioctl(struct inode *ino, struct file *filp,
if (cmd != FIONREAD || !rp)
return -EINVAL;
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
/* only find the length remaining in current request,
* or the length of the next request
@@ -1024,7 +1023,7 @@ static int cache_ioctl(struct inode *ino, struct file *filp,
len = cr->len - rp->offset;
break;
}
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
return put_user(len, (int __user *)arg);
}
@@ -1046,9 +1045,9 @@ static int cache_open(struct inode *inode, struct file *filp,
rp->offset = 0;
rp->q.reader = 1;
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
list_add(&rp->q.list, &cd->queue);
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
}
if (filp->f_mode & FMODE_WRITE)
atomic_inc(&cd->writers);
@@ -1064,7 +1063,7 @@ static int cache_release(struct inode *inode, struct file *filp,
if (rp) {
struct cache_request *rq = NULL;
- spin_lock(&queue_lock);
+ spin_lock(&cd->queue_lock);
if (rp->offset) {
struct cache_queue *cq;
for (cq = &rp->q; &cq->list != &cd->queue;
@@ -1086,7 +1085,7 @@ static int cache_release(struct inode *inode, struct file *filp,
rp->offset = 0;
}
list_del(&rp->q.list);
- spin_unlock(&queue_lock);
+ spin_unlock(&cd->queue_lock);
if (rq) {
cache_put(rq->item, cd);
@@ -1113,7 +1112,7 @@ static void cache_dequeue(struct cache_detail *detail, struct cache_head *ch)
struct cache_request *cr;
LIST_HEAD(dequeued);
- spin_lock(&queue_lock);
+ spin_lock(&detail->queue_lock);
list_for_each_entry_safe(cq, tmp, &detail->queue, list)
if (!cq->reader) {
cr = container_of(cq, struct cache_request, q);
@@ -1126,7 +1125,7 @@ static void cache_dequeue(struct cache_detail *detail, struct cache_head *ch)
continue;
list_move(&cr->q.list, &dequeued);
}
- spin_unlock(&queue_lock);
+ spin_unlock(&detail->queue_lock);
while (!list_empty(&dequeued)) {
cr = list_entry(dequeued.next, struct cache_request, q.list);
list_del(&cr->q.list);
@@ -1251,7 +1250,7 @@ static int cache_pipe_upcall(struct cache_detail *detail, struct cache_head *h)
crq->buf = buf;
crq->len = 0;
crq->readers = 0;
- spin_lock(&queue_lock);
+ spin_lock(&detail->queue_lock);
if (test_bit(CACHE_PENDING, &h->flags)) {
crq->item = cache_get(h);
list_add_tail(&crq->q.list, &detail->queue);
@@ -1259,7 +1258,7 @@ static int cache_pipe_upcall(struct cache_detail *detail, struct cache_head *h)
} else
/* Lost a race, no longer PENDING, so don't enqueue */
ret = -EAGAIN;
- spin_unlock(&queue_lock);
+ spin_unlock(&detail->queue_lock);
wake_up(&queue_wait);
if (ret == -EAGAIN) {
kfree(buf);
--
2.53.0
* [PATCH v2 3/4] sunrpc: convert queue_wait from global to per-cache-detail waitqueue
From: Jeff Layton @ 2026-02-23 17:10 UTC
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
Trond Myklebust, Anna Schumaker
Cc: linux-nfs, linux-kernel, Jeff Layton
The queue_wait waitqueue is currently a file-scoped global, so a
wake_up for one cache_detail wakes pollers on all caches. Convert it
to a per-cache-detail field so that only pollers on the relevant cache
are woken.
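As with the lock conversion, the change is mechanical: poll_wait() and
wake_up() now operate on the waitqueue embedded in the cache_detail,
e.g.:

    -	poll_wait(filp, &queue_wait, wait);
    +	poll_wait(filp, &cd->queue_wait, wait);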
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
include/linux/sunrpc/cache.h | 2 ++
net/sunrpc/cache.c | 7 +++----
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
index 3d32dd1f7b05d35562d2064fed69877b3950fb51..031379efba24d40f64ce346cf1032261d4b98d05 100644
--- a/include/linux/sunrpc/cache.h
+++ b/include/linux/sunrpc/cache.h
@@ -16,6 +16,7 @@
#include <linux/atomic.h>
#include <linux/kstrtox.h>
#include <linux/proc_fs.h>
+#include <linux/wait.h>
/*
* Each cache requires:
@@ -114,6 +115,7 @@ struct cache_detail {
/* fields for communication over channel */
struct list_head queue;
spinlock_t queue_lock;
+ wait_queue_head_t queue_wait;
atomic_t writers; /* how many time is /channel open */
time64_t last_close; /* if no writers, when did last close */
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 1cfaae488c6c67a9797511804e4bbba16bcc70ae..fd02dca1f07afec2f09c591037bac3ea3e8d7e17 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -401,6 +401,7 @@ void sunrpc_init_cache_detail(struct cache_detail *cd)
spin_lock_init(&cd->hash_lock);
INIT_LIST_HEAD(&cd->queue);
spin_lock_init(&cd->queue_lock);
+ init_waitqueue_head(&cd->queue_wait);
spin_lock(&cache_list_lock);
cd->nextcheck = 0;
cd->entries = 0;
@@ -970,8 +971,6 @@ static ssize_t cache_write(struct file *filp, const char __user *buf,
return ret;
}
-static DECLARE_WAIT_QUEUE_HEAD(queue_wait);
-
static __poll_t cache_poll(struct file *filp, poll_table *wait,
struct cache_detail *cd)
{
@@ -979,7 +978,7 @@ static __poll_t cache_poll(struct file *filp, poll_table *wait,
struct cache_reader *rp = filp->private_data;
struct cache_queue *cq;
- poll_wait(filp, &queue_wait, wait);
+ poll_wait(filp, &cd->queue_wait, wait);
/* alway allow write */
mask = EPOLLOUT | EPOLLWRNORM;
@@ -1259,7 +1258,7 @@ static int cache_pipe_upcall(struct cache_detail *detail, struct cache_head *h)
/* Lost a race, no longer PENDING, so don't enqueue */
ret = -EAGAIN;
spin_unlock(&detail->queue_lock);
- wake_up(&queue_wait);
+ wake_up(&detail->queue_wait);
if (ret == -EAGAIN) {
kfree(buf);
kfree(crq);
--
2.53.0
* [PATCH v2 4/4] sunrpc: split cache_detail queue into request and reader lists
From: Jeff Layton @ 2026-02-23 17:10 UTC
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
Trond Myklebust, Anna Schumaker
Cc: linux-nfs, linux-kernel, Jeff Layton
Replace the single interleaved queue (which mixed cache_request and
cache_reader entries distinguished by a ->reader flag) with two
dedicated lists: cd->requests for upcall requests and cd->readers
for open file handles.
Readers now track their position via a monotonically increasing
sequence number (next_seqno) rather than by their position in the
shared list. Each cache_request is assigned a seqno when enqueued,
and a new cache_next_request() helper finds the next request at or
after a given seqno.
This eliminates the cache_queue wrapper struct entirely, simplifies
the reader-skipping loops in cache_read/cache_poll/cache_ioctl/
cache_release, and makes the data flow easier to reason about.
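The position tracking reduces to a simple list walk; this is the helper
added by the patch, reproduced here for overview:

    static struct cache_request *
    cache_next_request(struct cache_detail *cd, u64 seqno)
    {
        struct cache_request *rq;

        list_for_each_entry(rq, &cd->requests, list)
            if (rq->seqno >= seqno)
                return rq;
        return NULL;
    }

A reader that has fully consumed the request with sequence number S sets
rp->next_seqno to S + 1, so its next read picks up at the first request
at or after that number.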
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
include/linux/sunrpc/cache.h | 4 +-
net/sunrpc/cache.c | 143 ++++++++++++++++++-------------------------
2 files changed, 62 insertions(+), 85 deletions(-)
diff --git a/include/linux/sunrpc/cache.h b/include/linux/sunrpc/cache.h
index 031379efba24d40f64ce346cf1032261d4b98d05..b1e595c2615bd4be4d9ad19f71a8f4d08bd74a9b 100644
--- a/include/linux/sunrpc/cache.h
+++ b/include/linux/sunrpc/cache.h
@@ -113,9 +113,11 @@ struct cache_detail {
int entries;
/* fields for communication over channel */
- struct list_head queue;
+ struct list_head requests;
+ struct list_head readers;
spinlock_t queue_lock;
wait_queue_head_t queue_wait;
+ u64 next_seqno;
atomic_t writers; /* how many time is /channel open */
time64_t last_close; /* if no writers, when did last close */
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index fd02dca1f07afec2f09c591037bac3ea3e8d7e17..7081c1214e6c3226f8ac82c8bc7ff6c36f598744 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -399,9 +399,11 @@ static struct delayed_work cache_cleaner;
void sunrpc_init_cache_detail(struct cache_detail *cd)
{
spin_lock_init(&cd->hash_lock);
- INIT_LIST_HEAD(&cd->queue);
+ INIT_LIST_HEAD(&cd->requests);
+ INIT_LIST_HEAD(&cd->readers);
spin_lock_init(&cd->queue_lock);
init_waitqueue_head(&cd->queue_wait);
+ cd->next_seqno = 0;
spin_lock(&cache_list_lock);
cd->nextcheck = 0;
cd->entries = 0;
@@ -796,29 +798,20 @@ void cache_clean_deferred(void *owner)
* On read, you get a full request, or block.
* On write, an update request is processed.
* Poll works if anything to read, and always allows write.
- *
- * Implemented by linked list of requests. Each open file has
- * a ->private that also exists in this list. New requests are added
- * to the end and may wakeup and preceding readers.
- * New readers are added to the head. If, on read, an item is found with
- * CACHE_UPCALLING clear, we free it from the list.
- *
*/
-struct cache_queue {
- struct list_head list;
- int reader; /* if 0, then request */
-};
struct cache_request {
- struct cache_queue q;
+ struct list_head list;
struct cache_head *item;
- char * buf;
+ char *buf;
int len;
int readers;
+ u64 seqno;
};
struct cache_reader {
- struct cache_queue q;
+ struct list_head list;
int offset; /* if non-0, we have a refcnt on next request */
+ u64 next_seqno;
};
static int cache_request(struct cache_detail *detail,
@@ -833,6 +826,17 @@ static int cache_request(struct cache_detail *detail,
return PAGE_SIZE - len;
}
+static struct cache_request *
+cache_next_request(struct cache_detail *cd, u64 seqno)
+{
+ struct cache_request *rq;
+
+ list_for_each_entry(rq, &cd->requests, list)
+ if (rq->seqno >= seqno)
+ return rq;
+ return NULL;
+}
+
static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
loff_t *ppos, struct cache_detail *cd)
{
@@ -849,20 +853,13 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
again:
spin_lock(&cd->queue_lock);
/* need to find next request */
- while (rp->q.list.next != &cd->queue &&
- list_entry(rp->q.list.next, struct cache_queue, list)
- ->reader) {
- struct list_head *next = rp->q.list.next;
- list_move(&rp->q.list, next);
- }
- if (rp->q.list.next == &cd->queue) {
+ rq = cache_next_request(cd, rp->next_seqno);
+ if (!rq) {
spin_unlock(&cd->queue_lock);
inode_unlock(inode);
WARN_ON_ONCE(rp->offset);
return 0;
}
- rq = container_of(rp->q.list.next, struct cache_request, q.list);
- WARN_ON_ONCE(rq->q.reader);
if (rp->offset == 0)
rq->readers++;
spin_unlock(&cd->queue_lock);
@@ -876,9 +873,7 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
if (rp->offset == 0 && !test_bit(CACHE_PENDING, &rq->item->flags)) {
err = -EAGAIN;
- spin_lock(&cd->queue_lock);
- list_move(&rp->q.list, &rq->q.list);
- spin_unlock(&cd->queue_lock);
+ rp->next_seqno = rq->seqno + 1;
} else {
if (rp->offset + count > rq->len)
count = rq->len - rp->offset;
@@ -888,9 +883,7 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
rp->offset += count;
if (rp->offset >= rq->len) {
rp->offset = 0;
- spin_lock(&cd->queue_lock);
- list_move(&rp->q.list, &rq->q.list);
- spin_unlock(&cd->queue_lock);
+ rp->next_seqno = rq->seqno + 1;
}
err = 0;
}
@@ -901,7 +894,7 @@ static ssize_t cache_read(struct file *filp, char __user *buf, size_t count,
rq->readers--;
if (rq->readers == 0 &&
!test_bit(CACHE_PENDING, &rq->item->flags)) {
- list_del(&rq->q.list);
+ list_del(&rq->list);
spin_unlock(&cd->queue_lock);
cache_put(rq->item, cd);
kfree(rq->buf);
@@ -976,7 +969,6 @@ static __poll_t cache_poll(struct file *filp, poll_table *wait,
{
__poll_t mask;
struct cache_reader *rp = filp->private_data;
- struct cache_queue *cq;
poll_wait(filp, &cd->queue_wait, wait);
@@ -988,12 +980,8 @@ static __poll_t cache_poll(struct file *filp, poll_table *wait,
spin_lock(&cd->queue_lock);
- for (cq= &rp->q; &cq->list != &cd->queue;
- cq = list_entry(cq->list.next, struct cache_queue, list))
- if (!cq->reader) {
- mask |= EPOLLIN | EPOLLRDNORM;
- break;
- }
+ if (cache_next_request(cd, rp->next_seqno))
+ mask |= EPOLLIN | EPOLLRDNORM;
spin_unlock(&cd->queue_lock);
return mask;
}
@@ -1004,7 +992,7 @@ static int cache_ioctl(struct inode *ino, struct file *filp,
{
int len = 0;
struct cache_reader *rp = filp->private_data;
- struct cache_queue *cq;
+ struct cache_request *rq;
if (cmd != FIONREAD || !rp)
return -EINVAL;
@@ -1014,14 +1002,9 @@ static int cache_ioctl(struct inode *ino, struct file *filp,
/* only find the length remaining in current request,
* or the length of the next request
*/
- for (cq= &rp->q; &cq->list != &cd->queue;
- cq = list_entry(cq->list.next, struct cache_queue, list))
- if (!cq->reader) {
- struct cache_request *cr =
- container_of(cq, struct cache_request, q);
- len = cr->len - rp->offset;
- break;
- }
+ rq = cache_next_request(cd, rp->next_seqno);
+ if (rq)
+ len = rq->len - rp->offset;
spin_unlock(&cd->queue_lock);
return put_user(len, (int __user *)arg);
@@ -1042,10 +1025,10 @@ static int cache_open(struct inode *inode, struct file *filp,
return -ENOMEM;
}
rp->offset = 0;
- rp->q.reader = 1;
+ rp->next_seqno = 0;
spin_lock(&cd->queue_lock);
- list_add(&rp->q.list, &cd->queue);
+ list_add(&rp->list, &cd->readers);
spin_unlock(&cd->queue_lock);
}
if (filp->f_mode & FMODE_WRITE)
@@ -1064,26 +1047,21 @@ static int cache_release(struct inode *inode, struct file *filp,
spin_lock(&cd->queue_lock);
if (rp->offset) {
- struct cache_queue *cq;
- for (cq = &rp->q; &cq->list != &cd->queue;
- cq = list_entry(cq->list.next,
- struct cache_queue, list))
- if (!cq->reader) {
- struct cache_request *cr =
- container_of(cq,
- struct cache_request, q);
- cr->readers--;
- if (cr->readers == 0 &&
- !test_bit(CACHE_PENDING,
- &cr->item->flags)) {
- list_del(&cr->q.list);
- rq = cr;
- }
- break;
+ struct cache_request *cr;
+
+ cr = cache_next_request(cd, rp->next_seqno);
+ if (cr) {
+ cr->readers--;
+ if (cr->readers == 0 &&
+ !test_bit(CACHE_PENDING,
+ &cr->item->flags)) {
+ list_del(&cr->list);
+ rq = cr;
}
+ }
rp->offset = 0;
}
- list_del(&rp->q.list);
+ list_del(&rp->list);
spin_unlock(&cd->queue_lock);
if (rq) {
@@ -1107,27 +1085,24 @@ static int cache_release(struct inode *inode, struct file *filp,
static void cache_dequeue(struct cache_detail *detail, struct cache_head *ch)
{
- struct cache_queue *cq, *tmp;
- struct cache_request *cr;
+ struct cache_request *cr, *tmp;
LIST_HEAD(dequeued);
spin_lock(&detail->queue_lock);
- list_for_each_entry_safe(cq, tmp, &detail->queue, list)
- if (!cq->reader) {
- cr = container_of(cq, struct cache_request, q);
- if (cr->item != ch)
- continue;
- if (test_bit(CACHE_PENDING, &ch->flags))
- /* Lost a race and it is pending again */
- break;
- if (cr->readers != 0)
- continue;
- list_move(&cr->q.list, &dequeued);
- }
+ list_for_each_entry_safe(cr, tmp, &detail->requests, list) {
+ if (cr->item != ch)
+ continue;
+ if (test_bit(CACHE_PENDING, &ch->flags))
+ /* Lost a race and it is pending again */
+ break;
+ if (cr->readers != 0)
+ continue;
+ list_move(&cr->list, &dequeued);
+ }
spin_unlock(&detail->queue_lock);
while (!list_empty(&dequeued)) {
- cr = list_entry(dequeued.next, struct cache_request, q.list);
- list_del(&cr->q.list);
+ cr = list_entry(dequeued.next, struct cache_request, list);
+ list_del(&cr->list);
cache_put(cr->item, detail);
kfree(cr->buf);
kfree(cr);
@@ -1245,14 +1220,14 @@ static int cache_pipe_upcall(struct cache_detail *detail, struct cache_head *h)
return -EAGAIN;
}
- crq->q.reader = 0;
crq->buf = buf;
crq->len = 0;
crq->readers = 0;
spin_lock(&detail->queue_lock);
if (test_bit(CACHE_PENDING, &h->flags)) {
crq->item = cache_get(h);
- list_add_tail(&crq->q.list, &detail->queue);
+ crq->seqno = detail->next_seqno++;
+ list_add_tail(&crq->list, &detail->requests);
trace_cache_entry_upcall(detail, h);
} else
/* Lost a race, no longer PENDING, so don't enqueue */
--
2.53.0
* Re: [PATCH v2 0/4] sunrpc: cache infrastructure scalability improvements
From: Chuck Lever @ 2026-02-23 21:09 UTC
To: NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
Trond Myklebust, Anna Schumaker, Jeff Layton
Cc: Chuck Lever, linux-nfs, linux-kernel, NeilBrown
From: Chuck Lever <chuck.lever@oracle.com>
On Mon, 23 Feb 2026 12:09:57 -0500, Jeff Layton wrote:
> The first patch fixes a pre-existing bug that Neil spotted during the
> review of v1. The next two patches convert the global spinlock and
> waitqueue to be per-cache_detail instead.
>
> The last patch splits up the cache_detail->queue into two lists: one to
> hold cache_readers and one for cache_requests. This simplifies the code,
> and the new sequence number that helps the readers track position may
> help with implementing netlink upcalls.
>
> [...]
Applied to nfsd-testing, replacing v1. Thanks!
[1/4] sunrpc: fix cache_request leak in cache_release
commit: dad5f78046759eb5c95970198eb9865550eb6227
[2/4] sunrpc: convert queue_lock from global spinlock to per-cache-detail lock
commit: c94ad34b7ecd5928cf3fdb6ea4fcf6ef55765e97
[3/4] sunrpc: convert queue_wait from global to per-cache-detail waitqueue
commit: 951696964e9c370a5f91d5e3e136d39aa08d912c
[4/4] sunrpc: split cache_detail queue into request and reader lists
commit: 3557b9c71039b2435b383fc57283a0b847b40144
--
Chuck Lever