io-uring.vger.kernel.org archive mirror
* [zcrx-next 00/10] next zcrx cleanups
@ 2025-08-17 22:43 Pavel Begunkov
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Flushing out for review some of the zcrx cleanups I've had sitting
around for a while now. They include consolidating dma sync, optimising
refilling, and using lock guards.

For a full branch with all relevant patches see
https://github.com/isilence/linux.git zcrx/for-next

Pavel Begunkov (10):
  io_uring/zcrx: replace memchar_inv with is_zero
  io_uring/zcrx: use page_pool_unref_and_test()
  io_uring/zcrx: remove extra io_zcrx_drop_netdev
  io_uring/zcrx: rename dma lock
  io_uring/zcrx: protect netdev with pp_lock
  io_uring/zcrx: unify allocation dma sync
  io_uring/zcrx: reduce netmem scope in refill
  io_uring/zcrx: use guards for the refill lock
  io_uring/zcrx: don't adjust free cache space
  io_uring/zcrx: rely on cache size truncation on refill

 io_uring/zcrx.c | 92 ++++++++++++++++++++++---------------------------
 io_uring/zcrx.h |  8 +++--
 2 files changed, 48 insertions(+), 52 deletions(-)

-- 
2.49.0


* [zcrx-next 01/10] io_uring/zcrx: replace memchar_inv with is_zero
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

memchr_inv() reads more ambiguously than mem_is_zero(): it returns a
pointer to the first mismatching byte rather than a plain boolean. Use
the latter for zero checks.
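
For illustration, a sketch of the difference in calling convention (not
part of the patch):

	/* memchr_inv() returns a pointer to the first byte differing
	 * from the given value, or NULL if none does, so the zero
	 * check reads inverted:
	 */
	if (memchr_inv(&reg.__resv, 0, sizeof(reg.__resv)))
		return -EINVAL;	/* non-NULL: found a non-zero byte */

	/* mem_is_zero() returns a plain bool: */
	if (!mem_is_zero(&reg.__resv, sizeof(reg.__resv)))
		return -EINVAL;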

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index e5ff49f3425e..66bede4f8f44 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -561,7 +561,7 @@ int io_register_zcrx_ifq(struct io_ring_ctx *ctx,
 		return -EFAULT;
 	if (copy_from_user(&rd, u64_to_user_ptr(reg.region_ptr), sizeof(rd)))
 		return -EFAULT;
-	if (memchr_inv(&reg.__resv, 0, sizeof(reg.__resv)) ||
+	if (!mem_is_zero(&reg.__resv, sizeof(reg.__resv)) ||
 	    reg.__resv2 || reg.zcrx_id)
 		return -EINVAL;
 	if (reg.if_rxq == -1 || !reg.rq_entries || reg.flags)
-- 
2.49.0


* [zcrx-next 02/10] io_uring/zcrx: use page_pool_unref_and_test()
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

page_pool_unref_and_test() follows the usual refcount semantics more
closely than page_pool_unref_netmem(), so use it instead.
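
For illustration, a sketch of the two calling conventions (not part of
the patch):

	/* page_pool_unref_netmem() returns the remaining reference
	 * count and leaves the comparison to the caller:
	 */
	if (page_pool_unref_netmem(netmem, 1) != 0)
		continue;	/* not the last reference */

	/* page_pool_unref_and_test() mirrors refcount_dec_and_test()
	 * and returns true when the last reference was dropped:
	 */
	if (!page_pool_unref_and_test(netmem))
		continue;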

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 66bede4f8f44..bd8b3ce7d589 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -785,7 +785,7 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 			continue;
 
 		netmem = net_iov_to_netmem(niov);
-		if (page_pool_unref_netmem(netmem, 1) != 0)
+		if (!page_pool_unref_and_test(netmem))
 			continue;
 
 		if (unlikely(niov->pp != pp)) {
-- 
2.49.0


* [zcrx-next 03/10] io_uring/zcrx: remove extra io_zcrx_drop_netdev
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

io_close_queue() already detaches the netdev, so there is no need to
call io_zcrx_drop_netdev() right after.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index bd8b3ce7d589..ba0c51feb126 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -513,7 +513,6 @@ static void io_close_queue(struct io_zcrx_ifq *ifq)
 static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 {
 	io_close_queue(ifq);
-	io_zcrx_drop_netdev(ifq);
 
 	if (ifq->area)
 		io_zcrx_free_area(ifq->area);
-- 
2.49.0


* [zcrx-next 04/10] io_uring/zcrx: rename dma lock
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

In preparation for reusing the lock for other purposes, rename it to
"pp_lock". As before, it can be taken deeper inside the networking stack
by the page pool, so io_uring must avoid holding it on the syscall side
while doing queue reconfiguration or anything else that can result in
immediate pp init/destruction.
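
A sketch of the ordering constraint this implies, for illustration only:

	/*
	 * Fine: the page pool takes pp_lock from deeper in the stack,
	 * e.g. pp init/destroy -> memory provider callback -> pp_lock.
	 *
	 * Not fine: holding pp_lock on the syscall side around anything
	 * that can synchronously create or destroy a page pool:
	 *
	 *	mutex_lock(&ifq->pp_lock);
	 *	net_mp_close_rxq(netdev, ifq->if_rxq, &p);
	 *		-> pp destruction -> pp_lock	(deadlock)
	 *	mutex_unlock(&ifq->pp_lock);
	 */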

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 8 ++++----
 io_uring/zcrx.h | 7 ++++++-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index ba0c51feb126..e664107221de 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -247,7 +247,7 @@ static void io_zcrx_unmap_area(struct io_zcrx_ifq *ifq,
 {
 	int i;
 
-	guard(mutex)(&ifq->dma_lock);
+	guard(mutex)(&ifq->pp_lock);
 	if (!area->is_mapped)
 		return;
 	area->is_mapped = false;
@@ -278,7 +278,7 @@ static int io_zcrx_map_area(struct io_zcrx_ifq *ifq, struct io_zcrx_area *area)
 {
 	int ret;
 
-	guard(mutex)(&ifq->dma_lock);
+	guard(mutex)(&ifq->pp_lock);
 	if (area->is_mapped)
 		return 0;
 
@@ -471,7 +471,7 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 	ifq->ctx = ctx;
 	spin_lock_init(&ifq->lock);
 	spin_lock_init(&ifq->rq_lock);
-	mutex_init(&ifq->dma_lock);
+	mutex_init(&ifq->pp_lock);
 	return ifq;
 }
 
@@ -520,7 +520,7 @@ static void io_zcrx_ifq_free(struct io_zcrx_ifq *ifq)
 		put_device(ifq->dev);
 
 	io_free_rbuf_ring(ifq);
-	mutex_destroy(&ifq->dma_lock);
+	mutex_destroy(&ifq->pp_lock);
 	kfree(ifq);
 }
 
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 109c4ca36434..479dd4b5c1d2 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -54,7 +54,12 @@ struct io_zcrx_ifq {
 	struct net_device		*netdev;
 	netdevice_tracker		netdev_tracker;
 	spinlock_t			lock;
-	struct mutex			dma_lock;
+
+	/*
+	 * Page pool and net configuration lock, can be taken deeper in the
+	 * net stack.
+	 */
+	struct mutex			pp_lock;
 	struct io_mapped_region		region;
 };
 
-- 
2.49.0


* [zcrx-next 05/10] io_uring/zcrx: protect netdev with pp_lock
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Remove ifq->lock and reuse pp_lock to protect the netdev pointer.
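
For reference, the <linux/cleanup.h> primitives used below, in a minimal
sketch:

	/* guard() holds the mutex until the enclosing scope ends: */
	guard(mutex)(&ifq->pp_lock);

	/* scoped_guard() confines it to an explicit block: */
	scoped_guard(mutex, &ifq->pp_lock) {
		/* lock held only inside these braces */
	}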

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 23 +++++++++++------------
 io_uring/zcrx.h |  1 -
 2 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index e664107221de..d8dd4624f8f8 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -469,7 +469,6 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 
 	ifq->if_rxq = -1;
 	ifq->ctx = ctx;
-	spin_lock_init(&ifq->lock);
 	spin_lock_init(&ifq->rq_lock);
 	mutex_init(&ifq->pp_lock);
 	return ifq;
@@ -477,12 +476,12 @@ static struct io_zcrx_ifq *io_zcrx_ifq_alloc(struct io_ring_ctx *ctx)
 
 static void io_zcrx_drop_netdev(struct io_zcrx_ifq *ifq)
 {
-	spin_lock(&ifq->lock);
-	if (ifq->netdev) {
-		netdev_put(ifq->netdev, &ifq->netdev_tracker);
-		ifq->netdev = NULL;
-	}
-	spin_unlock(&ifq->lock);
+	guard(mutex)(&ifq->pp_lock);
+
+	if (!ifq->netdev)
+		return;
+	netdev_put(ifq->netdev, &ifq->netdev_tracker);
+	ifq->netdev = NULL;
 }
 
 static void io_close_queue(struct io_zcrx_ifq *ifq)
@@ -497,11 +496,11 @@ static void io_close_queue(struct io_zcrx_ifq *ifq)
 	if (ifq->if_rxq == -1)
 		return;
 
-	spin_lock(&ifq->lock);
-	netdev = ifq->netdev;
-	netdev_tracker = ifq->netdev_tracker;
-	ifq->netdev = NULL;
-	spin_unlock(&ifq->lock);
+	scoped_guard(mutex, &ifq->pp_lock) {
+		netdev = ifq->netdev;
+		netdev_tracker = ifq->netdev_tracker;
+		ifq->netdev = NULL;
+	}
 
 	if (netdev) {
 		net_mp_close_rxq(netdev, ifq->if_rxq, &p);
diff --git a/io_uring/zcrx.h b/io_uring/zcrx.h
index 479dd4b5c1d2..f6a9ecf3e08a 100644
--- a/io_uring/zcrx.h
+++ b/io_uring/zcrx.h
@@ -53,7 +53,6 @@ struct io_zcrx_ifq {
 	struct device			*dev;
 	struct net_device		*netdev;
 	netdevice_tracker		netdev_tracker;
-	spinlock_t			lock;
 
 	/*
 	 * Page pool and net configuration lock, can be taken deeper in the
-- 
2.49.0


* [zcrx-next 06/10] io_uring/zcrx: unify allocation dma sync
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Move niov dma sync'ing during page pool allocation out of the
spinlocked sections, i.e. rq_lock for ring refilling and freelist_lock
for slow path allocation. Consolidate it in one common place, which also
allows a further optimisation: checking dma_dev_need_sync() only once
per batch.
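
In other words, a sketch of the before and after shape of the allocation
path:

	/* before: sync under rq_lock / freelist_lock, with a
	 * dma_dev_need_sync() check repeated for every niov
	 */
	io_zcrx_sync_for_device(pp, niov);

	/* after: one batched pass over pp->alloc.cache outside the
	 * locks, with a single dma_dev_need_sync() check
	 */
	io_sync_allocated_niovs(pp);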

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index d8dd4624f8f8..555d4d9ff479 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -292,21 +292,6 @@ static int io_zcrx_map_area(struct io_zcrx_ifq *ifq, struct io_zcrx_area *area)
 	return ret;
 }
 
-static void io_zcrx_sync_for_device(const struct page_pool *pool,
-				    struct net_iov *niov)
-{
-#if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
-	dma_addr_t dma_addr;
-
-	if (!dma_dev_need_sync(pool->p.dev))
-		return;
-
-	dma_addr = page_pool_get_dma_addr_netmem(net_iov_to_netmem(niov));
-	__dma_sync_single_for_device(pool->p.dev, dma_addr + pool->p.offset,
-				     PAGE_SIZE, pool->p.dma_dir);
-#endif
-}
-
 #define IO_RQ_MAX_ENTRIES		32768
 
 #define IO_SKBS_PER_CALL_LIMIT	20
@@ -791,7 +776,6 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 			continue;
 		}
 
-		io_zcrx_sync_for_device(pp, niov);
 		net_mp_netmem_place_in_cache(pp, netmem);
 	} while (--entries);
 
@@ -806,15 +790,31 @@ static void io_zcrx_refill_slow(struct page_pool *pp, struct io_zcrx_ifq *ifq)
 	spin_lock_bh(&area->freelist_lock);
 	while (area->free_count && pp->alloc.count < PP_ALLOC_CACHE_REFILL) {
 		struct net_iov *niov = __io_zcrx_get_free_niov(area);
-		netmem_ref netmem = net_iov_to_netmem(niov);
 
 		net_mp_niov_set_page_pool(pp, niov);
-		io_zcrx_sync_for_device(pp, niov);
-		net_mp_netmem_place_in_cache(pp, netmem);
+		net_mp_netmem_place_in_cache(pp, net_iov_to_netmem(niov));
 	}
 	spin_unlock_bh(&area->freelist_lock);
 }
 
+static void io_sync_allocated_niovs(struct page_pool *pp)
+{
+#if defined(CONFIG_HAS_DMA) && defined(CONFIG_DMA_NEED_SYNC)
+	int i;
+
+	if (!dma_dev_need_sync(pp->p.dev))
+		return;
+
+	for (i = 0; i < pp->alloc.count; i++) {
+		netmem_ref netmem = pp->alloc.cache[i];
+		dma_addr_t dma_addr = page_pool_get_dma_addr_netmem(netmem);
+
+		__dma_sync_single_for_device(pp->p.dev, dma_addr + pp->p.offset,
+					     PAGE_SIZE, pp->p.dma_dir);
+	}
+#endif
+}
+
 static netmem_ref io_pp_zc_alloc_netmems(struct page_pool *pp, gfp_t gfp)
 {
 	struct io_zcrx_ifq *ifq = io_pp_to_ifq(pp);
@@ -831,6 +831,7 @@ static netmem_ref io_pp_zc_alloc_netmems(struct page_pool *pp, gfp_t gfp)
 	if (!pp->alloc.count)
 		return 0;
 out_return:
+	io_sync_allocated_niovs(pp);
 	return pp->alloc.cache[--pp->alloc.count];
 }
 
-- 
2.49.0


* [zcrx-next 07/10] io_uring/zcrx: reduce netmem scope in refill
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Reduce the scope of the local variable netmem in io_zcrx_ring_refill().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 555d4d9ff479..44e6a0cb7916 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -735,7 +735,6 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 {
 	unsigned int mask = ifq->rq_entries - 1;
 	unsigned int entries;
-	netmem_ref netmem;
 
 	spin_lock_bh(&ifq->rq_lock);
 
@@ -751,6 +750,7 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 		struct io_zcrx_area *area;
 		struct net_iov *niov;
 		unsigned niov_idx, area_idx;
+		netmem_ref netmem;
 
 		area_idx = rqe->off >> IORING_ZCRX_AREA_SHIFT;
 		niov_idx = (rqe->off & ~IORING_ZCRX_AREA_MASK) >> PAGE_SHIFT;
-- 
2.49.0


* [zcrx-next 08/10] io_uring/zcrx: use guards for the refill lock
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

Use guards for rq_lock in io_zcrx_ring_refill(); it makes the function
a tad simpler.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 44e6a0cb7916..a235ef2f852a 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -736,14 +736,12 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 	unsigned int mask = ifq->rq_entries - 1;
 	unsigned int entries;
 
-	spin_lock_bh(&ifq->rq_lock);
+	guard(spinlock_bh)(&ifq->rq_lock);
 
 	entries = io_zcrx_rqring_entries(ifq);
 	entries = min_t(unsigned, entries, PP_ALLOC_CACHE_REFILL - pp->alloc.count);
-	if (unlikely(!entries)) {
-		spin_unlock_bh(&ifq->rq_lock);
+	if (unlikely(!entries))
 		return;
-	}
 
 	do {
 		struct io_uring_zcrx_rqe *rqe = io_zcrx_get_rqe(ifq, mask);
@@ -780,7 +778,6 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 	} while (--entries);
 
 	smp_store_release(&ifq->rq_ring->head, ifq->cached_rq_head);
-	spin_unlock_bh(&ifq->rq_lock);
 }
 
 static void io_zcrx_refill_slow(struct page_pool *pp, struct io_zcrx_ifq *ifq)
-- 
2.49.0


* [zcrx-next 09/10] io_uring/zcrx: don't adjust free cache space
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

The cache should be empty when io_pp_zc_alloc_netmems() is called;
that's promised by the page pool and additionally checked, so there is
no need to recalculate the available space in io_zcrx_ring_refill().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index a235ef2f852a..6d6b09b932d2 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -739,7 +739,7 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 	guard(spinlock_bh)(&ifq->rq_lock);
 
 	entries = io_zcrx_rqring_entries(ifq);
-	entries = min_t(unsigned, entries, PP_ALLOC_CACHE_REFILL - pp->alloc.count);
+	entries = min_t(unsigned, entries, PP_ALLOC_CACHE_REFILL);
 	if (unlikely(!entries))
 		return;
 
-- 
2.49.0


* [zcrx-next 10/10] io_uring/zcrx: rely on cache size truncation on refill
From: Pavel Begunkov @ 2025-08-17 22:43 UTC (permalink / raw)
  To: io-uring; +Cc: asml.silence

The return value of io_zcrx_rqring_entries() is truncated to
rq_entries so that refilling doesn't loop indefinitely. However,
io_zcrx_ring_refill() is already protected against that: it iterates no
more than PP_ALLOC_CACHE_REFILL times. Remove the truncation.
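
For context, a sketch of the arithmetic: the ring indices are
free-running u32 counters, so unsigned subtraction counts unconsumed
entries correctly even across wraparound, but a bogus userspace tail can
make the result huge:

	/* e.g. tail == 3, cached_rq_head == UINT_MAX - 1:
	 * 3 - (UINT_MAX - 1) == 5 (mod 2^32), as expected;
	 * a corrupted tail could instead yield ~4 billion "entries".
	 * The min() below caps the loop at PP_ALLOC_CACHE_REFILL
	 * iterations regardless:
	 */
	entries = min(PP_ALLOC_CACHE_REFILL, io_zcrx_rqring_entries(ifq));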

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 io_uring/zcrx.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c
index 6d6b09b932d2..859bb5f54892 100644
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -716,10 +716,7 @@ void io_shutdown_zcrx_ifqs(struct io_ring_ctx *ctx)
 
 static inline u32 io_zcrx_rqring_entries(struct io_zcrx_ifq *ifq)
 {
-	u32 entries;
-
-	entries = smp_load_acquire(&ifq->rq_ring->tail) - ifq->cached_rq_head;
-	return min(entries, ifq->rq_entries);
+	return smp_load_acquire(&ifq->rq_ring->tail) - ifq->cached_rq_head;
 }
 
 static struct io_uring_zcrx_rqe *io_zcrx_get_rqe(struct io_zcrx_ifq *ifq,
@@ -738,8 +735,7 @@ static void io_zcrx_ring_refill(struct page_pool *pp,
 
 	guard(spinlock_bh)(&ifq->rq_lock);
 
-	entries = io_zcrx_rqring_entries(ifq);
-	entries = min_t(unsigned, entries, PP_ALLOC_CACHE_REFILL);
+	entries = min(PP_ALLOC_CACHE_REFILL, io_zcrx_rqring_entries(ifq));
 	if (unlikely(!entries))
 		return;
 
-- 
2.49.0


* Re: [zcrx-next 00/10] next zcrx cleanups
From: Jens Axboe @ 2025-08-20 18:20 UTC (permalink / raw)
  To: io-uring, Pavel Begunkov


On Sun, 17 Aug 2025 23:43:26 +0100, Pavel Begunkov wrote:
> Flushing for review some of zcrx cleanups I had for a while now. This
> includes consolidating dma sync, optimising refilling, and using lock
> guards.
> 
> For a full branch with all relevant patches see
> https://github.com/isilence/linux.git zcrx/for-next
> 
> [...]

Applied, thanks!

[01/10] io_uring/zcrx: replace memchar_inv with is_zero
        commit: 4c7956ee3605011e3c868a76aa5f5b8808380f8e
[02/10] io_uring/zcrx: use page_pool_unref_and_test()
        commit: 32c02734932ac71eacdd0d799f136c0dcaf4a01c
[03/10] io_uring/zcrx: remove extra io_zcrx_drop_netdev
        commit: 9f3cfd0fbe4cd2a26e43f1dddefa26a62dceb929
[04/10] io_uring/zcrx: rename dma lock
        commit: 4857be683fe694be4367ae18873e8c18533cfbe0
[05/10] io_uring/zcrx: protect netdev with pp_lock
        commit: 4e913a5d41c112979d67d35ffbefb5bddd3ff3a7
[06/10] io_uring/zcrx: unify allocation dma sync
        commit: 5a3b2e8a4d0483073dda20c0b5fdf8f6545bb277
[07/10] io_uring/zcrx: reduce netmem scope in refill
        commit: a4bb98933134b5da7cefb20a668c22ea018921c5
[08/10] io_uring/zcrx: use guards for the refill lock
        commit: fdceaa88004a091461180045b084856aadfcce8d
[09/10] io_uring/zcrx: don't adjust free cache space
        commit: 5f55fe9517f23692f622eb24e7f4449fdc2a8359
[10/10] io_uring/zcrx: rely on cache size truncation on refill
        commit: 093964677b89452896b8566c47d9af8e0f8fd8df

Best regards,
-- 
Jens Axboe



