netdev.vger.kernel.org archive mirror
From: Jason Xing <kerneljasonxing@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, bjorn@kernel.org, magnus.karlsson@intel.com,
	maciej.fijalkowski@intel.com, jonathan.lemon@gmail.com,
	sdf@fomichev.me, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, horms@kernel.org,
	andrew+netdev@lunn.ch
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
	Jason Xing <kernelxing@tencent.com>
Subject: [PATCH net-next v3 2/3] xsk: use atomic operations around cached_prod for copy mode
Date: Fri, 28 Nov 2025 21:46:00 +0800	[thread overview]
Message-ID: <20251128134601.54678-3-kerneljasonxing@gmail.com> (raw)
In-Reply-To: <20251128134601.54678-1-kerneljasonxing@gmail.com>

From: Jason Xing <kernelxing@tencent.com>

Replace the spin lock around cached_prod with atomic_try_cmpxchg. A CAS
(compare-and-swap) loop is cheaper than a coarse-grained spinlock when
only a few simple operations need to be protected. A similar idea can be
found in the recent commit 100dfa74cad9 ("net: dev_queue_xmit() llist
adoption"), which implements lockless logic with the help of
try_cmpxchg.

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
Paolo, sorry that I didn't move the lock into struct xsk_queue: after
further investigation I believe try_cmpxchg adds less overhead when
multiple xsks contend at this point, so I hope this approach can be
adopted instead.
---
 net/xdp/xsk.c       |  4 ++--
 net/xdp/xsk_queue.h | 17 ++++++++++++-----
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index bcfd400e9cf8..b63409b1422e 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -551,7 +551,7 @@ static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
 	int ret;
 
 	spin_lock(&pool->cq_cached_prod_lock);
-	ret = xskq_prod_reserve(pool->cq);
+	ret = xsk_cq_cached_prod_reserve(pool->cq);
 	spin_unlock(&pool->cq_cached_prod_lock);
 
 	return ret;
@@ -588,7 +588,7 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
 static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
 {
 	spin_lock(&pool->cq_cached_prod_lock);
-	xskq_prod_cancel_n(pool->cq, n);
+	atomic_sub(n, &pool->cq->cached_prod_atomic);
 	spin_unlock(&pool->cq_cached_prod_lock);
 }
 
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index 44cc01555c0b..7fdc80e624d6 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -402,13 +402,20 @@ static inline void xskq_prod_cancel_n(struct xsk_queue *q, u32 cnt)
 	q->cached_prod -= cnt;
 }
 
-static inline int xskq_prod_reserve(struct xsk_queue *q)
+static inline int xsk_cq_cached_prod_reserve(struct xsk_queue *q)
 {
-	if (xskq_prod_is_full(q))
-		return -ENOSPC;
+	int free_entries;
+	u32 cached_prod;
+
+	do {
+		q->cached_cons = READ_ONCE(q->ring->consumer);
+		cached_prod = atomic_read(&q->cached_prod_atomic);
+		free_entries = q->nentries - (cached_prod - q->cached_cons);
+		if (free_entries <= 0)
+			return -ENOSPC;
+	} while (!atomic_try_cmpxchg(&q->cached_prod_atomic, &cached_prod,
+				     cached_prod + 1));
 
-	/* A, matches D */
-	q->cached_prod++;
 	return 0;
 }
 
-- 
2.41.3



Thread overview: 10+ messages
2025-11-28 13:45 [PATCH net-next v3 0/3] xsk: introduce atomic for cq in generic path Jason Xing
2025-11-28 13:45 ` [PATCH net-next v3 1/3] xsk: add atomic cached_prod for copy mode Jason Xing
2025-11-28 13:46 ` Jason Xing [this message]
2025-11-28 14:20   ` [PATCH net-next v3 2/3] xsk: use atomic operations around " Paolo Abeni
2025-11-29  0:55     ` Jason Xing
2025-12-03  6:56       ` Jason Xing
2025-12-03  9:24         ` Paolo Abeni
2025-12-03  9:40           ` Magnus Karlsson
2025-12-03 11:16             ` Jason Xing
2025-11-28 13:46 ` [PATCH net-next v3 3/3] xsk: remove spin lock protection of cached_prod Jason Xing
