From: Paolo Abeni <pabeni@redhat.com>
To: Jason Xing <kerneljasonxing@gmail.com>,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	bjorn@kernel.org, magnus.karlsson@intel.com,
	maciej.fijalkowski@intel.com, jonathan.lemon@gmail.com,
	sdf@fomichev.me, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, horms@kernel.org,
	andrew+netdev@lunn.ch
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
	Jason Xing <kernelxing@tencent.com>
Subject: Re: [PATCH net-next v3 2/3] xsk: use atomic operations around cached_prod for copy mode
Date: Fri, 28 Nov 2025 15:20:09 +0100	[thread overview]
Message-ID: <8fa70565-0f4a-4a73-a464-5530b2e29fa5@redhat.com> (raw)
In-Reply-To: <20251128134601.54678-3-kerneljasonxing@gmail.com>

On 11/28/25 2:46 PM, Jason Xing wrote:
> From: Jason Xing <kernelxing@tencent.com>
> 
> Use atomic_try_cmpxchg operations to replace the spin lock. Technically,
> CAS (compare-and-swap) is cheaper than a coarse-grained spin lock,
> especially when only a few simple operations need to be protected.
> A similar idea can be found in the recent commit 100dfa74cad9
> ("net: dev_queue_xmit() llist adoption"), which implements the lockless
> logic with the help of try_cmpxchg.
> 
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
> Paolo, sorry that I didn't try moving the lock to struct xsk_queue:
> after investigating, I reckon try_cmpxchg adds less overhead when
> multiple xsks contend at this point. So I hope this approach can be
> adopted.
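
For reference, a minimal sketch of the CAS-based reservation described
above, assuming cached_prod has already been converted to an atomic_t
(patch 1/3); the helper and parameter names are illustrative and not
taken from the posted series:

#include <linux/atomic.h>
#include <linux/types.h>

/* Reserve one producer slot with a CAS loop instead of a spin lock. */
static bool xskq_prod_reserve_cas(atomic_t *cached_prod, u32 cached_cons,
				  u32 nentries)
{
	int prod = atomic_read(cached_prod);

	do {
		/* Ring full: no free slots between cons and prod. */
		if ((u32)prod - cached_cons >= nentries)
			return false;
		/* Lost the race: try_cmpxchg reloaded prod, so retry. */
	} while (!atomic_try_cmpxchg(cached_prod, &prod, prod + 1));

	return true;
}

Contended updates then cost a retry loop rather than lock acquisition,
which is the overhead argument made in the cover note.
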

I still think that moving the lock would be preferable, because it also
makes sense from a maintenance perspective. Can you report the difference
you measured between the atomics and moving the spin lock?

Have you tried moving cq_prod_lock, too?

/P
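
For contrast, a hypothetical sketch of the alternative suggested above:
keep a plain spin lock, but move it into struct xsk_queue, next to the
ring state it protects. The struct layout and names below are
illustrative only and not taken from any posted patch:

#include <linux/spinlock.h>
#include <linux/types.h>

struct xsk_queue_sketch {
	u32		nentries;
	u32		cached_prod;
	u32		cached_cons;
	spinlock_t	prod_lock;	/* protects cached_prod in copy mode */
};

static bool xskq_prod_reserve_locked(struct xsk_queue_sketch *q)
{
	bool ok;

	spin_lock(&q->prod_lock);
	ok = q->cached_prod - q->cached_cons < q->nentries;
	if (ok)
		q->cached_prod++;
	spin_unlock(&q->prod_lock);

	return ok;
}

Either form keeps the fast path short; the open question in this thread
is which behaves better when multiple xsks contend on the same queue.
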



Thread overview: 10+ messages
2025-11-28 13:45 [PATCH net-next v3 0/3] xsk: introduce atomic for cq in generic path Jason Xing
2025-11-28 13:45 ` [PATCH net-next v3 1/3] xsk: add atomic cached_prod for copy mode Jason Xing
2025-11-28 13:46 ` [PATCH net-next v3 2/3] xsk: use atomic operations around " Jason Xing
2025-11-28 14:20   ` Paolo Abeni [this message]
2025-11-29  0:55     ` Jason Xing
2025-12-03  6:56       ` Jason Xing
2025-12-03  9:24         ` Paolo Abeni
2025-12-03  9:40           ` Magnus Karlsson
2025-12-03 11:16             ` Jason Xing
2025-11-28 13:46 ` [PATCH net-next v3 3/3] xsk: remove spin lock protection of cached_prod Jason Xing

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=8fa70565-0f4a-4a73-a464-5530b2e29fa5@redhat.com \
    --to=pabeni@redhat.com \
    --cc=andrew+netdev@lunn.ch \
    --cc=ast@kernel.org \
    --cc=bjorn@kernel.org \
    --cc=bpf@vger.kernel.org \
    --cc=daniel@iogearbox.net \
    --cc=davem@davemloft.net \
    --cc=edumazet@google.com \
    --cc=hawk@kernel.org \
    --cc=horms@kernel.org \
    --cc=john.fastabend@gmail.com \
    --cc=jonathan.lemon@gmail.com \
    --cc=kerneljasonxing@gmail.com \
    --cc=kernelxing@tencent.com \
    --cc=kuba@kernel.org \
    --cc=maciej.fijalkowski@intel.com \
    --cc=magnus.karlsson@intel.com \
    --cc=netdev@vger.kernel.org \
    --cc=sdf@fomichev.me \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.

This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).