* [PATCH net-next v2 0/2] xsk: minor optimizations around locks
@ 2025-10-30 0:06 Jason Xing
2025-10-30 0:06 ` [PATCH net-next v2 1/2] xsk: do not enable/disable irq when grabbing/releasing xsk_tx_list_lock Jason Xing
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Jason Xing @ 2025-10-30 0:06 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
john.fastabend, horms, andrew+netdev
Cc: bpf, netdev, Jason Xing
From: Jason Xing <kernelxing@tencent.com>
Two optimizations around xsk_tx_list_lock and cq_lock yield a
performance increase by avoiding frequently disabling and enabling
interrupts.
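
To make the cost being avoided concrete, here is the generic locking
pattern in question (plain kernel idioms, not code taken from these
patches; 'lock' and 'flags' are stand-in names):

/* 'lock' is a stand-in spinlock_t; 'flags' is an unsigned long. */

/* Pattern 1: the lock can also be taken from hard-irq context, so local
 * interrupts must be disabled to prevent a deadlock on this CPU.
 */
spin_lock_irqsave(&lock, flags);
/* ... critical section ... */
spin_unlock_irqrestore(&lock, flags);

/* Pattern 2: every user of the lock runs in process context, so the
 * plain variant is enough and skips the irq save/restore cost.
 */
spin_lock(&lock);
/* ... critical section ... */
spin_unlock(&lock);
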
---
V2
Link: https://lore.kernel.org/all/20251025065310.5676-1-kerneljasonxing@gmail.com/
1. abandon the lockless idea around cached_prod because, as Jakub
   pointed out, it can leave the pool in a messy state.
2. add a new patch to handle xsk_tx_list_lock.
Jason Xing (2):
xsk: do not enable/disable irq when grabbing/releasing
xsk_tx_list_lock
xsk: use a smaller new lock for shared pool case
include/net/xsk_buff_pool.h | 13 +++++++++----
net/xdp/xsk.c | 15 ++++++---------
net/xdp/xsk_buff_pool.c | 15 ++++++---------
3 files changed, 21 insertions(+), 22 deletions(-)
--
2.41.3
* [PATCH net-next v2 1/2] xsk: do not enable/disable irq when grabbing/releasing xsk_tx_list_lock
  2025-10-30  0:06 [PATCH net-next v2 0/2] xsk: minor optimizations around locks Jason Xing
@ 2025-10-30  0:06 ` Jason Xing
  2025-11-03 14:42   ` Maciej Fijalkowski
  2025-10-30  0:06 ` [PATCH net-next v2 2/2] xsk: use a smaller new lock for shared pool case Jason Xing
  2025-11-04 15:20 ` [PATCH net-next v2 0/2] xsk: minor optimizations around locks patchwork-bot+netdevbpf
  2 siblings, 1 reply; 7+ messages in thread
From: Jason Xing @ 2025-10-30  0:06 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
	maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
	john.fastabend, horms, andrew+netdev
  Cc: bpf, netdev, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

Commit ac98d8aab61b ("xsk: wire upp Tx zero-copy functions"), which
originally introduced this lock, put the deletion in sk_destruct, which
can obviously run in irq context, so the xxx_irqsave()/xxx_irqrestore()
pair was used. But later, commit 541d7fdd7694 ("xsk: proper AF_XDP
socket teardown ordering") moved the deletion into xsk_release(), which
only runs in process context, so the pair has not been necessary since
that commit.

Now there are two places that use xsk_tx_list_lock, and both run in
process context, so stop manipulating the irq state there.

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
 net/xdp/xsk_buff_pool.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index aa9788f20d0d..309075050b2a 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -12,26 +12,22 @@
 
 void xp_add_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs)
 {
-	unsigned long flags;
-
 	if (!xs->tx)
 		return;
 
-	spin_lock_irqsave(&pool->xsk_tx_list_lock, flags);
+	spin_lock(&pool->xsk_tx_list_lock);
 	list_add_rcu(&xs->tx_list, &pool->xsk_tx_list);
-	spin_unlock_irqrestore(&pool->xsk_tx_list_lock, flags);
+	spin_unlock(&pool->xsk_tx_list_lock);
 }
 
 void xp_del_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs)
 {
-	unsigned long flags;
-
 	if (!xs->tx)
 		return;
 
-	spin_lock_irqsave(&pool->xsk_tx_list_lock, flags);
+	spin_lock(&pool->xsk_tx_list_lock);
 	list_del_rcu(&xs->tx_list);
-	spin_unlock_irqrestore(&pool->xsk_tx_list_lock, flags);
+	spin_unlock(&pool->xsk_tx_list_lock);
 }
 
 void xp_destroy(struct xsk_buff_pool *pool)
-- 
2.41.3
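
For reference, a minimal sketch of how the process-context assumption in
the patch above could be made explicit with a debug guard; the
WARN_ON_ONCE() check is illustrative only and is not something this
series adds:

void xp_add_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs)
{
	if (!xs->tx)
		return;

	/* Callers are the bind/release paths, i.e. process context only */
	WARN_ON_ONCE(in_interrupt());

	spin_lock(&pool->xsk_tx_list_lock);
	list_add_rcu(&xs->tx_list, &pool->xsk_tx_list);
	spin_unlock(&pool->xsk_tx_list_lock);
}
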
* Re: [PATCH net-next v2 1/2] xsk: do not enable/disable irq when grabbing/releasing xsk_tx_list_lock
  2025-10-30  0:06 ` [PATCH net-next v2 1/2] xsk: do not enable/disable irq when grabbing/releasing xsk_tx_list_lock Jason Xing
@ 2025-11-03 14:42   ` Maciej Fijalkowski
  0 siblings, 0 replies; 7+ messages in thread
From: Maciej Fijalkowski @ 2025-11-03 14:42 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
	jonathan.lemon, sdf, ast, daniel, hawk, john.fastabend, horms,
	andrew+netdev, bpf, netdev, Jason Xing

On Thu, Oct 30, 2025 at 08:06:45AM +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@tencent.com>
>
> Commit ac98d8aab61b ("xsk: wire upp Tx zero-copy functions"), which
> originally introduced this lock, put the deletion in sk_destruct, which
> can obviously run in irq context, so the xxx_irqsave()/xxx_irqrestore()
> pair was used. But later, commit 541d7fdd7694 ("xsk: proper AF_XDP
> socket teardown ordering") moved the deletion into xsk_release(), which
> only runs in process context, so the pair has not been necessary since
> that commit.
>
> Now there are two places that use xsk_tx_list_lock, and both run in
> process context, so stop manipulating the irq state there.
>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>

Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

> ---
>  net/xdp/xsk_buff_pool.c | 12 ++++--------
>  1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index aa9788f20d0d..309075050b2a 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -12,26 +12,22 @@
>
>  void xp_add_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs)
>  {
> -	unsigned long flags;
> -
>  	if (!xs->tx)
>  		return;
>
> -	spin_lock_irqsave(&pool->xsk_tx_list_lock, flags);
> +	spin_lock(&pool->xsk_tx_list_lock);
>  	list_add_rcu(&xs->tx_list, &pool->xsk_tx_list);
> -	spin_unlock_irqrestore(&pool->xsk_tx_list_lock, flags);
> +	spin_unlock(&pool->xsk_tx_list_lock);
>  }
>
>  void xp_del_xsk(struct xsk_buff_pool *pool, struct xdp_sock *xs)
>  {
> -	unsigned long flags;
> -
>  	if (!xs->tx)
>  		return;
>
> -	spin_lock_irqsave(&pool->xsk_tx_list_lock, flags);
> +	spin_lock(&pool->xsk_tx_list_lock);
>  	list_del_rcu(&xs->tx_list);
> -	spin_unlock_irqrestore(&pool->xsk_tx_list_lock, flags);
> +	spin_unlock(&pool->xsk_tx_list_lock);
>  }
>
>  void xp_destroy(struct xsk_buff_pool *pool)
> --
> 2.41.3
>
* [PATCH net-next v2 2/2] xsk: use a smaller new lock for shared pool case
  2025-10-30  0:06 [PATCH net-next v2 0/2] xsk: minor optimizations around locks Jason Xing
  2025-10-30  0:06 ` [PATCH net-next v2 1/2] xsk: do not enable/disable irq when grabbing/releasing xsk_tx_list_lock Jason Xing
@ 2025-10-30  0:06 ` Jason Xing
  2025-11-03 14:58   ` Maciej Fijalkowski
  2025-11-04 15:20 ` [PATCH net-next v2 0/2] xsk: minor optimizations around locks patchwork-bot+netdevbpf
  2 siblings, 1 reply; 7+ messages in thread
From: Jason Xing @ 2025-10-30  0:06 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
	maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
	john.fastabend, horms, andrew+netdev
  Cc: bpf, netdev, Jason Xing

From: Jason Xing <kernelxing@tencent.com>

- Split cq_lock into two smaller locks: cq_prod_lock and
  cq_cached_prod_lock
- Avoid disabling/enabling interrupts in the hot xmit path

In both xsk_cq_cancel_locked() and xsk_cq_reserve_locked(), the race
is only between multiple xsks sharing the same pool. They all run in
process context rather than interrupt context, so the smaller lock
named cq_cached_prod_lock can be used without handling interrupts.

While cq_cached_prod_lock ensures exclusive modification of
@cached_prod, cq_prod_lock in xsk_cq_submit_addr_locked() only cares
about @producer and the corresponding @desc. Neither of them needs to
be consistent with @cached_prod protected by cq_cached_prod_lock.
That is why the previous big lock can be split into two smaller ones.
Please note that the SPSC rule is all about the global state of
producer and consumer, which affects both layers, not the local or
cached ones.

Frequently disabling and enabling interrupts is very time consuming in
some cases, especially at per-descriptor granularity. This can now be
avoided, even when the pool is shared by multiple xsks.

With this patch, the performance number[1] goes from 1,872,565 pps to
1,961,009 pps, a rise of around 5%.

[1]: taskset -c 1 ./xdpsock -i enp2s0f1 -q 0 -t -S -s 64

Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
 include/net/xsk_buff_pool.h | 13 +++++++++----
 net/xdp/xsk.c               | 15 ++++++---------
 net/xdp/xsk_buff_pool.c     |  3 ++-
 3 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index cac56e6b0869..92a2358c6ce3 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -85,11 +85,16 @@ struct xsk_buff_pool {
 	bool unaligned;
 	bool tx_sw_csum;
 	void *addrs;
-	/* Mutual exclusion of the completion ring in the SKB mode. Two cases to protect:
-	 * NAPI TX thread and sendmsg error paths in the SKB destructor callback and when
-	 * sockets share a single cq when the same netdev and queue id is shared.
+	/* Mutual exclusion of the completion ring in the SKB mode.
+	 * Protect: NAPI TX thread and sendmsg error paths in the SKB
+	 * destructor callback.
 	 */
-	spinlock_t cq_lock;
+	spinlock_t cq_prod_lock;
+	/* Mutual exclusion of the completion ring in the SKB mode.
+	 * Protect: when sockets share a single cq when the same netdev
+	 * and queue id is shared.
+	 */
+	spinlock_t cq_cached_prod_lock;
 	struct xdp_buff_xsk *free_heads[];
 };
 
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 7b0c68a70888..2f26c918d448 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -548,12 +548,11 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
 
 static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
 {
-	unsigned long flags;
 	int ret;
 
-	spin_lock_irqsave(&pool->cq_lock, flags);
+	spin_lock(&pool->cq_cached_prod_lock);
 	ret = xskq_prod_reserve(pool->cq);
-	spin_unlock_irqrestore(&pool->cq_lock, flags);
+	spin_unlock(&pool->cq_cached_prod_lock);
 
 	return ret;
 }
@@ -566,7 +565,7 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
 	unsigned long flags;
 	u32 idx;
 
-	spin_lock_irqsave(&pool->cq_lock, flags);
+	spin_lock_irqsave(&pool->cq_prod_lock, flags);
 	idx = xskq_get_prod(pool->cq);
 
 	xskq_prod_write_addr(pool->cq, idx,
@@ -583,16 +582,14 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
 		}
 	}
 	xskq_prod_submit_n(pool->cq, descs_processed);
-	spin_unlock_irqrestore(&pool->cq_lock, flags);
+	spin_unlock_irqrestore(&pool->cq_prod_lock, flags);
 }
 
 static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&pool->cq_lock, flags);
+	spin_lock(&pool->cq_cached_prod_lock);
 	xskq_prod_cancel_n(pool->cq, n);
-	spin_unlock_irqrestore(&pool->cq_lock, flags);
+	spin_unlock(&pool->cq_cached_prod_lock);
 }
 
 static void xsk_inc_num_desc(struct sk_buff *skb)
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 309075050b2a..00a4eddaa0cd 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -90,7 +90,8 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
 	INIT_LIST_HEAD(&pool->xskb_list);
 	INIT_LIST_HEAD(&pool->xsk_tx_list);
 	spin_lock_init(&pool->xsk_tx_list_lock);
-	spin_lock_init(&pool->cq_lock);
+	spin_lock_init(&pool->cq_prod_lock);
+	spin_lock_init(&pool->cq_cached_prod_lock);
 	refcount_set(&pool->users, 1);
 
 	pool->fq = xs->fq_tmp;
-- 
2.41.3
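
To summarize which state each of the two new locks in the patch above
serializes, here is an illustrative, trimmed-down sketch; the struct
itself is hypothetical (the real fields live in struct xsk_buff_pool and
the completion queue), only the lock/field pairing is the point:

#include <linux/spinlock.h>
#include <linux/types.h>

struct cq_locking_sketch {
	/* Reservation side: @cached_prod is bumped by xskq_prod_reserve()
	 * and rolled back by xskq_prod_cancel_n(), only from the sendmsg()
	 * path, i.e. process context, so a plain spinlock is enough.
	 */
	spinlock_t cq_cached_prod_lock;
	u32 cached_prod;

	/* Publication side: descriptors are written and the global
	 * @producer advanced from the skb destructor, which may run in
	 * (soft)irq context, so this lock keeps the irqsave pair.
	 */
	spinlock_t cq_prod_lock;
	u32 producer;
};
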
* Re: [PATCH net-next v2 2/2] xsk: use a smaller new lock for shared pool case
  2025-10-30  0:06 ` [PATCH net-next v2 2/2] xsk: use a smaller new lock for shared pool case Jason Xing
@ 2025-11-03 14:58   ` Maciej Fijalkowski
  2025-11-03 23:26     ` Jason Xing
  0 siblings, 1 reply; 7+ messages in thread
From: Maciej Fijalkowski @ 2025-11-03 14:58 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
	jonathan.lemon, sdf, ast, daniel, hawk, john.fastabend, horms,
	andrew+netdev, bpf, netdev, Jason Xing

On Thu, Oct 30, 2025 at 08:06:46AM +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@tencent.com>
>
> - Split cq_lock into two smaller locks: cq_prod_lock and
>   cq_cached_prod_lock
> - Avoid disabling/enabling interrupts in the hot xmit path
>
> In both xsk_cq_cancel_locked() and xsk_cq_reserve_locked(), the race
> is only between multiple xsks sharing the same pool. They all run in
> process context rather than interrupt context, so the smaller lock
> named cq_cached_prod_lock can be used without handling interrupts.
>
> While cq_cached_prod_lock ensures exclusive modification of
> @cached_prod, cq_prod_lock in xsk_cq_submit_addr_locked() only cares
> about @producer and the corresponding @desc. Neither of them needs to
> be consistent with @cached_prod protected by cq_cached_prod_lock.
> That is why the previous big lock can be split into two smaller ones.
> Please note that the SPSC rule is all about the global state of
> producer and consumer, which affects both layers, not the local or
> cached ones.
>
> Frequently disabling and enabling interrupts is very time consuming in
> some cases, especially at per-descriptor granularity. This can now be
> avoided, even when the pool is shared by multiple xsks.
>
> With this patch, the performance number[1] goes from 1,872,565 pps to
> 1,961,009 pps, a rise of around 5%.
>
> [1]: taskset -c 1 ./xdpsock -i enp2s0f1 -q 0 -t -S -s 64
>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
> ---
>  include/net/xsk_buff_pool.h | 13 +++++++++----
>  net/xdp/xsk.c               | 15 ++++++---------
>  net/xdp/xsk_buff_pool.c     |  3 ++-
>  3 files changed, 17 insertions(+), 14 deletions(-)
>
> diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
> index cac56e6b0869..92a2358c6ce3 100644
> --- a/include/net/xsk_buff_pool.h
> +++ b/include/net/xsk_buff_pool.h
> @@ -85,11 +85,16 @@ struct xsk_buff_pool {
>  	bool unaligned;
>  	bool tx_sw_csum;
>  	void *addrs;
> -	/* Mutual exclusion of the completion ring in the SKB mode. Two cases to protect:
> -	 * NAPI TX thread and sendmsg error paths in the SKB destructor callback and when
> -	 * sockets share a single cq when the same netdev and queue id is shared.
> +	/* Mutual exclusion of the completion ring in the SKB mode.
> +	 * Protect: NAPI TX thread and sendmsg error paths in the SKB
> +	 * destructor callback.
>  	 */
> -	spinlock_t cq_lock;
> +	spinlock_t cq_prod_lock;
> +	/* Mutual exclusion of the completion ring in the SKB mode.
> +	 * Protect: when sockets share a single cq when the same netdev
> +	 * and queue id is shared.
> +	 */
> +	spinlock_t cq_cached_prod_lock;

Nice that existing hole is utilized here.

Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

>  	struct xdp_buff_xsk *free_heads[];
>  };
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 7b0c68a70888..2f26c918d448 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -548,12 +548,11 @@ static int xsk_wakeup(struct xdp_sock *xs, u8 flags)
>
>  static int xsk_cq_reserve_locked(struct xsk_buff_pool *pool)
>  {
> -	unsigned long flags;
>  	int ret;
>
> -	spin_lock_irqsave(&pool->cq_lock, flags);
> +	spin_lock(&pool->cq_cached_prod_lock);
>  	ret = xskq_prod_reserve(pool->cq);
> -	spin_unlock_irqrestore(&pool->cq_lock, flags);
> +	spin_unlock(&pool->cq_cached_prod_lock);
>
>  	return ret;
>  }
> @@ -566,7 +565,7 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
>  	unsigned long flags;
>  	u32 idx;
>
> -	spin_lock_irqsave(&pool->cq_lock, flags);
> +	spin_lock_irqsave(&pool->cq_prod_lock, flags);
>  	idx = xskq_get_prod(pool->cq);
>
>  	xskq_prod_write_addr(pool->cq, idx,
> @@ -583,16 +582,14 @@ static void xsk_cq_submit_addr_locked(struct xsk_buff_pool *pool,
>  		}
>  	}
>  	xskq_prod_submit_n(pool->cq, descs_processed);
> -	spin_unlock_irqrestore(&pool->cq_lock, flags);
> +	spin_unlock_irqrestore(&pool->cq_prod_lock, flags);
>  }
>
>  static void xsk_cq_cancel_locked(struct xsk_buff_pool *pool, u32 n)
>  {
> -	unsigned long flags;
> -
> -	spin_lock_irqsave(&pool->cq_lock, flags);
> +	spin_lock(&pool->cq_cached_prod_lock);
>  	xskq_prod_cancel_n(pool->cq, n);
> -	spin_unlock_irqrestore(&pool->cq_lock, flags);
> +	spin_unlock(&pool->cq_cached_prod_lock);
>  }
>
>  static void xsk_inc_num_desc(struct sk_buff *skb)
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index 309075050b2a..00a4eddaa0cd 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -90,7 +90,8 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
>  	INIT_LIST_HEAD(&pool->xskb_list);
>  	INIT_LIST_HEAD(&pool->xsk_tx_list);
>  	spin_lock_init(&pool->xsk_tx_list_lock);
> -	spin_lock_init(&pool->cq_lock);
> +	spin_lock_init(&pool->cq_prod_lock);
> +	spin_lock_init(&pool->cq_cached_prod_lock);
>  	refcount_set(&pool->users, 1);
>
>  	pool->fq = xs->fq_tmp;
> --
> 2.41.3
>
* Re: [PATCH net-next v2 2/2] xsk: use a smaller new lock for shared pool case
  2025-11-03 14:58   ` Maciej Fijalkowski
@ 2025-11-03 23:26     ` Jason Xing
  0 siblings, 0 replies; 7+ messages in thread
From: Jason Xing @ 2025-11-03 23:26 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
	jonathan.lemon, sdf, ast, daniel, hawk, john.fastabend, horms,
	andrew+netdev, bpf, netdev, Jason Xing

On Mon, Nov 3, 2025 at 10:58 PM Maciej Fijalkowski
<maciej.fijalkowski@intel.com> wrote:
>
> On Thu, Oct 30, 2025 at 08:06:46AM +0800, Jason Xing wrote:
> > From: Jason Xing <kernelxing@tencent.com>
> >
> > - Split cq_lock into two smaller locks: cq_prod_lock and
> >   cq_cached_prod_lock
> > - Avoid disabling/enabling interrupts in the hot xmit path
> >
> > In both xsk_cq_cancel_locked() and xsk_cq_reserve_locked(), the race
> > is only between multiple xsks sharing the same pool. They all run in
> > process context rather than interrupt context, so the smaller lock
> > named cq_cached_prod_lock can be used without handling interrupts.
> >
> > While cq_cached_prod_lock ensures exclusive modification of
> > @cached_prod, cq_prod_lock in xsk_cq_submit_addr_locked() only cares
> > about @producer and the corresponding @desc. Neither of them needs to
> > be consistent with @cached_prod protected by cq_cached_prod_lock.
> > That is why the previous big lock can be split into two smaller ones.
> > Please note that the SPSC rule is all about the global state of
> > producer and consumer, which affects both layers, not the local or
> > cached ones.
> >
> > Frequently disabling and enabling interrupts is very time consuming in
> > some cases, especially at per-descriptor granularity. This can now be
> > avoided, even when the pool is shared by multiple xsks.
> >
> > With this patch, the performance number[1] goes from 1,872,565 pps to
> > 1,961,009 pps, a rise of around 5%.
> >
> > [1]: taskset -c 1 ./xdpsock -i enp2s0f1 -q 0 -t -S -s 64
> >
> > Signed-off-by: Jason Xing <kernelxing@tencent.com>
> > ---
> >  include/net/xsk_buff_pool.h | 13 +++++++++----
> >  net/xdp/xsk.c               | 15 ++++++---------
> >  net/xdp/xsk_buff_pool.c     |  3 ++-
> >  3 files changed, 17 insertions(+), 14 deletions(-)
> >
> > diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
> > index cac56e6b0869..92a2358c6ce3 100644
> > --- a/include/net/xsk_buff_pool.h
> > +++ b/include/net/xsk_buff_pool.h
> > @@ -85,11 +85,16 @@ struct xsk_buff_pool {
> >  	bool unaligned;
> >  	bool tx_sw_csum;
> >  	void *addrs;
> > -	/* Mutual exclusion of the completion ring in the SKB mode. Two cases to protect:
> > -	 * NAPI TX thread and sendmsg error paths in the SKB destructor callback and when
> > -	 * sockets share a single cq when the same netdev and queue id is shared.
> > +	/* Mutual exclusion of the completion ring in the SKB mode.
> > +	 * Protect: NAPI TX thread and sendmsg error paths in the SKB
> > +	 * destructor callback.
> >  	 */
> > -	spinlock_t cq_lock;
> > +	spinlock_t cq_prod_lock;
> > +	/* Mutual exclusion of the completion ring in the SKB mode.
> > +	 * Protect: when sockets share a single cq when the same netdev
> > +	 * and queue id is shared.
> > +	 */
> > +	spinlock_t cq_cached_prod_lock;
>
> Nice that existing hole is utilized here.
>
> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

Thanks for the review :)

Thanks,
Jason
* Re: [PATCH net-next v2 0/2] xsk: minor optimizations around locks
  2025-10-30  0:06 [PATCH net-next v2 0/2] xsk: minor optimizations around locks Jason Xing
  2025-10-30  0:06 ` [PATCH net-next v2 1/2] xsk: do not enable/disable irq when grabbing/releasing xsk_tx_list_lock Jason Xing
  2025-10-30  0:06 ` [PATCH net-next v2 2/2] xsk: use a smaller new lock for shared pool case Jason Xing
@ 2025-11-04 15:20 ` patchwork-bot+netdevbpf
  2 siblings, 0 replies; 7+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-11-04 15:20 UTC (permalink / raw)
  To: Jason Xing
  Cc: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
	maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
	john.fastabend, horms, andrew+netdev, bpf, netdev, kernelxing

Hello:

This series was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Thu, 30 Oct 2025 08:06:44 +0800 you wrote:
> From: Jason Xing <kernelxing@tencent.com>
>
> Two optimizations around xsk_tx_list_lock and cq_lock yield a
> performance increase by avoiding frequently disabling and enabling
> interrupts.
>
> [...]

Here is the summary with links:
  - [net-next,v2,1/2] xsk: do not enable/disable irq when grabbing/releasing xsk_tx_list_lock
    https://git.kernel.org/netdev/net-next/c/462280043466
  - [net-next,v2,2/2] xsk: use a smaller new lock for shared pool case
    https://git.kernel.org/netdev/net-next/c/30ed05adca4a

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html