public inbox for netdev@vger.kernel.org
* [PATCH v2 net-next] net: add sysctl to toggle napi_consume_skb() alien skb defer
@ 2026-03-27 15:33 Jason Xing
  2026-03-31 11:39 ` Paolo Abeni
  0 siblings, 1 reply; 3+ messages in thread
From: Jason Xing @ 2026-03-27 15:33 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, horms, kuniyu, stfomichev
  Cc: netdev, Jason Xing

Commit e20dfbad8aab ("net: fix napi_consume_skb() with alien skbs")
defers freeing of alien SKBs (alloc_cpu != current cpu) via
skb_attempt_defer_free() on the TX completion path, reducing cross-NUMA
SLUB spinlock contention and improving multi-queue UDP workloads.

However, this unconditionally impacts the napi_skb_cache fast recycle
path for single-flow / few-flow workloads (e.g. AF_XDP benchmarks[1]):
when the TX completion NAPI CPU differs from the SKB allocation CPU,
SKBs are deferred instead of being returned to the local napi_skb_cache,
forcing RX allocations back to the slow slab path.

Setting the existing net.core.skb_defer_max to 0 could disable this, but
it is a global switch that also disables the defer mechanism in the
TCP/UDP/MPTCP recvmsg paths, losing the positive SLUB locality benefits
there. AF_XDP can co-exist with other protocols, which is why reusing
skb_defer_disable_key was ruled out. Besides, with the defer path
disabled, TCP/UDP/MPTCP in process context would free SKBs directly,
toggling bottom halves (in kfree_skb_napi_cache()), which could affect
other workloads. So that path is left untouched.

Add a dedicated sysctl net.core.napi_consume_skb_defer backed by a
static key to selectively control the alien skb defer feature. Let
users decide which is the best fit for their own requirements.

This patch also avoids touching the local_bh_disable()/local_bh_enable()
pair (in kfree_skb_napi_cache()) to minimize the overhead.

[1]: taskset -c 0 ./xdpsock -i enp2s0f1 -q 1 -t -S -s 64
1) sysctl -w net.core.napi_consume_skb_defer=1 (the default)
 sock0@enp2s0f1:1 txonly xdp-skb
                   pps            pkts           1.00
rx                 0              0
tx                 1,851,950      20,397,952

2) sysctl -w net.core.napi_consume_skb_defer=0
 sock0@enp2s0f1:1 txonly xdp-skb
                   pps            pkts           1.00
rx                 0              0
tx                 1,985,067      25,530,432

For the AF_XDP scenario, this turns out to be around a 6.6% improvement.
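Assuming the patch is applied, the tuning workflow for the benchmark above would look as follows (the sysctl path is taken from the patch; the values are per-workload choices, not recommendations):

```shell
# Default: alien skbs from TX completion are deferred to their alloc CPU.
sysctl net.core.napi_consume_skb_defer

# For pinned single-flow AF_XDP TX, skip the defer detour so completions
# recycle through the local napi_skb_cache.
sysctl -w net.core.napi_consume_skb_defer=0

# Leave skb_defer_max untouched so the TCP/UDP/MPTCP recvmsg-path
# deferral keeps its SLUB locality benefit.
sysctl net.core.skb_defer_max
```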

Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
---
V2
Link: https://lore.kernel.org/all/20260326144249.97213-1-kerneljasonxing@gmail.com/
1. reuse proc_do_static_key() (Eric)
2. add doc (Stan)
---
 Documentation/admin-guide/sysctl/net.rst | 13 +++++++++++++
 net/core/net-sysfs.h                     |  1 +
 net/core/skbuff.c                        |  5 ++++-
 net/core/sysctl_net_core.c               |  7 +++++++
 4 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
index 0724a793798f..42e06f93306f 100644
--- a/Documentation/admin-guide/sysctl/net.rst
+++ b/Documentation/admin-guide/sysctl/net.rst
@@ -368,6 +368,19 @@ by the cpu which allocated them.
 
 Default: 128
 
+napi_consume_skb_defer
+----------------------
+When set to 1 (default), napi_consume_skb() defers freeing SKBs whose
+allocation CPU differs from the current CPU via skb_attempt_defer_free().
+This reduces cross-NUMA SLUB spinlock contention for multi-queue workloads.
+
+Setting this to 0 disables the defer path in napi_consume_skb() only,
+allowing SKBs to be returned to the local napi_skb_cache immediately.
+This can benefit single-flow or few-flow workloads (e.g. AF_XDP TX)
+where the defer detour hurts the fast recycle path.
+
+Default: 1
+
 optmem_max
 ----------
 
diff --git a/net/core/net-sysfs.h b/net/core/net-sysfs.h
index 38e2e3ffd0bd..a026f757867e 100644
--- a/net/core/net-sysfs.h
+++ b/net/core/net-sysfs.h
@@ -14,4 +14,5 @@ int netdev_change_owner(struct net_device *, const struct net *net_old,
 extern struct mutex rps_default_mask_mutex;
 
 DECLARE_STATIC_KEY_FALSE(skb_defer_disable_key);
+DECLARE_STATIC_KEY_TRUE(napi_consume_skb_defer_key);
 #endif
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3d6978dd0aa8..3db90a9aa61d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -94,6 +94,7 @@
 
 #include "dev.h"
 #include "devmem.h"
+#include "net-sysfs.h"
 #include "netmem_priv.h"
 #include "sock_destructor.h"
 
@@ -1519,7 +1520,8 @@ void napi_consume_skb(struct sk_buff *skb, int budget)
 
 	DEBUG_NET_WARN_ON_ONCE(!in_softirq());
 
-	if (skb->alloc_cpu != smp_processor_id() && !skb_shared(skb)) {
+	if (static_branch_likely(&napi_consume_skb_defer_key) &&
+	    skb->alloc_cpu != smp_processor_id() && !skb_shared(skb)) {
 		skb_release_head_state(skb);
 		return skb_attempt_defer_free(skb);
 	}
@@ -7257,6 +7259,7 @@ static void kfree_skb_napi_cache(struct sk_buff *skb)
 }
 
 DEFINE_STATIC_KEY_FALSE(skb_defer_disable_key);
+DEFINE_STATIC_KEY_TRUE(napi_consume_skb_defer_key);
 
 /**
  * skb_attempt_defer_free - queue skb for remote freeing
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index b508618bfc12..e85ce10afa1f 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -676,6 +676,13 @@ static struct ctl_table net_core_table[] = {
 		.proc_handler	= proc_do_skb_defer_max,
 		.extra1		= SYSCTL_ZERO,
 	},
+	{
+		.procname	= "napi_consume_skb_defer",
+		.data		= &napi_consume_skb_defer_key.key,
+		.maxlen		= sizeof(napi_consume_skb_defer_key),
+		.mode		= 0644,
+		.proc_handler	= proc_do_static_key,
+	},
 };
 
 static struct ctl_table netns_core_table[] = {
-- 
2.41.3


^ permalink raw reply related	[flat|nested] 3+ messages in thread

* Re: [PATCH v2 net-next] net: add sysctl to toggle napi_consume_skb() alien skb defer
  2026-03-27 15:33 [PATCH v2 net-next] net: add sysctl to toggle napi_consume_skb() alien skb defer Jason Xing
@ 2026-03-31 11:39 ` Paolo Abeni
  2026-04-01  3:39   ` Jason Xing
  0 siblings, 1 reply; 3+ messages in thread
From: Paolo Abeni @ 2026-03-31 11:39 UTC (permalink / raw)
  To: Jason Xing, davem, edumazet, kuba, horms, kuniyu, stfomichev; +Cc: netdev

On 3/27/26 4:33 PM, Jason Xing wrote:
> Commit e20dfbad8aab ("net: fix napi_consume_skb() with alien skbs")
> defers freeing of alien SKBs (alloc_cpu != current cpu) via
> skb_attempt_defer_free() on the TX completion path, reducing cross-NUMA
> SLUB spinlock contention and improving multi-queue UDP workloads.
> 
> However, this unconditionally impacts the napi_skb_cache fast recycle
> path for single-flow / few-flow workloads (e.g. AF_XDP benchmarks[1]):
> when the TX completion NAPI CPU differs from the SKB allocation CPU,
> SKBs are deferred instead of being returned to the local napi_skb_cache,
> forcing RX allocations back to the slow slab path.
> 
> Setting the existing net.core.skb_defer_max to 0 could disable this, but
> it is a global switch that also disables the defer mechanism in the
> TCP/UDP/MPTCP recvmsg paths, losing the positive SLUB locality benefits
> there. AF_XDP can co-exist with other protocols, which is why reusing
> skb_defer_disable_key was ruled out. Besides, with the defer path
> disabled, TCP/UDP/MPTCP in process context would free SKBs directly,
> toggling bottom halves (in kfree_skb_napi_cache()), which could affect
> other workloads. So that path is left untouched.
> 
> Add a dedicated sysctl net.core.napi_consume_skb_defer backed by a
> static key to selectively control the alien skb defer feature. Let
> users decide which is the best fit for their own requirements.
> 
> This patch also avoids touching the local_bh_disable()/local_bh_enable()
> pair (in kfree_skb_napi_cache()) to minimize the overhead.
> 
> [1]: taskset -c 0 ./xdpsock -i enp2s0f1 -q 1 -t -S -s 64
> 1) sysctl -w net.core.napi_consume_skb_defer=1 (the default)
>  sock0@enp2s0f1:1 txonly xdp-skb
>                    pps            pkts           1.00
> rx                 0              0
> tx                 1,851,950      20,397,952
> 
> 2) sysctl -w net.core.napi_consume_skb_defer=0
>  sock0@enp2s0f1:1 txonly xdp-skb
>                    pps            pkts           1.00
> rx                 0              0
> tx                 1,985,067      25,530,432
> 
> For the AF_XDP scenario, this turns out to be around a 6.6% improvement.

I'm not a big fan of multiple tunables around the same feature, but
possibly here the use-case extends beyond AF_XDP, right? Do you observe
some measurable positive delta even with UDP? Possibly even with TCP,
when the bottleneck is the sender?

More data would be helpful.

> Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> ---
> V2
> Link: https://lore.kernel.org/all/20260326144249.97213-1-kerneljasonxing@gmail.com/
> 1. reuse proc_do_static_key() (Eric)
> 2. add doc (Stan)
> ---
>  Documentation/admin-guide/sysctl/net.rst | 13 +++++++++++++
>  net/core/net-sysfs.h                     |  1 +
>  net/core/skbuff.c                        |  5 ++++-
>  net/core/sysctl_net_core.c               |  7 +++++++
>  4 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
> index 0724a793798f..42e06f93306f 100644
> --- a/Documentation/admin-guide/sysctl/net.rst
> +++ b/Documentation/admin-guide/sysctl/net.rst
> @@ -368,6 +368,19 @@ by the cpu which allocated them.
>  
>  Default: 128
>  
> +napi_consume_skb_defer
> +----------------------
> +When set to 1 (default), napi_consume_skb() defers freeing SKBs whose
> +allocation CPU differs from the current CPU via skb_attempt_defer_free().
> +This reduces cross-NUMA SLUB spinlock contention for multi-queue workloads.
> +
> +Setting this to 0 disables the defer path in napi_consume_skb() only,
> +allowing SKBs to be returned to the local napi_skb_cache immediately.
> +This can benefit single-flow or few-flow workloads (e.g. AF_XDP TX)
> +where the defer detour hurts the fast recycle path.

I think it should be clarified that skb_defer_max takes priority, i.e.
no defer with skb_defer_max == 0. Also, it would be great if you could
additionally extend the skb_defer_max documentation to note that 0
disables the feature.

Thanks,

Paolo



* Re: [PATCH v2 net-next] net: add sysctl to toggle napi_consume_skb() alien skb defer
  2026-03-31 11:39 ` Paolo Abeni
@ 2026-04-01  3:39   ` Jason Xing
  0 siblings, 0 replies; 3+ messages in thread
From: Jason Xing @ 2026-04-01  3:39 UTC (permalink / raw)
  To: Paolo Abeni; +Cc: davem, edumazet, kuba, horms, kuniyu, stfomichev, netdev

On Tue, Mar 31, 2026 at 7:39 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On 3/27/26 4:33 PM, Jason Xing wrote:
> > Commit e20dfbad8aab ("net: fix napi_consume_skb() with alien skbs")
> > defers freeing of alien SKBs (alloc_cpu != current cpu) via
> > skb_attempt_defer_free() on the TX completion path, reducing cross-NUMA
> > SLUB spinlock contention and improving multi-queue UDP workloads.
> >
> > However, this unconditionally impacts the napi_skb_cache fast recycle
> > path for single-flow / few-flow workloads (e.g. AF_XDP benchmarks[1]):
> > when the TX completion NAPI CPU differs from the SKB allocation CPU,
> > SKBs are deferred instead of being returned to the local napi_skb_cache,
> > forcing RX allocations back to the slow slab path.
> >
> > Setting the existing net.core.skb_defer_max to 0 could disable this, but
> > it is a global switch that also disables the defer mechanism in the
> > TCP/UDP/MPTCP recvmsg paths, losing the positive SLUB locality benefits
> > there. AF_XDP can co-exist with other protocols, which is why reusing
> > skb_defer_disable_key was ruled out. Besides, with the defer path
> > disabled, TCP/UDP/MPTCP in process context would free SKBs directly,
> > toggling bottom halves (in kfree_skb_napi_cache()), which could affect
> > other workloads. So that path is left untouched.
> >
> > Add a dedicated sysctl net.core.napi_consume_skb_defer backed by a
> > static key to selectively control the alien skb defer feature. Let
> > users decide which is the best fit for their own requirements.
> >
> > This patch also avoids touching the local_bh_disable()/local_bh_enable()
> > pair (in kfree_skb_napi_cache()) to minimize the overhead.
> >
> > [1]: taskset -c 0 ./xdpsock -i enp2s0f1 -q 1 -t -S -s 64
> > 1) sysctl -w net.core.napi_consume_skb_defer=1 (the default)
> >  sock0@enp2s0f1:1 txonly xdp-skb
> >                    pps            pkts           1.00
> > rx                 0              0
> > tx                 1,851,950      20,397,952
> >
> > 2) sysctl -w net.core.napi_consume_skb_defer=0
> >  sock0@enp2s0f1:1 txonly xdp-skb
> >                    pps            pkts           1.00
> > rx                 0              0
> > tx                 1,985,067      25,530,432
> >
> > For the AF_XDP scenario, this turns out to be around a 6.6% improvement.
>
> I'm not a big fan of multiple tunables around the same feature, but

Another interesting aspect is that AI/auto-tuning systems can leverage
more sysctls to help users find the best combination for their own
scenario. I realized that a single sysctl can decide the opposite fate
of one particular case, which means it is worth a try.

> possibly here the use-case extends beyond AF_XDP, right? Do you observe
> some measurable positive delta even with UDP? Possibly even with TCP,
> when the bottleneck is the sender?
>
> More data would be helpful.

Unfortunately, I didn't see any improvement with a simple UDP flood
test that I wrote. Maybe I need to do more experiments around it with
more adjustments.

>
> > Signed-off-by: Jason Xing <kerneljasonxing@gmail.com>
> > ---
> > V2
> > Link: https://lore.kernel.org/all/20260326144249.97213-1-kerneljasonxing@gmail.com/
> > 1. reuse proc_do_static_key() (Eric)
> > 2. add doc (Stan)
> > ---
> >  Documentation/admin-guide/sysctl/net.rst | 13 +++++++++++++
> >  net/core/net-sysfs.h                     |  1 +
> >  net/core/skbuff.c                        |  5 ++++-
> >  net/core/sysctl_net_core.c               |  7 +++++++
> >  4 files changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
> > index 0724a793798f..42e06f93306f 100644
> > --- a/Documentation/admin-guide/sysctl/net.rst
> > +++ b/Documentation/admin-guide/sysctl/net.rst
> > @@ -368,6 +368,19 @@ by the cpu which allocated them.
> >
> >  Default: 128
> >
> > +napi_consume_skb_defer
> > +----------------------
> > +When set to 1 (default), napi_consume_skb() defers freeing SKBs whose
> > +allocation CPU differs from the current CPU via skb_attempt_defer_free().
> > +This reduces cross-NUMA SLUB spinlock contention for multi-queue workloads.
> > +
> > +Setting this to 0 disables the defer path in napi_consume_skb() only,
> > +allowing SKBs to be returned to the local napi_skb_cache immediately.
> > +This can benefit single-flow or few-flow workloads (e.g. AF_XDP TX)
> > +where the defer detour hurts the fast recycle path.
>
> I think it should be clarified that skb_defer_max takes priority, i.e.
> no defer with skb_defer_max == 0. Also, it would be great if you could
> additionally extend the skb_defer_max documentation to note that 0
> disables the feature.

skb_defer_max and the newly added sysctl control different paths. In
napi_consume_skb(), napi_consume_skb_defer is checked first, so it takes
higher priority.

These days I've been wondering: if you don't think it's worth a
standalone knob, how about a simple patch to save more cycles when
skb_defer_max is zero, like this:
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3d6978dd0aa8..4045d7c484a1 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -94,6 +94,7 @@

 #include "dev.h"
 #include "devmem.h"
+#include "net-sysfs.h"
 #include "netmem_priv.h"
 #include "sock_destructor.h"

@@ -1519,7 +1520,8 @@ void napi_consume_skb(struct sk_buff *skb, int budget)

        DEBUG_NET_WARN_ON_ONCE(!in_softirq());

-       if (skb->alloc_cpu != smp_processor_id() && !skb_shared(skb)) {
+       if (!static_branch_unlikely(&skb_defer_disable_key) &&
+           skb->alloc_cpu != smp_processor_id() && !skb_shared(skb)) {
                skb_release_head_state(skb);
                return skb_attempt_defer_free(skb);
        }
-- 
2.41.3

Thanks,
Jason


end of thread, other threads:[~2026-04-01  3:40 UTC | newest]
