* [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb
@ 2025-10-31 10:33 Jason Xing
2025-11-05 6:29 ` Jason Xing
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: Jason Xing @ 2025-10-31 10:33 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
john.fastabend
Cc: bpf, netdev, Jason Xing, Alexander Lobakin
From: Jason Xing <kernelxing@tencent.com>
Since Eric proposed adding indirect call wrappers for UDP and saw a
large improvement[1], the same approach can be applied to the xsk
scenario.

This patch adds an indirect call wrapper for xsk, which stably improves
copy-mode performance by around 1%, as observed with IXGBE loaded at
10Gb/sec. As throughput grows, the positive effect is magnified. With
this patch applied on top of the batch xmit series[2], our internal
application shows a <5% improvement, though that number is a little
unstable.

Use the INDIRECT_CALLABLE wrappers so that xsk_destruct_skb stays
static, as it used to be, when the retpoline mitigation config is off.

Note that the freeing path can be very hot: with the xdpsock test it
can be invoked around 2,000,000 times per second.
[1]: https://lore.kernel.org/netdev/20251006193103.2684156-2-edumazet@google.com/
[2]: https://lore.kernel.org/all/20251021131209.41491-1-kerneljasonxing@gmail.com/
Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
v3
Link: https://lore.kernel.org/all/20251026145824.81675-1-kerneljasonxing@gmail.com/
1. revise the commit message (Paolo)
v2
Link: https://lore.kernel.org/all/20251023085843.25619-1-kerneljasonxing@gmail.com/
1. use INDIRECT helpers (Alexander)
---
include/net/xdp_sock.h | 7 +++++++
net/core/skbuff.c | 8 +++++---
net/xdp/xsk.c | 3 ++-
3 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index ce587a225661..23e8861e8b25 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -125,6 +125,7 @@ struct xsk_tx_metadata_ops {
int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp);
int __xsk_map_redirect(struct xdp_sock *xs, struct xdp_buff *xdp);
void __xsk_map_flush(struct list_head *flush_list);
+INDIRECT_CALLABLE_DECLARE(void xsk_destruct_skb(struct sk_buff *));
/**
* xsk_tx_metadata_to_compl - Save enough relevant metadata information
@@ -218,6 +219,12 @@ static inline void __xsk_map_flush(struct list_head *flush_list)
{
}
+#ifdef CONFIG_MITIGATION_RETPOLINE
+static inline void xsk_destruct_skb(struct sk_buff *skb)
+{
+}
+#endif
+
static inline void xsk_tx_metadata_to_compl(struct xsk_tx_metadata *meta,
struct xsk_tx_metadata_compl *compl)
{
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 5b4bc8b1c7d5..00ea38248bd6 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -81,6 +81,7 @@
#include <net/page_pool/helpers.h>
#include <net/psp/types.h>
#include <net/dropreason.h>
+#include <net/xdp_sock.h>
#include <linux/uaccess.h>
#include <trace/events/skb.h>
@@ -1140,12 +1141,13 @@ void skb_release_head_state(struct sk_buff *skb)
if (skb->destructor) {
DEBUG_NET_WARN_ON_ONCE(in_hardirq());
#ifdef CONFIG_INET
- INDIRECT_CALL_3(skb->destructor,
+ INDIRECT_CALL_4(skb->destructor,
tcp_wfree, __sock_wfree, sock_wfree,
+ xsk_destruct_skb,
skb);
#else
- INDIRECT_CALL_1(skb->destructor,
- sock_wfree,
+ INDIRECT_CALL_2(skb->destructor,
+ sock_wfree, xsk_destruct_skb,
skb);
#endif
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 7b0c68a70888..9451b090db16 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -605,7 +605,8 @@ static u32 xsk_get_num_desc(struct sk_buff *skb)
return XSKCB(skb)->num_descs;
}
-static void xsk_destruct_skb(struct sk_buff *skb)
+INDIRECT_CALLABLE_SCOPE
+void xsk_destruct_skb(struct sk_buff *skb)
{
struct xsk_tx_metadata_compl *compl = &skb_shinfo(skb)->xsk_meta;
--
2.41.3
* Re: [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb
2025-10-31 10:33 [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb Jason Xing
@ 2025-11-05 6:29 ` Jason Xing
2025-11-11 9:29 ` Paolo Abeni
2025-11-11 10:00 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 5+ messages in thread
From: Jason Xing @ 2025-11-05 6:29 UTC (permalink / raw)
To: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
john.fastabend
Cc: bpf, netdev, Jason Xing, Alexander Lobakin
On Fri, Oct 31, 2025 at 6:33 PM Jason Xing <kerneljasonxing@gmail.com> wrote:
>
> From: Jason Xing <kernelxing@tencent.com>
>
> Since Eric proposed adding indirect call wrappers for UDP and saw a
> large improvement[1], the same approach can be applied to the xsk
> scenario.
>
> This patch adds an indirect call wrapper for xsk, which stably improves
> copy-mode performance by around 1%, as observed with IXGBE loaded at
> 10Gb/sec. As throughput grows, the positive effect is magnified. With
> this patch applied on top of the batch xmit series[2], our internal
> application shows a <5% improvement, though that number is a little
> unstable.
>
> Use the INDIRECT_CALLABLE wrappers so that xsk_destruct_skb stays
> static, as it used to be, when the retpoline mitigation config is off.
>
> Note that the freeing path can be very hot: with the xdpsock test it
> can be invoked around 2,000,000 times per second.
>
> [1]: https://lore.kernel.org/netdev/20251006193103.2684156-2-edumazet@google.com/
> [2]: https://lore.kernel.org/all/20251021131209.41491-1-kerneljasonxing@gmail.com/
>
> Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
Sorry, I missed adding Alexander's tag.
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Thanks,
Jason
* Re: [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb
2025-10-31 10:33 [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb Jason Xing
2025-11-05 6:29 ` Jason Xing
@ 2025-11-11 9:29 ` Paolo Abeni
2025-11-11 12:15 ` Jason Xing
2025-11-11 10:00 ` patchwork-bot+netdevbpf
2 siblings, 1 reply; 5+ messages in thread
From: Paolo Abeni @ 2025-11-11 9:29 UTC (permalink / raw)
To: Jason Xing, davem, edumazet, kuba, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
john.fastabend
Cc: bpf, netdev, Jason Xing, Alexander Lobakin
On 10/31/25 11:33 AM, Jason Xing wrote:
> From: Jason Xing <kernelxing@tencent.com>
>
> Since Eric proposed adding indirect call wrappers for UDP and saw a
> large improvement[1], the same approach can be applied to the xsk
> scenario.
>
> This patch adds an indirect call wrapper for xsk, which stably improves
> copy-mode performance by around 1%, as observed with IXGBE loaded at
> 10Gb/sec. As throughput grows, the positive effect is magnified. With
> this patch applied on top of the batch xmit series[2], our internal
> application shows a <5% improvement, though that number is a little
> unstable.
>
> Use the INDIRECT_CALLABLE wrappers so that xsk_destruct_skb stays
> static, as it used to be, when the retpoline mitigation config is off.
>
> Note that the freeing path can be very hot: with the xdpsock test it
> can be invoked around 2,000,000 times per second.
>
> [1]: https://lore.kernel.org/netdev/20251006193103.2684156-2-edumazet@google.com/
> [2]: https://lore.kernel.org/all/20251021131209.41491-1-kerneljasonxing@gmail.com/
>
> Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> Signed-off-by: Jason Xing <kernelxing@tencent.com>
My take here is that this should not impact the maintenance cost too
negatively, and I agree that virtio_net is a legit/significant
use-case.
Cheers,
Paolo
* Re: [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb
2025-11-11 9:29 ` Paolo Abeni
@ 2025-11-11 12:15 ` Jason Xing
0 siblings, 0 replies; 5+ messages in thread
From: Jason Xing @ 2025-11-11 12:15 UTC (permalink / raw)
To: Paolo Abeni
Cc: davem, edumazet, kuba, bjorn, magnus.karlsson, maciej.fijalkowski,
jonathan.lemon, sdf, ast, daniel, hawk, john.fastabend, bpf,
netdev, Jason Xing, Alexander Lobakin
On Tue, Nov 11, 2025 at 5:30 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On 10/31/25 11:33 AM, Jason Xing wrote:
> > From: Jason Xing <kernelxing@tencent.com>
> >
> > Since Eric proposed adding indirect call wrappers for UDP and saw a
> > large improvement[1], the same approach can be applied to the xsk
> > scenario.
> >
> > This patch adds an indirect call wrapper for xsk, which stably improves
> > copy-mode performance by around 1%, as observed with IXGBE loaded at
> > 10Gb/sec. As throughput grows, the positive effect is magnified. With
> > this patch applied on top of the batch xmit series[2], our internal
> > application shows a <5% improvement, though that number is a little
> > unstable.
> >
> > Use the INDIRECT_CALLABLE wrappers so that xsk_destruct_skb stays
> > static, as it used to be, when the retpoline mitigation config is off.
> >
> > Note that the freeing path can be very hot: with the xdpsock test it
> > can be invoked around 2,000,000 times per second.
> >
> > [1]: https://lore.kernel.org/netdev/20251006193103.2684156-2-edumazet@google.com/
> > [2]: https://lore.kernel.org/all/20251021131209.41491-1-kerneljasonxing@gmail.com/
> >
> > Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> > Signed-off-by: Jason Xing <kernelxing@tencent.com>
>
> My take here is that this should not impact too negatively the
> maintenance cost, and I agree that virtio_net is a legit/significant
> use-case.
Thanks for your understanding :) This use case is one of my biggest
headaches because I have to use copy mode :(
Thanks,
Jason
* Re: [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb
2025-10-31 10:33 [PATCH net-next v3] xsk: add indirect call for xsk_destruct_skb Jason Xing
2025-11-05 6:29 ` Jason Xing
2025-11-11 9:29 ` Paolo Abeni
@ 2025-11-11 10:00 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 5+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-11-11 10:00 UTC (permalink / raw)
To: Jason Xing
Cc: davem, edumazet, kuba, pabeni, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, sdf, ast, daniel, hawk,
john.fastabend, bpf, netdev, kernelxing, aleksander.lobakin
Hello:
This patch was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:
On Fri, 31 Oct 2025 18:33:28 +0800 you wrote:
> From: Jason Xing <kernelxing@tencent.com>
>
> Since Eric proposed adding indirect call wrappers for UDP and saw a
> large improvement[1], the same approach can be applied to the xsk
> scenario.
>
> This patch adds an indirect call wrapper for xsk, which stably improves
> copy-mode performance by around 1%, as observed with IXGBE loaded at
> 10Gb/sec. As throughput grows, the positive effect is magnified. With
> this patch applied on top of the batch xmit series[2], our internal
> application shows a <5% improvement, though that number is a little
> unstable.
>
> [...]
Here is the summary with links:
- [net-next,v3] xsk: add indirect call for xsk_destruct_skb
https://git.kernel.org/netdev/net-next/c/8da7bea7db69
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html