* [PATCH bpf] cpumap: disable page_pool direct xdp_return need larger scope
@ 2025-08-14 18:24 Jesper Dangaard Brouer
  2025-08-14 20:56 ` Chris Arges
  2025-08-15  4:50 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Jesper Dangaard Brouer @ 2025-08-14 18:24 UTC (permalink / raw)
  To: bpf, Jakub Kicinski, dtatulea
  Cc: Jesper Dangaard Brouer, Alexei Starovoitov, Daniel Borkmann,
	netdev, Eric Dumazet, David S. Miller, Paolo Abeni, tariqt,
	memxor, john.fastabend, kernel-team, yan, jbrandeburg,
	carges, arzeznik

When running an XDP bpf_prog on the remote CPU in cpumap code, we
must disable the direct return optimization that xdp_return can
perform for mem_type page_pool. This optimization assumes the code
is still executing under the RX-NAPI of the original receiving CPU,
which isn't true on the remote CPU.
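
For context, the page_pool branch of the xdp_return path looks
roughly like this (a simplified sketch, not the exact __xdp_return()
code; variable names are illustrative): direct recycling into the
pool's lockless cache is only safe from the original RX-NAPI
context, so the no-direct flag downgrades it to the slower,
cross-CPU safe path:

	case MEM_TYPE_PAGE_POOL:
		/* Direct recycle is only safe in the pool's own RX-NAPI
		 * context; the no-direct flag (set by cpumap) forces the
		 * non-direct path instead.
		 */
		if (napi_direct && xdp_return_frame_no_direct())
			napi_direct = false;
		page_pool_put_full_page(page->pp, page, napi_direct);
		break;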

The cpumap code already disabled this via helpers
xdp_set_return_frame_no_direct() and xdp_clear_return_frame_no_direct(),
but the scope didn't include xdp_do_flush().
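
The intended pattern (which the diff below establishes) is that the
no-direct scope covers everything on this CPU that can free
page_pool backed frames, including the redirect flush. Sketched,
with names taken from the diff below:

	xdp_set_return_frame_no_direct();
	nframes = cpu_map_bpf_prog_run_xdp(rcpu, frames, n, stats);
	if (stats->redirect)
		xdp_do_flush();		/* may free frames via devmap */
	xdp_clear_return_frame_no_direct();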

When doing XDP_REDIRECT towards e.g. devmap, this causes the
function bq_xmit_all() to run with the direct return optimization
enabled. This can lead to hard-to-find bugs. The issue only
happens when bq_xmit_all() cannot ndo_xdp_xmit all frames and
then frees them via xdp_return_frame_rx_napi().
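
The problematic path, roughly (a simplified sketch of devmap's
bq_xmit_all() error handling, not the exact code; names are
illustrative):

	sent = dev->netdev_ops->ndo_xdp_xmit(dev, to_send, bq->q, flags);
	if (sent < 0)
		sent = 0;
	/* Frames the driver did not transmit are freed here; the
	 * _rx_napi variant attempts direct page_pool recycling unless
	 * the no-direct flag is set on this CPU.
	 */
	for (i = sent; i < to_send; i++)
		xdp_return_frame_rx_napi(bq->q[i]);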

Fix by expanding scope to include xdp_do_flush().

Fixes: 11941f8a8536 ("bpf: cpumap: Implement generic cpumap")
Found-by: Dragos Tatulea <dtatulea@nvidia.com>
Reported-by: Chris Arges <carges@cloudflare.com>
Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
---
 kernel/bpf/cpumap.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index b2b7b8ec2c2a..c46360b27871 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -186,7 +186,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 	struct xdp_buff xdp;
 	int i, nframes = 0;
 
-	xdp_set_return_frame_no_direct();
 	xdp.rxq = &rxq;
 
 	for (i = 0; i < n; i++) {
@@ -231,7 +230,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
 		}
 	}
 
-	xdp_clear_return_frame_no_direct();
 	stats->pass += nframes;
 
 	return nframes;
@@ -255,6 +253,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
 
 	rcu_read_lock();
 	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
+	xdp_set_return_frame_no_direct();
 
 	ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
 	if (unlikely(ret->skb_n))
@@ -264,6 +263,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
 	if (stats->redirect)
 		xdp_do_flush();
 
+	xdp_clear_return_frame_no_direct();
 	bpf_net_ctx_clear(bpf_net_ctx);
 	rcu_read_unlock();
 




* Re: [PATCH bpf] cpumap: disable page_pool direct xdp_return need larger scope
  2025-08-14 18:24 [PATCH bpf] cpumap: disable page_pool direct xdp_return need larger scope Jesper Dangaard Brouer
@ 2025-08-14 20:56 ` Chris Arges
  2025-08-15  4:50 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Chris Arges @ 2025-08-14 20:56 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: bpf, Jakub Kicinski, dtatulea, Alexei Starovoitov,
	Daniel Borkmann, netdev, Eric Dumazet, David S. Miller,
	Paolo Abeni, tariqt, memxor, john.fastabend, kernel-team, yan,
	jbrandeburg, arzeznik

On 2025-08-14 20:24:37, Jesper Dangaard Brouer wrote:
> When running an XDP bpf_prog on the remote CPU in cpumap code, we
> must disable the direct return optimization that xdp_return can
> perform for mem_type page_pool. This optimization assumes the code
> is still executing under the RX-NAPI of the original receiving CPU,
> which isn't true on the remote CPU.
> 
> The cpumap code already disabled this via helpers
> xdp_set_return_frame_no_direct() and xdp_clear_return_frame_no_direct(),
> but the scope didn't include xdp_do_flush().
> 
> When doing XDP_REDIRECT towards e.g. devmap, this causes the
> function bq_xmit_all() to run with the direct return optimization
> enabled. This can lead to hard-to-find bugs. The issue only
> happens when bq_xmit_all() cannot ndo_xdp_xmit all frames and
> then frees them via xdp_return_frame_rx_napi().
> 
> Fix by expanding scope to include xdp_do_flush().
> 
> Fixes: 11941f8a8536 ("bpf: cpumap: Implement generic cpumap")
> Found-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reported-by: Chris Arges <carges@cloudflare.com>
> Signed-off-by: Jesper Dangaard Brouer <hawk@kernel.org>
> ---
>  kernel/bpf/cpumap.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index b2b7b8ec2c2a..c46360b27871 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -186,7 +186,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>  	struct xdp_buff xdp;
>  	int i, nframes = 0;
>  
> -	xdp_set_return_frame_no_direct();
>  	xdp.rxq = &rxq;
>  
>  	for (i = 0; i < n; i++) {
> @@ -231,7 +230,6 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>  		}
>  	}
>  
> -	xdp_clear_return_frame_no_direct();
>  	stats->pass += nframes;
>  
>  	return nframes;
> @@ -255,6 +253,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
>  
>  	rcu_read_lock();
>  	bpf_net_ctx = bpf_net_ctx_set(&__bpf_net_ctx);
> +	xdp_set_return_frame_no_direct();
>  
>  	ret->xdp_n = cpu_map_bpf_prog_run_xdp(rcpu, frames, ret->xdp_n, stats);
>  	if (unlikely(ret->skb_n))
> @@ -264,6 +263,7 @@ static void cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
>  	if (stats->redirect)
>  		xdp_do_flush();
>  
> +	xdp_clear_return_frame_no_direct();
>  	bpf_net_ctx_clear(bpf_net_ctx);
>  	rcu_read_unlock();
>  
> 
>

FWIW, I tested this patch and could no longer reproduce the original issue.

Tested-by: Chris Arges <carges@cloudflare.com>

--chris


* Re: [PATCH bpf] cpumap: disable page_pool direct xdp_return need larger scope
  2025-08-14 18:24 [PATCH bpf] cpumap: disable page_pool direct xdp_return need larger scope Jesper Dangaard Brouer
  2025-08-14 20:56 ` Chris Arges
@ 2025-08-15  4:50 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-08-15  4:50 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: bpf, kuba, dtatulea, ast, borkmann, netdev, eric.dumazet, davem,
	pabeni, tariqt, memxor, john.fastabend, kernel-team, yan,
	jbrandeburg, carges, arzeznik

Hello:

This patch was applied to bpf/bpf.git (master)
by Martin KaFai Lau <martin.lau@kernel.org>:

On Thu, 14 Aug 2025 20:24:37 +0200 you wrote:
> When running an XDP bpf_prog on the remote CPU in cpumap code, we
> must disable the direct return optimization that xdp_return can
> perform for mem_type page_pool. This optimization assumes the code
> is still executing under the RX-NAPI of the original receiving CPU,
> which isn't true on the remote CPU.
> 
> The cpumap code already disabled this via helpers
> xdp_set_return_frame_no_direct() and xdp_clear_return_frame_no_direct(),
> but the scope didn't include xdp_do_flush().
> 
> [...]

Here is the summary with links:
  - [bpf] cpumap: disable page_pool direct xdp_return need larger scope
    https://git.kernel.org/bpf/bpf/c/7572a47ebcdf

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



