linux-trace-kernel.vger.kernel.org archive mirror
* [PATCH bpf-next v2] bpf: Remove migrate_disable in kprobe_multi_link_prog_run
@ 2025-08-05 16:27 Tao Chen
  2025-08-12 22:05 ` Andrii Nakryiko
  0 siblings, 1 reply; 4+ messages in thread
From: Tao Chen @ 2025-08-05 16:27 UTC (permalink / raw)
  To: ast, daniel, andrii, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, mattbobrowski,
	rostedt, mhiramat, mathieu.desnoyers
  Cc: bpf, linux-kernel, linux-trace-kernel, Tao Chen

The BPF program should run with migration disabled. kprobe_multi_link_prog_run()
is called all the way from the graph tracer, which disables preemption in
function_graph_enter_regs(), so, as Jiri and Yonghong suggested, there is no
need to use migrate_disable(). As a result, some overhead may be reduced.

Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Tao Chen <chen.dylane@linux.dev>
---
 kernel/trace/bpf_trace.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Change list:
 v1 -> v2:
  - s/called the way/called all the way/.(Jiri)
 v1: https://lore.kernel.org/bpf/f7acfd22-bcf3-4dff-9a87-7c1e6f84ce9c@linux.dev

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 3ae52978cae..5701791e3cb 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
 		goto out;
 	}
 
-	migrate_disable();
+	/*
+	 * The BPF program should run with migration disabled. kprobe_multi_link_prog_run
+	 * is called all the way from the graph tracer, which disables preemption in
+	 * function_graph_enter_regs, so there is no need to use migrate_disable.
+	 * Accessing the above percpu data bpf_prog_active is also safe for the same
+	 * reason.
+	 */
 	rcu_read_lock();
 	regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
 	old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
 	err = bpf_prog_run(link->link.prog, regs);
 	bpf_reset_run_ctx(old_run_ctx);
 	rcu_read_unlock();
-	migrate_enable();
 
  out:
 	__this_cpu_dec(bpf_prog_active);
-- 
2.48.1



* Re: [PATCH bpf-next v2] bpf: Remove migrate_disable in kprobe_multi_link_prog_run
  2025-08-05 16:27 [PATCH bpf-next v2] bpf: Remove migrate_disable in kprobe_multi_link_prog_run Tao Chen
@ 2025-08-12 22:05 ` Andrii Nakryiko
  2025-08-13 11:54   ` Tao Chen
  2025-08-14 11:35   ` Tao Chen
  0 siblings, 2 replies; 4+ messages in thread
From: Andrii Nakryiko @ 2025-08-12 22:05 UTC (permalink / raw)
  To: Tao Chen
  Cc: ast, daniel, andrii, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, mattbobrowski,
	rostedt, mhiramat, mathieu.desnoyers, bpf, linux-kernel,
	linux-trace-kernel

On Tue, Aug 5, 2025 at 9:28 AM Tao Chen <chen.dylane@linux.dev> wrote:
>
> The BPF program should run with migration disabled. kprobe_multi_link_prog_run()
> is called all the way from the graph tracer, which disables preemption in
> function_graph_enter_regs(), so, as Jiri and Yonghong suggested, there is no
> need to use migrate_disable(). As a result, some overhead may be reduced.
>
> Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
> Acked-by: Yonghong Song <yonghong.song@linux.dev>
> Acked-by: Jiri Olsa <jolsa@kernel.org>
> Signed-off-by: Tao Chen <chen.dylane@linux.dev>
> ---
>  kernel/trace/bpf_trace.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> Change list:
>  v1 -> v2:
>   - s/called the way/called all the way/.(Jiri)
>  v1: https://lore.kernel.org/bpf/f7acfd22-bcf3-4dff-9a87-7c1e6f84ce9c@linux.dev
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 3ae52978cae..5701791e3cb 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,

even though bpf_prog_run() eventually calls cant_migrate(), we should
add it before that __this_cpu_inc_return() call as well, because that
one is relying on that non-migration independently from bpf_prog_run()

>                 goto out;
>         }
>
> -       migrate_disable();
> +       /*
> +        * The BPF program should run with migration disabled. kprobe_multi_link_prog_run
> +        * is called all the way from the graph tracer, which disables preemption in
> +        * function_graph_enter_regs, so there is no need to use migrate_disable.
> +        * Accessing the above percpu data bpf_prog_active is also safe for the same
> +        * reason.
> +        */

let's shorten this a bit to something like:

/* graph tracer framework ensures we won't migrate */
cant_migrate();

all the other stuff in the comment can become outdated way too easily
and/or is sort of general BPF implementation knowledge
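
For illustration, a rough sketch of the top of kprobe_multi_link_prog_run() with
both suggestions applied (this is not the actual v3 patch; the
__this_cpu_inc_return() check is the one referred to above with its body elided,
and the rest is taken from the hunk being reviewed):

	/* graph tracer framework ensures we won't migrate */
	cant_migrate();

	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		...
		goto out;
	}

	rcu_read_lock();
	regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
	old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
	err = bpf_prog_run(link->link.prog, regs);
	bpf_reset_run_ctx(old_run_ctx);
	rcu_read_unlock();

 out:
	__this_cpu_dec(bpf_prog_active);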

pw-bot: cr


>         rcu_read_lock();
>         regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
>         old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
>         err = bpf_prog_run(link->link.prog, regs);
>         bpf_reset_run_ctx(old_run_ctx);
>         rcu_read_unlock();
> -       migrate_enable();
>
>   out:
>         __this_cpu_dec(bpf_prog_active);
> --
> 2.48.1
>


* Re: [PATCH bpf-next v2] bpf: Remove migrate_disable in kprobe_multi_link_prog_run
  2025-08-12 22:05 ` Andrii Nakryiko
@ 2025-08-13 11:54   ` Tao Chen
  2025-08-14 11:35   ` Tao Chen
  1 sibling, 0 replies; 4+ messages in thread
From: Tao Chen @ 2025-08-13 11:54 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: ast, daniel, andrii, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, mattbobrowski,
	rostedt, mhiramat, mathieu.desnoyers, bpf, linux-kernel,
	linux-trace-kernel

On 2025/8/13 06:05, Andrii Nakryiko wrote:
> On Tue, Aug 5, 2025 at 9:28 AM Tao Chen <chen.dylane@linux.dev> wrote:
>>
>> The BPF program should run with migration disabled. kprobe_multi_link_prog_run()
>> is called all the way from the graph tracer, which disables preemption in
>> function_graph_enter_regs(), so, as Jiri and Yonghong suggested, there is no
>> need to use migrate_disable(). As a result, some overhead may be reduced.
>>
>> Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
>> Acked-by: Yonghong Song <yonghong.song@linux.dev>
>> Acked-by: Jiri Olsa <jolsa@kernel.org>
>> Signed-off-by: Tao Chen <chen.dylane@linux.dev>
>> ---
>>   kernel/trace/bpf_trace.c | 9 +++++++--
>>   1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> Change list:
>>   v1 -> v2:
>>    - s/called the way/called all the way/.(Jiri)
>>   v1: https://lore.kernel.org/bpf/f7acfd22-bcf3-4dff-9a87-7c1e6f84ce9c@linux.dev
>>
>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>> index 3ae52978cae..5701791e3cb 100644
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> 
> even though bpf_prog_run() eventually calls cant_migrate(), we should
> add it before that __this_cpu_inc_return() call as well, because that
> one is relying on that non-migration independently from bpf_prog_run()
> 

Hi Andrii,

There is a __this_cpu_preempt_check() inside __this_cpu_inc_return(), and its
criteria are similar to cant_migrate()'s, so I am not sure whether that check
alone is already enough.

>>                  goto out;
>>          }
>>
>> -       migrate_disable();
>> +       /*
>> +        * The BPF program should run with migration disabled. kprobe_multi_link_prog_run
>> +        * is called all the way from the graph tracer, which disables preemption in
>> +        * function_graph_enter_regs, so there is no need to use migrate_disable.
>> +        * Accessing the above percpu data bpf_prog_active is also safe for the same
>> +        * reason.
>> +        */
> 
> let's shorten this a bit to something like:
> 
> /* graph tracer framework ensures we won't migrate */
will change it in v3.

> cant_migrate();
> 
> all the other stuff in the comment can become outdated way too easily
> and/or is sort of general BPF implementation knowledge
> 
> pw-bot: cr
> 
> 
>>          rcu_read_lock();
>>          regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
>>          old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
>>          err = bpf_prog_run(link->link.prog, regs);
>>          bpf_reset_run_ctx(old_run_ctx);
>>          rcu_read_unlock();
>> -       migrate_enable();
>>
>>    out:
>>          __this_cpu_dec(bpf_prog_active);
>> --
>> 2.48.1
>>
-- 
Best Regards
Tao Chen


* Re: [PATCH bpf-next v2] bpf: Remove migrate_disable in kprobe_multi_link_prog_run
  2025-08-12 22:05 ` Andrii Nakryiko
  2025-08-13 11:54   ` Tao Chen
@ 2025-08-14 11:35   ` Tao Chen
  1 sibling, 0 replies; 4+ messages in thread
From: Tao Chen @ 2025-08-14 11:35 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: ast, daniel, andrii, martin.lau, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, mattbobrowski,
	rostedt, mhiramat, mathieu.desnoyers, bpf, linux-kernel,
	linux-trace-kernel

On 2025/8/13 06:05, Andrii Nakryiko wrote:
> On Tue, Aug 5, 2025 at 9:28 AM Tao Chen <chen.dylane@linux.dev> wrote:
>>
>> The BPF program should run with migration disabled. kprobe_multi_link_prog_run()
>> is called all the way from the graph tracer, which disables preemption in
>> function_graph_enter_regs(), so, as Jiri and Yonghong suggested, there is no
>> need to use migrate_disable(). As a result, some overhead may be reduced.
>>
>> Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
>> Acked-by: Yonghong Song <yonghong.song@linux.dev>
>> Acked-by: Jiri Olsa <jolsa@kernel.org>
>> Signed-off-by: Tao Chen <chen.dylane@linux.dev>
>> ---
>>   kernel/trace/bpf_trace.c | 9 +++++++--
>>   1 file changed, 7 insertions(+), 2 deletions(-)
>>
>> Change list:
>>   v1 -> v2:
>>    - s/called the way/called all the way/.(Jiri)
>>   v1: https://lore.kernel.org/bpf/f7acfd22-bcf3-4dff-9a87-7c1e6f84ce9c@linux.dev
>>
>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>> index 3ae52978cae..5701791e3cb 100644
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
> 
> even though bpf_prog_run() eventually calls cant_migrate(), we should
> add it before that __this_cpu_inc_return() call as well, because that
> one is relying on that non-migration independently from bpf_prog_run()
> 

Maybe cant_sleep() is better, like in trace_call_bpf(): cant_sleep() does not
check migration_disabled again, since that is already done in
__this_cpu_preempt_check(). I will add it in v3.
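
To make that concrete, a sketch of how the entry of kprobe_multi_link_prog_run()
might then look in v3, assuming it mirrors the trace_call_bpf() pattern (an
illustration only, not code from this thread):

	/*
	 * The graph tracer disables preemption before we get here, so this
	 * context is atomic: it can neither sleep nor migrate, and the percpu
	 * bpf_prog_active access below is safe.
	 */
	cant_sleep();

	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		...
		goto out;
	}

Both cant_sleep() and cant_migrate() are debug assertions that compile away on
non-debug configurations, so the choice mainly affects which condition gets
verified on debug kernels.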

>>                  goto out;
>>          }
>>
>> -       migrate_disable();
>> +       /*
>> +        * The BPF program should run with migration disabled. kprobe_multi_link_prog_run
>> +        * is called all the way from the graph tracer, which disables preemption in
>> +        * function_graph_enter_regs, so there is no need to use migrate_disable.
>> +        * Accessing the above percpu data bpf_prog_active is also safe for the same
>> +        * reason.
>> +        */
> 
> let's shorten this a bit to something like:
> 
> /* graph tracer framework ensures we won't migrate */
> cant_migrate();
> 
> all the other stuff in the comment can become outdated way too easily
> and/or is sort of general BPF implementation knowledge
> 
> pw-bot: cr
> 
> 
>>          rcu_read_lock();
>>          regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
>>          old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
>>          err = bpf_prog_run(link->link.prog, regs);
>>          bpf_reset_run_ctx(old_run_ctx);
>>          rcu_read_unlock();
>> -       migrate_enable();
>>
>>    out:
>>          __this_cpu_dec(bpf_prog_active);
>> --
>> 2.48.1
>>

-- 
Best Regards
Tao Chen

