From: Mykyta Yatsenko <mykyta.yatsenko5@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org,
daniel@iogearbox.net, kafai@meta.com, kernel-team@meta.com,
eddyz87@gmail.com, memxor@gmail.com, rostedt@goodmis.org,
Mykyta Yatsenko <yatsenko@meta.com>
Subject: Re: [PATCH bpf-next v11 1/6] bpf: Add sleepable support for raw tracepoint programs
Date: Thu, 23 Apr 2026 13:39:24 +0100 [thread overview]
Message-ID: <b06ad9d8-681e-41a4-bfd4-fc86f021add9@gmail.com> (raw)
In-Reply-To: <20260423095650.GA3126523@noisy.programming.kicks-ass.net>
On 4/23/26 10:56 AM, Peter Zijlstra wrote:
> On Tue, Apr 21, 2026 at 10:14:17AM -0700, Mykyta Yatsenko wrote:
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Rework __bpf_trace_run() to support sleepable BPF programs by using
>> explicit RCU flavor selection, following the uprobe_prog_run() pattern.
>>
>> For sleepable programs, use rcu_read_lock_tasks_trace() for lifetime
>> protection with migrate_disable().
>
> Why the migrate_disable() ? This is a new kind of BPF program; would it
> not be a good opportunity to not have this constraint?
>
I rely on the existing infrastructure: __bpf_prog_run() has
cant_migrate(), there are per-cpu variable accesses, and the BPF
program itself uses a private stack (per-cpu allocated). It looks like
making sleepable tracepoints run without migrate_disable() would force
us to change quite a few things.
Then there are also a few problems with how per-cpu BPF maps would
work; the timer helpers rely on per-cpu variables for deadlock
detection.
>> For non-sleepable programs, use the
>> regular rcu_read_lock_dont_migrate().
>>
>> Remove the preempt_disable_notrace/preempt_enable_notrace pair from
>> the faultable tracepoint BPF probe wrapper in bpf_probe.h, since
>> migration protection and RCU locking are now handled per-program
>> inside __bpf_trace_run().
>>
>> Adapt bpf_prog_test_run_raw_tp() for sleepable programs: reject
>> BPF_F_TEST_RUN_ON_CPU since sleepable programs cannot run in hardirq
>> or preempt-disabled context, and call __bpf_prog_test_run_raw_tp()
>> directly instead of via smp_call_function_single(). Rework
>> __bpf_prog_test_run_raw_tp() to select RCU flavor per-program and
>> add per-program recursion context guard for private stack safety.
>>
>> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> include/trace/bpf_probe.h | 2 --
>> kernel/trace/bpf_trace.c | 20 ++++++++++++---
>> net/bpf/test_run.c | 65 ++++++++++++++++++++++++++++++++++++-----------
>> 3 files changed, 67 insertions(+), 20 deletions(-)
>>
>> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
>> index 9391d54d3f12..d1de8f9aa07f 100644
>> --- a/include/trace/bpf_probe.h
>> +++ b/include/trace/bpf_probe.h
>> @@ -58,9 +58,7 @@ static notrace void \
>> __bpf_trace_##call(void *__data, proto) \
>> { \
>> might_fault(); \
>> - preempt_disable_notrace(); \
>> CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
>> - preempt_enable_notrace(); \
>> }
>>
>> #undef DECLARE_EVENT_SYSCALL_CLASS
>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>> index e916f0ccbed9..7276c72c1d31 100644
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -2072,11 +2072,19 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
>> static __always_inline
>> void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
>> {
>> + struct srcu_ctr __percpu *scp = NULL;
>> struct bpf_prog *prog = link->link.prog;
>> + bool sleepable = prog->sleepable;
>> struct bpf_run_ctx *old_run_ctx;
>> struct bpf_trace_run_ctx run_ctx;
>>
>> - rcu_read_lock_dont_migrate();
>> + if (sleepable) {
>> + scp = rcu_read_lock_tasks_trace();
>> + migrate_disable();
>> + } else {
>> + rcu_read_lock_dont_migrate();
>> + }
>> +
>> if (unlikely(!bpf_prog_get_recursion_context(prog))) {
>> bpf_prog_inc_misses_counter(prog);
>> goto out;
>> @@ -2085,12 +2093,18 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
>> run_ctx.bpf_cookie = link->cookie;
>> old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
>>
>> - (void) bpf_prog_run(prog, args);
>> + (void)bpf_prog_run(prog, args);
>>
>> bpf_reset_run_ctx(old_run_ctx);
>> out:
>> bpf_prog_put_recursion_context(prog);
>> - rcu_read_unlock_migrate();
>> +
>> + if (sleepable) {
>> + migrate_enable();
>> + rcu_read_unlock_tasks_trace(scp);
>> + } else {
>> + rcu_read_unlock_migrate();
>> + }
>> }
>
> Since you have a clear distinction between __BPF_DECLARE_TRACE() and
> __BPF_DELARE_TRACE_SYSCALL(), would it not make more sense to have
> different versions of __bpf_trace_run() instead of having a runtime
> condition on ->sleepable ?
A syscall tracepoint can be sleepable or non-sleepable, so the runtime
dispatch on prog->sleepable is needed anyway. I'm also not sure how to
get access to the bpf_prog in __BPF_DECLARE_TRACE_SYSCALL(); that
probably wouldn't look very clean.
Thread overview: 17+ messages
2026-04-21 17:14 [PATCH bpf-next v11 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
2026-04-21 17:14 ` [PATCH bpf-next v11 1/6] bpf: Add sleepable support for raw " Mykyta Yatsenko
2026-04-23 9:56 ` Peter Zijlstra
2026-04-23 12:39 ` Mykyta Yatsenko [this message]
2026-04-23 14:04 ` Steven Rostedt
2026-04-23 14:11 ` Kumar Kartikeya Dwivedi
2026-04-21 17:14 ` [PATCH bpf-next v11 2/6] bpf: Add bpf_prog_run_array_sleepable() Mykyta Yatsenko
2026-04-21 20:42 ` Alexei Starovoitov
2026-04-21 23:16 ` Mykyta Yatsenko
2026-04-23 10:00 ` Peter Zijlstra
2026-04-23 9:59 ` Peter Zijlstra
2026-04-21 17:14 ` [PATCH bpf-next v11 3/6] bpf: Add sleepable support for classic tracepoint programs Mykyta Yatsenko
2026-04-22 15:58 ` Steven Rostedt
2026-04-21 17:14 ` [PATCH bpf-next v11 4/6] bpf: Verifier support for sleepable " Mykyta Yatsenko
2026-04-21 17:14 ` [PATCH bpf-next v11 5/6] libbpf: Add section handlers for sleepable tracepoints Mykyta Yatsenko
2026-04-21 17:14 ` [PATCH bpf-next v11 6/6] selftests/bpf: Add tests for sleepable tracepoint programs Mykyta Yatsenko
2026-04-21 18:06 ` bot+bpf-ci