* [PATCH bpf v4 0/1] Fix bpf_link grace period wait for tracepoints
@ 2026-03-31 21:10 Kumar Kartikeya Dwivedi
From: Kumar Kartikeya Dwivedi @ 2026-03-31 21:10 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Eduard Zingerman, Paul E. McKenney,
Steven Rostedt, kkd, kernel-team
A recent change to non-faultable tracepoints switched from
preempt-disabled critical sections to SRCU-fast, which breaks
assumptions in the bpf_link_free() path. Use call_srcu() to fix the
breakage.
Changelog:
----------
v3 -> v4
v3: https://lore.kernel.org/bpf/20260331005215.2813492-1-memxor@gmail.com
* Introduce call_tracepoint_unregister_{atomic,syscall} instead. (Alexei, Steven)
v2 -> v3
v2: https://lore.kernel.org/bpf/20260330143102.1265391-1-memxor@gmail.com
* Introduce and switch to call_tracepoint_unregister_non_faultable(). (Steven)
* Address Andrii's comment and add Acked-by. (Andrii)
* Drop rcu_trace_implies_rcu_gp() conversion. (Alexei)
v1 -> v2
v1: https://lore.kernel.org/bpf/20260330032124.3141001-1-memxor@gmail.com
* Add Reviewed-by tags. (Paul, Puranjay)
* Adjust commit descriptions and comments to clarify intent. (Puranjay)
Kumar Kartikeya Dwivedi (1):
bpf: Fix grace period wait for tracepoint bpf_link
include/linux/bpf.h | 4 ++++
include/linux/tracepoint.h | 20 ++++++++++++++++++++
kernel/bpf/syscall.c | 25 +++++++++++++++++++++++--
3 files changed, 47 insertions(+), 2 deletions(-)
base-commit: c369299895a591d96745d6492d4888259b004a9e
--
2.52.0
* [PATCH bpf v4 1/1] bpf: Fix grace period wait for tracepoint bpf_link
From: Kumar Kartikeya Dwivedi @ 2026-03-31 21:10 UTC (permalink / raw)
To: bpf
Cc: Sun Jian, Puranjay Mohan, Andrii Nakryiko, Alexei Starovoitov,
Daniel Borkmann, Martin KaFai Lau, Eduard Zingerman,
Paul E. McKenney, Steven Rostedt, kkd, kernel-team
Recently, tracepoints were switched from using disabled preemption
(which acts as an RCU read-side critical section) to SRCU-fast when they
are not faultable. This means that to do a proper grace period wait for
programs running in such tracepoints, we must use SRCU's grace period
wait. This applies only to non-faultable tracepoints; faultable ones
continue using RCU Tasks Trace.
However, bpf_link_free() currently does call_rcu() for all cases when
the link is non-sleepable (hence, for tracepoints, non-faultable). Fix
this by doing a call_srcu() grace period wait.
As far as RCU Tasks Trace gp -> RCU gp chaining is concerned, it is
deemed unnecessary for tracepoint programs. The link and program are now
accessed under either RCU Tasks Trace protection or SRCU-fast
protection. The earlier logic chained both RCU Tasks Trace and RCU gp
waits for the sake of generality, even at the cost of an extra RCU gp
wait; that was unnecessary for tracepoints even before this change. In
practice no cost was paid, since rcu_trace_implies_rcu_gp() was always
true. Hence we need not chain any RCU gp after the SRCU gp.
For instance, in the non-faultable raw tracepoint case, the RCU read
section of the program in __bpf_trace_run() is enclosed in the SRCU
read-side critical section; likewise, for faultable raw tracepoints, the
program runs under RCU Tasks Trace protection. Hence, the outermost
scope can be waited upon to ensure correctness.
Also, sleepable programs cannot be attached to non-faultable
tracepoints, so whenever the program or link is sleepable, only RCU
Tasks Trace protection is being used for the link and prog.
Fixes: a46023d5616e ("tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast")
Reviewed-by: Sun Jian <sun.jian.kdev@gmail.com>
Reviewed-by: Puranjay Mohan <puranjay@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
include/linux/bpf.h | 4 ++++
include/linux/tracepoint.h | 20 ++++++++++++++++++++
kernel/bpf/syscall.c | 25 +++++++++++++++++++++++--
3 files changed, 47 insertions(+), 2 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 05b34a6355b0..35b1e25bd104 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1854,6 +1854,10 @@ struct bpf_link_ops {
* target hook is sleepable, we'll go through tasks trace RCU GP and
* then "classic" RCU GP; this need for chaining tasks trace and
* classic RCU GPs is designated by setting bpf_link->sleepable flag
+ *
+ * For non-sleepable tracepoint links we go through SRCU gp instead,
+ * since RCU is not used in that case. Sleepable tracepoints still
+ * follow the scheme above.
*/
void (*dealloc_deferred)(struct bpf_link *link);
int (*detach)(struct bpf_link *link);
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 22ca1c8b54f3..1d7f29f5e901 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -122,6 +122,22 @@ static inline bool tracepoint_is_faultable(struct tracepoint *tp)
{
return tp->ext && tp->ext->faultable;
}
+/*
+ * Run RCU callback with the appropriate grace period wait for non-faultable
+ * tracepoints, e.g., those used in atomic context.
+ */
+static inline void call_tracepoint_unregister_atomic(struct rcu_head *rcu, rcu_callback_t func)
+{
+ call_srcu(&tracepoint_srcu, rcu, func);
+}
+/*
+ * Run RCU callback with the appropriate grace period wait for faultable
+ * tracepoints, e.g., those used in syscall context.
+ */
+static inline void call_tracepoint_unregister_syscall(struct rcu_head *rcu, rcu_callback_t func)
+{
+ call_rcu_tasks_trace(rcu, func);
+}
#else
static inline void tracepoint_synchronize_unregister(void)
{ }
@@ -129,6 +145,10 @@ static inline bool tracepoint_is_faultable(struct tracepoint *tp)
{
return false;
}
+static inline void call_tracepoint_unregister_atomic(struct rcu_head *rcu, rcu_callback_t func)
+{ }
+static inline void call_tracepoint_unregister_syscall(struct rcu_head *rcu, rcu_callback_t func)
+{ }
#endif
#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 274039e36465..700938782bed 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3261,6 +3261,18 @@ static void bpf_link_defer_dealloc_rcu_gp(struct rcu_head *rcu)
bpf_link_dealloc(link);
}
+static bool bpf_link_is_tracepoint(struct bpf_link *link)
+{
+ /*
+ * Only these combinations support a tracepoint bpf_link.
+ * BPF_LINK_TYPE_TRACING raw_tp progs are hardcoded to use
+ * bpf_raw_tp_link_lops and thus dealloc_deferred(), see
+ * bpf_raw_tp_link_attach().
+ */
+ return link->type == BPF_LINK_TYPE_RAW_TRACEPOINT ||
+ (link->type == BPF_LINK_TYPE_TRACING && link->attach_type == BPF_TRACE_RAW_TP);
+}
+
static void bpf_link_defer_dealloc_mult_rcu_gp(struct rcu_head *rcu)
{
if (rcu_trace_implies_rcu_gp())
@@ -3279,16 +3291,25 @@ static void bpf_link_free(struct bpf_link *link)
if (link->prog)
ops->release(link);
if (ops->dealloc_deferred) {
- /* Schedule BPF link deallocation, which will only then
+ /*
+ * Schedule BPF link deallocation, which will only then
* trigger putting BPF program refcount.
* If underlying BPF program is sleepable or BPF link's target
* attach hookpoint is sleepable or otherwise requires RCU GPs
* to ensure link and its underlying BPF program is not
* reachable anymore, we need to first wait for RCU tasks
- * trace sync, and then go through "classic" RCU grace period
+ * trace sync, and then go through "classic" RCU grace period.
+ *
+ * For tracepoint BPF links, we need to go through SRCU grace
+ * period wait instead when non-faultable tracepoint is used. We
+ * don't need to chain SRCU grace period waits, however, for the
+ * faultable case, since it exclusively uses RCU Tasks Trace.
*/
if (link->sleepable || (link->prog && link->prog->sleepable))
call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp);
+ /* We need to do a SRCU grace period wait for non-faultable tracepoint BPF links. */
+ else if (bpf_link_is_tracepoint(link))
+ call_tracepoint_unregister_atomic(&link->rcu, bpf_link_defer_dealloc_rcu_gp);
else
call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp);
} else if (ops->dealloc) {
--
2.52.0
* Re: [PATCH bpf v4 1/1] bpf: Fix grace period wait for tracepoint bpf_link
From: Steven Rostedt @ 2026-03-31 21:23 UTC (permalink / raw)
To: Kumar Kartikeya Dwivedi
Cc: bpf, Sun Jian, Puranjay Mohan, Andrii Nakryiko,
Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Paul E. McKenney, kkd, kernel-team
On Tue, 31 Mar 2026 23:10:20 +0200
Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
> diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
> index 22ca1c8b54f3..1d7f29f5e901 100644
> --- a/include/linux/tracepoint.h
> +++ b/include/linux/tracepoint.h
> @@ -122,6 +122,22 @@ static inline bool tracepoint_is_faultable(struct tracepoint *tp)
> {
> return tp->ext && tp->ext->faultable;
> }
> +/*
> + * Run RCU callback with the appropriate grace period wait for non-faultable
> + * tracepoints, e.g., those used in atomic context.
> + */
> +static inline void call_tracepoint_unregister_atomic(struct rcu_head *rcu, rcu_callback_t func)
> +{
> + call_srcu(&tracepoint_srcu, rcu, func);
> +}
> +/*
> + * Run RCU callback with the appropriate grace period wait for faultable
> + * tracepoints, e.g., those used in syscall context.
> + */
> +static inline void call_tracepoint_unregister_syscall(struct rcu_head *rcu, rcu_callback_t func)
> +{
> + call_rcu_tasks_trace(rcu, func);
> +}
> #else
> static inline void tracepoint_synchronize_unregister(void)
> { }
> @@ -129,6 +145,10 @@ static inline bool tracepoint_is_faultable(struct tracepoint *tp)
> {
> return false;
> }
> +static inline void call_tracepoint_unregister_atomic(struct rcu_head *rcu, rcu_callback_t func)
> +{ }
> +static inline void call_tracepoint_unregister_syscall(struct rcu_head *rcu, rcu_callback_t func)
> +{ }
> #endif
>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
-- Steve
* Re: [PATCH bpf v4 0/1] Fix bpf_link grace period wait for tracepoints
From: patchwork-bot+netdevbpf @ 2026-03-31 23:10 UTC (permalink / raw)
To: Kumar Kartikeya Dwivedi
Cc: bpf, ast, andrii, daniel, martin.lau, eddyz87, paulmck, rostedt,
kkd, kernel-team
Hello:
This patch was applied to bpf/bpf.git (master)
by Alexei Starovoitov <ast@kernel.org>:
On Tue, 31 Mar 2026 23:10:19 +0200 you wrote:
> A recent change to non-faultable tracepoints switched from
> preempt-disabled critical sections to SRCU-fast, which breaks
> assumptions in the bpf_link_free() path. Use call_srcu() to fix the
> breakage.
>
> Changelog:
>
> [...]
Here is the summary with links:
- [bpf,v4,1/1] bpf: Fix grace period wait for tracepoint bpf_link
https://git.kernel.org/bpf/bpf/c/c76fef7dcd93
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html