* [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
@ 2026-03-24 19:03 Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw " Mykyta Yatsenko
` (6 more replies)
0 siblings, 7 replies; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 19:03 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
This series adds support for sleepable BPF programs attached to raw
tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
tracepoints (tp). The motivation is to allow BPF programs on syscall
tracepoints to use sleepable helpers such as bpf_copy_from_user(),
enabling reliable user memory reads that can page-fault.
This series removes the restriction against sleepable BPF programs on
faultable tracepoints:
Patch 1 modifies __bpf_trace_run() to support sleepable programs
following the uprobe_prog_run() pattern: use explicit
rcu_read_lock_trace() for sleepable programs and rcu_read_lock() for
non-sleepable programs. Also removes preempt_disable from the faultable
tracepoint BPF callback wrapper, since migration protection and RCU
locking are now managed per-program inside __bpf_trace_run().
Patch 2 renames bpf_prog_run_array_uprobe() to
bpf_prog_run_array_sleepable() to support the new use case.
Patch 3 adds sleepable support for classic tracepoints
(BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
and restructuring perf_syscall_enter/exit() to run BPF programs in
faultable context before the preempt-disabled per-cpu buffer
allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
lifetime protection, following the uprobe pattern. This adds
rcu_tasks_trace overhead for all classic tracepoint BPF programs on
syscall tracepoints, not just sleepable ones.
Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
load-time and attach-time checks to reject sleepable programs on
non-faultable tracepoints.
Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
raw_tracepoint.s, tp.s, and tracepoint.s.
Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
tests for non-faultable tracepoints.
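For illustration, a minimal sleepable tp_btf program of the kind this
series enables might look like the sketch below. Program and variable
names are hypothetical and the usual vmlinux.h/bpf_helpers.h includes
are assumed; the selftests in patch 6 are the authoritative examples.

```c
/* Hypothetical sketch of a sleepable BTF-based raw tracepoint program.
 * The ".s" suffix marks the program as sleepable, so
 * bpf_copy_from_user() (which may page-fault) becomes usable here.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("tp_btf.s/sys_enter")
int BPF_PROG(handle_sys_enter, struct pt_regs *regs, long id)
{
	struct __kernel_timespec ts;
	/* First syscall argument; non-CO-RE accessor is fine for tp_btf. */
	void *uaddr = (void *)PT_REGS_PARM1_SYSCALL(regs);

	/* Sleepable read of user memory; may fault and block. */
	if (bpf_copy_from_user(&ts, sizeof(ts), uaddr))
		return 0;
	return 0;
}
```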
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
Changes in v6:
- Remove the recursion check from trace_call_bpf_faultable(): sleepable
tracepoints are called from syscall enter/exit, so no recursion is
possible. (Kumar)
- Refactor bpf_prog_run_array_uprobe() to support the tracepoint
use case cleanly (Kumar)
- Link to v5: https://lore.kernel.org/r/20260316-sleepable_tracepoints-v5-0-85525de71d25@meta.com
Changes in v5:
- Addressed AI review: zero-initialize struct pt_regs in
perf_call_bpf_enter(); changed handling of tp.s and tracepoint.s in
attach_tp() in libbpf.
- Updated commit messages
- Link to v4: https://lore.kernel.org/r/20260313-sleepable_tracepoints-v4-0-debc688a66b3@meta.com
Changes in v4:
- Follow uprobe_prog_run() pattern with explicit rcu_read_lock_trace()
instead of relying on outer rcu_tasks_trace lock
- Add sleepable support for classic raw tracepoints (raw_tp.s)
- Add sleepable support for classic tracepoints (tp.s) with new
trace_call_bpf_faultable() and restructured perf_syscall_enter/exit()
- Add raw_tp.s, raw_tracepoint.s, tp.s, tracepoint.s SEC_DEF handlers
- Replace growing type enumeration in error message with generic
"program of this type cannot be sleepable"
- Use PT_REGS_PARM1_SYSCALL (non-CO-RE) in BTF test
- Add classic raw_tp and classic tracepoint sleepable tests
- Link to v3: https://lore.kernel.org/r/20260311-sleepable_tracepoints-v3-0-3e9bbde5bd22@meta.com
Changes in v3:
- Moved faultable tracepoint check from attach time to load time in
bpf_check_attach_target(), providing a clear verifier error message
- Folded preempt_disable removal into the sleepable execution path
patch
- Used RUN_TESTS() with __failure/__msg for negative test case instead
of explicit userspace program
- Reduced series from 6 patches to 4
- Link to v2: https://lore.kernel.org/r/20260225-sleepable_tracepoints-v2-0-0330dafd650f@meta.com
Changes in v2:
- Addressed AI review points: modified the order of the patches
- Link to v1: https://lore.kernel.org/bpf/20260218-sleepable_tracepoints-v1-0-ec2705497208@meta.com/
---
Mykyta Yatsenko (6):
bpf: Add sleepable support for raw tracepoint programs
bpf: Rename bpf_prog_run_array_uprobe() to bpf_prog_run_array_sleepable()
bpf: Add sleepable support for classic tracepoint programs
bpf: Verifier support for sleepable tracepoint programs
libbpf: Add section handlers for sleepable tracepoints
selftests/bpf: Add tests for sleepable tracepoint programs
include/linux/bpf.h | 7 +-
include/linux/trace_events.h | 6 +
include/trace/bpf_probe.h | 2 -
kernel/bpf/syscall.c | 5 +
kernel/bpf/verifier.c | 13 ++-
kernel/events/core.c | 9 ++
kernel/trace/bpf_trace.c | 50 ++++++++-
kernel/trace/trace_syscalls.c | 110 ++++++++++---------
kernel/trace/trace_uprobe.c | 2 +-
tools/lib/bpf/libbpf.c | 39 +++++--
.../bpf/prog_tests/sleepable_tracepoints.c | 121 +++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints.c | 117 ++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints_fail.c | 18 +++
tools/testing/selftests/bpf/verifier/sleepable.c | 17 ++-
14 files changed, 444 insertions(+), 72 deletions(-)
---
base-commit: 6c8e1a9eee0fec802b542dadf768c30c2a183b3c
change-id: 20260216-sleepable_tracepoints-381ae1410550
Best regards,
--
Mykyta Yatsenko <yatsenko@meta.com>
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw tracepoint programs
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
@ 2026-03-24 19:03 ` Mykyta Yatsenko
2026-03-24 19:46 ` Alexei Starovoitov
2026-03-24 19:03 ` [PATCH bpf-next v6 2/6] bpf: Rename bpf_prog_run_array_uprobe() to bpf_prog_run_array_sleepable() Mykyta Yatsenko
` (5 subsequent siblings)
6 siblings, 1 reply; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 19:03 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Rework __bpf_trace_run() to support sleepable BPF programs by using
explicit RCU flavor selection, following the uprobe_prog_run() pattern.
For sleepable programs, use rcu_read_lock_tasks_trace() for lifetime
protection and add a might_fault() annotation. For non-sleepable
programs, use the regular rcu_read_lock(). Replace the combined
rcu_read_lock_dont_migrate() with separate rcu_read_lock()/
migrate_disable() calls, since sleepable programs need
rcu_read_lock_tasks_trace() instead of rcu_read_lock().
Remove the preempt_disable_notrace/preempt_enable_notrace pair from
the faultable tracepoint BPF probe wrapper in bpf_probe.h, since
migration protection and RCU locking are now handled per-program
inside __bpf_trace_run().
This prepares the runtime execution path for both BTF-based raw
tracepoints (tp_btf) and classic raw tracepoints (raw_tp) to support
sleepable BPF programs on faultable tracepoints (e.g. syscall
tracepoints). The verifier changes to allow loading sleepable
programs are in a subsequent patch.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
include/trace/bpf_probe.h | 2 --
kernel/trace/bpf_trace.c | 21 ++++++++++++++++++---
2 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
index 9391d54d3f12..d1de8f9aa07f 100644
--- a/include/trace/bpf_probe.h
+++ b/include/trace/bpf_probe.h
@@ -58,9 +58,7 @@ static notrace void \
__bpf_trace_##call(void *__data, proto) \
{ \
might_fault(); \
- preempt_disable_notrace(); \
CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
- preempt_enable_notrace(); \
}
#undef DECLARE_EVENT_SYSCALL_CLASS
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 0b040a417442..35ed53807cfd 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2072,11 +2072,18 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
static __always_inline
void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
{
+ struct srcu_ctr __percpu *scp = NULL;
struct bpf_prog *prog = link->link.prog;
+ bool sleepable = prog->sleepable;
struct bpf_run_ctx *old_run_ctx;
struct bpf_trace_run_ctx run_ctx;
- rcu_read_lock_dont_migrate();
+ if (sleepable)
+ scp = rcu_read_lock_tasks_trace();
+ else
+ rcu_read_lock();
+
+ migrate_disable();
if (unlikely(!bpf_prog_get_recursion_context(prog))) {
bpf_prog_inc_misses_counter(prog);
goto out;
@@ -2085,12 +2092,20 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
run_ctx.bpf_cookie = link->cookie;
old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
- (void) bpf_prog_run(prog, args);
+ if (sleepable)
+ might_fault();
+
+ (void)bpf_prog_run(prog, args);
bpf_reset_run_ctx(old_run_ctx);
out:
bpf_prog_put_recursion_context(prog);
- rcu_read_unlock_migrate();
+ migrate_enable();
+
+ if (sleepable)
+ rcu_read_unlock_tasks_trace(scp);
+ else
+ rcu_read_unlock();
}
#define UNPACK(...) __VA_ARGS__
--
2.52.0
* [PATCH bpf-next v6 2/6] bpf: Rename bpf_prog_run_array_uprobe() to bpf_prog_run_array_sleepable()
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw " Mykyta Yatsenko
@ 2026-03-24 19:03 ` Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 3/6] bpf: Add sleepable support for classic tracepoint programs Mykyta Yatsenko
` (4 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 19:03 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
bpf_prog_run_array_uprobe() will be used by both uprobes and faultable
classic tracepoints. Rename it to bpf_prog_run_array_sleepable()
to reflect its general purpose:
per-program RCU flavor selection for sleepable program support.
Add a bool is_uprobe parameter instead of hardcoding
run_ctx.is_uprobe = true. This flag controls bpf_get_func_ip_kprobe()
address resolution and should only be set for actual uprobe callers.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
include/linux/bpf.h | 7 ++++---
kernel/trace/trace_uprobe.c | 2 +-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 05b34a6355b0..9fe19ab21cfc 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2487,8 +2487,9 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
* rcu-protected dynamically sized maps.
*/
static __always_inline u32
-bpf_prog_run_array_uprobe(const struct bpf_prog_array *array,
- const void *ctx, bpf_prog_run_fn run_prog)
+bpf_prog_run_array_sleepable(const struct bpf_prog_array *array,
+ const void *ctx, bpf_prog_run_fn run_prog,
+ bool is_uprobe)
{
const struct bpf_prog_array_item *item;
const struct bpf_prog *prog;
@@ -2504,7 +2505,7 @@ bpf_prog_run_array_uprobe(const struct bpf_prog_array *array,
migrate_disable();
- run_ctx.is_uprobe = true;
+ run_ctx.is_uprobe = is_uprobe;
old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
item = &array->items[0];
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 2cabf8a23ec5..eb4d78e689ac 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1402,7 +1402,7 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
rcu_read_lock_trace();
array = rcu_dereference_check(call->prog_array, rcu_read_lock_trace_held());
- ret = bpf_prog_run_array_uprobe(array, regs, bpf_prog_run);
+ ret = bpf_prog_run_array_sleepable(array, regs, bpf_prog_run, true);
rcu_read_unlock_trace();
if (!ret)
return;
--
2.52.0
* [PATCH bpf-next v6 3/6] bpf: Add sleepable support for classic tracepoint programs
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw " Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 2/6] bpf: Rename bpf_prog_run_array_uprobe() to bpf_prog_run_array_sleepable() Mykyta Yatsenko
@ 2026-03-24 19:03 ` Mykyta Yatsenko
2026-03-24 19:56 ` Alexei Starovoitov
2026-03-24 19:03 ` [PATCH bpf-next v6 4/6] bpf: Verifier support for sleepable " Mykyta Yatsenko
` (3 subsequent siblings)
6 siblings, 1 reply; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 19:03 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Add trace_call_bpf_faultable(), a variant of trace_call_bpf() for
faultable tracepoints that supports sleepable BPF programs. It uses
rcu_tasks_trace for lifetime protection and
bpf_prog_run_array_sleepable() for per-program RCU flavor selection,
following the uprobe_prog_run() pattern.
Restructure perf_syscall_enter() and perf_syscall_exit() to run the BPF
filter before perf event processing. Previously, BPF ran after the
per-cpu perf trace buffer was allocated under preempt_disable,
requiring cleanup via perf_swevent_put_recursion_context() when the
program filtered out the event.
Now BPF runs in faultable context before preempt_disable, reading
syscall arguments from local variables instead of the per-cpu trace
record, removing the dependency on buffer allocation. This allows
sleepable BPF programs to execute and avoids unnecessary buffer
allocation when BPF filters the event. The perf event submission
path (buffer allocation, fill, submit) remains under preempt_disable
as before.
Add an attach-time check in __perf_event_set_bpf_prog() to reject
sleepable BPF_PROG_TYPE_TRACEPOINT programs on non-syscall
tracepoints, since only syscall tracepoints run in faultable context.
This prepares the classic tracepoint runtime and attach paths for
sleepable programs. The verifier changes to allow loading sleepable
BPF_PROG_TYPE_TRACEPOINT programs are in a subsequent patch.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
include/linux/trace_events.h | 6 +++
kernel/events/core.c | 9 ++++
kernel/trace/bpf_trace.c | 29 +++++++++++
kernel/trace/trace_syscalls.c | 110 ++++++++++++++++++++++--------------------
4 files changed, 102 insertions(+), 52 deletions(-)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 37eb2f0f3dd8..5fbbeb9ec4b9 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -767,6 +767,7 @@ trace_trigger_soft_disabled(struct trace_event_file *file)
#ifdef CONFIG_BPF_EVENTS
unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx);
+unsigned int trace_call_bpf_faultable(struct trace_event_call *call, void *ctx);
int perf_event_attach_bpf_prog(struct perf_event *event, struct bpf_prog *prog, u64 bpf_cookie);
void perf_event_detach_bpf_prog(struct perf_event *event);
int perf_event_query_prog_array(struct perf_event *event, void __user *info);
@@ -789,6 +790,11 @@ static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *c
return 1;
}
+static inline unsigned int trace_call_bpf_faultable(struct trace_event_call *call, void *ctx)
+{
+ return 1;
+}
+
static inline int
perf_event_attach_bpf_prog(struct perf_event *event, struct bpf_prog *prog, u64 bpf_cookie)
{
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1f5699b339ec..46b733d3dd41 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -11647,6 +11647,15 @@ static int __perf_event_set_bpf_prog(struct perf_event *event,
/* only uprobe programs are allowed to be sleepable */
return -EINVAL;
+ if (prog->type == BPF_PROG_TYPE_TRACEPOINT && prog->sleepable) {
+ /*
+ * Sleepable tracepoint programs can only attach to faultable
+ * tracepoints. Currently only syscall tracepoints are faultable.
+ */
+ if (!is_syscall_tp)
+ return -EINVAL;
+ }
+
/* Kprobe override only works for kprobes, not uprobes. */
if (prog->kprobe_override && !is_kprobe)
return -EINVAL;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 35ed53807cfd..3f39c0e8df50 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -152,6 +152,35 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
return ret;
}
+/**
+ * trace_call_bpf_faultable - invoke BPF program in faultable context
+ * @call: tracepoint event
+ * @ctx: opaque context pointer
+ *
+ * Variant of trace_call_bpf() for faultable tracepoints (syscall
+ * tracepoints). Supports sleepable BPF programs by using rcu_tasks_trace
+ * for lifetime protection and bpf_prog_run_array_sleepable() for per-program
+ * RCU flavor selection, following the uprobe pattern.
+ *
+ * Recursion protection is unnecessary here (hence bpf_prog_active is not
+ * touched, unlike in trace_call_bpf()): syscall tracepoints fire only at
+ * syscall entry/exit boundaries, so self-recursion is impossible.
+ *
+ * Must be called from a faultable/preemptible context.
+ */
+unsigned int trace_call_bpf_faultable(struct trace_event_call *call, void *ctx)
+{
+ struct bpf_prog_array *prog_array;
+
+ might_fault();
+ guard(rcu_tasks_trace)();
+
+ prog_array = rcu_dereference_check(call->prog_array,
+ rcu_read_lock_trace_held());
+ return bpf_prog_run_array_sleepable(prog_array, ctx, bpf_prog_run,
+ false);
+}
+
#ifdef CONFIG_BPF_KPROBE_OVERRIDE
BPF_CALL_2(bpf_override_return, struct pt_regs *, regs, unsigned long, rc)
{
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 37317b81fcda..fe46114c4b3b 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -1372,33 +1372,33 @@ static DECLARE_BITMAP(enabled_perf_exit_syscalls, NR_syscalls);
static int sys_perf_refcount_enter;
static int sys_perf_refcount_exit;
-static int perf_call_bpf_enter(struct trace_event_call *call, struct pt_regs *regs,
+static int perf_call_bpf_enter(struct trace_event_call *call,
struct syscall_metadata *sys_data,
- struct syscall_trace_enter *rec)
+ int syscall_nr, unsigned long *args)
{
struct syscall_tp_t {
struct trace_entry ent;
int syscall_nr;
unsigned long args[SYSCALL_DEFINE_MAXARGS];
} __aligned(8) param;
+ struct pt_regs regs = {};
int i;
BUILD_BUG_ON(sizeof(param.ent) < sizeof(void *));
- /* bpf prog requires 'regs' to be the first member in the ctx (a.k.a. &param) */
- perf_fetch_caller_regs(regs);
- *(struct pt_regs **)&param = regs;
- param.syscall_nr = rec->nr;
+ /* bpf prog requires 'regs' to be the first member in the ctx */
+ perf_fetch_caller_regs(&regs);
+ *(struct pt_regs **)&param = &regs;
+ param.syscall_nr = syscall_nr;
for (i = 0; i < sys_data->nb_args; i++)
- param.args[i] = rec->args[i];
- return trace_call_bpf(call, &param);
+ param.args[i] = args[i];
+ return trace_call_bpf_faultable(call, &param);
}
static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
{
struct syscall_metadata *sys_data;
struct syscall_trace_enter *rec;
- struct pt_regs *fake_regs;
struct hlist_head *head;
unsigned long args[6];
bool valid_prog_array;
@@ -1411,12 +1411,7 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
int size = 0;
int uargs = 0;
- /*
- * Syscall probe called with preemption enabled, but the ring
- * buffer and per-cpu data require preemption to be disabled.
- */
might_fault();
- guard(preempt_notrace)();
syscall_nr = trace_get_syscall_nr(current, regs);
if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
@@ -1430,6 +1425,26 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
syscall_get_arguments(current, regs, args);
+ /*
+ * Run BPF filter in faultable context before per-cpu buffer
+ * allocation, allowing sleepable BPF programs to execute.
+ */
+ valid_prog_array = bpf_prog_array_valid(sys_data->enter_event);
+ if (valid_prog_array &&
+ !perf_call_bpf_enter(sys_data->enter_event, sys_data,
+ syscall_nr, args))
+ return;
+
+ /*
+ * Per-cpu ring buffer and perf event list operations require
+ * preemption to be disabled.
+ */
+ guard(preempt_notrace)();
+
+ head = this_cpu_ptr(sys_data->enter_event->perf_events);
+ if (hlist_empty(head))
+ return;
+
/* Check if this syscall event faults in user space memory */
mayfault = sys_data->user_mask != 0;
@@ -1439,17 +1454,12 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
return;
}
- head = this_cpu_ptr(sys_data->enter_event->perf_events);
- valid_prog_array = bpf_prog_array_valid(sys_data->enter_event);
- if (!valid_prog_array && hlist_empty(head))
- return;
-
/* get the size after alignment with the u32 buffer size field */
size += sizeof(unsigned long) * sys_data->nb_args + sizeof(*rec);
size = ALIGN(size + sizeof(u32), sizeof(u64));
size -= sizeof(u32);
- rec = perf_trace_buf_alloc(size, &fake_regs, &rctx);
+ rec = perf_trace_buf_alloc(size, NULL, &rctx);
if (!rec)
return;
@@ -1459,13 +1469,6 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id)
if (mayfault)
syscall_put_data(sys_data, rec, user_ptr, size, user_sizes, uargs);
- if ((valid_prog_array &&
- !perf_call_bpf_enter(sys_data->enter_event, fake_regs, sys_data, rec)) ||
- hlist_empty(head)) {
- perf_swevent_put_recursion_context(rctx);
- return;
- }
-
perf_trace_buf_submit(rec, size, rctx,
sys_data->enter_event->event.type, 1, regs,
head, NULL);
@@ -1515,40 +1518,35 @@ static void perf_sysenter_disable(struct trace_event_call *call)
syscall_fault_buffer_disable();
}
-static int perf_call_bpf_exit(struct trace_event_call *call, struct pt_regs *regs,
- struct syscall_trace_exit *rec)
+static int perf_call_bpf_exit(struct trace_event_call *call,
+ int syscall_nr, long ret_val)
{
struct syscall_tp_t {
struct trace_entry ent;
int syscall_nr;
unsigned long ret;
} __aligned(8) param;
-
- /* bpf prog requires 'regs' to be the first member in the ctx (a.k.a. &param) */
- perf_fetch_caller_regs(regs);
- *(struct pt_regs **)&param = regs;
- param.syscall_nr = rec->nr;
- param.ret = rec->ret;
- return trace_call_bpf(call, &param);
+ struct pt_regs regs = {};
+
+ /* bpf prog requires 'regs' to be the first member in the ctx */
+ perf_fetch_caller_regs(&regs);
+ *(struct pt_regs **)&param = &regs;
+ param.syscall_nr = syscall_nr;
+ param.ret = ret_val;
+ return trace_call_bpf_faultable(call, &param);
}
static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
{
struct syscall_metadata *sys_data;
struct syscall_trace_exit *rec;
- struct pt_regs *fake_regs;
struct hlist_head *head;
bool valid_prog_array;
int syscall_nr;
int rctx;
int size;
- /*
- * Syscall probe called with preemption enabled, but the ring
- * buffer and per-cpu data require preemption to be disabled.
- */
might_fault();
- guard(preempt_notrace)();
syscall_nr = trace_get_syscall_nr(current, regs);
if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
@@ -1560,29 +1558,37 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret)
if (!sys_data)
return;
- head = this_cpu_ptr(sys_data->exit_event->perf_events);
+ /*
+ * Run BPF filter in faultable context before per-cpu buffer
+ * allocation, allowing sleepable BPF programs to execute.
+ */
valid_prog_array = bpf_prog_array_valid(sys_data->exit_event);
- if (!valid_prog_array && hlist_empty(head))
+ if (valid_prog_array &&
+ !perf_call_bpf_exit(sys_data->exit_event, syscall_nr,
+ syscall_get_return_value(current, regs)))
+ return;
+
+ /*
+ * Per-cpu ring buffer and perf event list operations require
+ * preemption to be disabled.
+ */
+ guard(preempt_notrace)();
+
+ head = this_cpu_ptr(sys_data->exit_event->perf_events);
+ if (hlist_empty(head))
return;
/* We can probably do that at build time */
size = ALIGN(sizeof(*rec) + sizeof(u32), sizeof(u64));
size -= sizeof(u32);
- rec = perf_trace_buf_alloc(size, &fake_regs, &rctx);
+ rec = perf_trace_buf_alloc(size, NULL, &rctx);
if (!rec)
return;
rec->nr = syscall_nr;
rec->ret = syscall_get_return_value(current, regs);
- if ((valid_prog_array &&
- !perf_call_bpf_exit(sys_data->exit_event, fake_regs, rec)) ||
- hlist_empty(head)) {
- perf_swevent_put_recursion_context(rctx);
- return;
- }
-
perf_trace_buf_submit(rec, size, rctx, sys_data->exit_event->event.type,
1, regs, head, NULL);
}
--
2.52.0
* [PATCH bpf-next v6 4/6] bpf: Verifier support for sleepable tracepoint programs
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
` (2 preceding siblings ...)
2026-03-24 19:03 ` [PATCH bpf-next v6 3/6] bpf: Add sleepable support for classic tracepoint programs Mykyta Yatsenko
@ 2026-03-24 19:03 ` Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 5/6] libbpf: Add section handlers for sleepable tracepoints Mykyta Yatsenko
` (2 subsequent siblings)
6 siblings, 0 replies; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 19:03 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Allow BPF_PROG_TYPE_RAW_TRACEPOINT, BPF_PROG_TYPE_TRACEPOINT, and
BPF_TRACE_RAW_TP (tp_btf) programs to be sleepable by adding them
to can_be_sleepable().
For BTF-based raw tracepoints (tp_btf), add a load-time check in
bpf_check_attach_target() that rejects sleepable programs attaching
to non-faultable tracepoints with a descriptive error message.
For classic raw tracepoints (raw_tp), add an attach-time check in
bpf_raw_tp_link_attach() that rejects sleepable programs on
non-faultable tracepoints. The attach-time check is needed because
the tracepoint name is not known at load time for classic raw_tp.
The attach-time check for classic tracepoints (tp) in
__perf_event_set_bpf_prog() was added in the previous patch.
Replace the verbose error message that enumerates allowed program
types with a generic "Program of this type cannot be sleepable"
message, since the list of sleepable-capable types keeps growing.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
kernel/bpf/syscall.c | 5 +++++
kernel/bpf/verifier.c | 13 +++++++++++--
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 274039e36465..bc19f6cdf752 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4261,6 +4261,11 @@ static int bpf_raw_tp_link_attach(struct bpf_prog *prog,
if (!btp)
return -ENOENT;
+ if (prog->sleepable && !tracepoint_is_faultable(btp->tp)) {
+ bpf_put_raw_tracepoint(btp);
+ return -EINVAL;
+ }
+
link = kzalloc_obj(*link, GFP_USER);
if (!link) {
err = -ENOMEM;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 01c18f4268de..a4836f564cb1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -25255,6 +25255,12 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
btp = bpf_get_raw_tracepoint(tname);
if (!btp)
return -EINVAL;
+ if (prog->sleepable && !tracepoint_is_faultable(btp->tp)) {
+ bpf_log(log, "Sleepable program cannot attach to non-faultable tracepoint %s\n",
+ tname);
+ bpf_put_raw_tracepoint(btp);
+ return -EINVAL;
+ }
fname = kallsyms_lookup((unsigned long)btp->bpf_func, NULL, NULL, NULL,
trace_symbol);
bpf_put_raw_tracepoint(btp);
@@ -25471,6 +25477,7 @@ static bool can_be_sleepable(struct bpf_prog *prog)
case BPF_MODIFY_RETURN:
case BPF_TRACE_ITER:
case BPF_TRACE_FSESSION:
+ case BPF_TRACE_RAW_TP:
return true;
default:
return false;
@@ -25478,7 +25485,9 @@ static bool can_be_sleepable(struct bpf_prog *prog)
}
return prog->type == BPF_PROG_TYPE_LSM ||
prog->type == BPF_PROG_TYPE_KPROBE /* only for uprobes */ ||
- prog->type == BPF_PROG_TYPE_STRUCT_OPS;
+ prog->type == BPF_PROG_TYPE_STRUCT_OPS ||
+ prog->type == BPF_PROG_TYPE_RAW_TRACEPOINT ||
+ prog->type == BPF_PROG_TYPE_TRACEPOINT;
}
static int check_attach_btf_id(struct bpf_verifier_env *env)
@@ -25500,7 +25509,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
}
if (prog->sleepable && !can_be_sleepable(prog)) {
- verbose(env, "Only fentry/fexit/fmod_ret, lsm, iter, uprobe, and struct_ops programs can be sleepable\n");
+ verbose(env, "Program of this type cannot be sleepable\n");
return -EINVAL;
}
--
2.52.0
* [PATCH bpf-next v6 5/6] libbpf: Add section handlers for sleepable tracepoints
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
` (3 preceding siblings ...)
2026-03-24 19:03 ` [PATCH bpf-next v6 4/6] bpf: Verifier support for sleepable " Mykyta Yatsenko
@ 2026-03-24 19:03 ` Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 6/6] selftests/bpf: Add tests for sleepable tracepoint programs Mykyta Yatsenko
2026-03-24 20:05 ` [PATCH bpf-next v6 0/6] bpf: Add support " Kumar Kartikeya Dwivedi
6 siblings, 0 replies; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 19:03 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Add SEC_DEF entries for sleepable tracepoint variants:
- "tp_btf.s+" for sleepable BTF-based raw tracepoints
- "raw_tp.s+" for sleepable classic raw tracepoints
- "raw_tracepoint.s+" (alias)
- "tp.s+" for sleepable classic tracepoints
- "tracepoint.s+" (alias)
Update attach_raw_tp() to recognize "raw_tp.s" and
"raw_tracepoint.s" prefixes when extracting the tracepoint name.
Rewrite attach_tp() to use a prefix array including "tp.s/" and
"tracepoint.s/" variants for proper section name parsing.
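For reference, the new section names can be declared as in the
following sketch. Event names and program names are illustrative, and
the usual vmlinux.h/bpf_helpers.h/bpf_tracing.h includes are assumed.

```c
/* Sketch of the sleepable section names accepted by libbpf after this
 * patch. Auto-attach parses "tp.s/<category>/<name>" (and the
 * "tracepoint.s/" alias), while "raw_tp.s/<name>" and
 * "raw_tracepoint.s/<name>" carry the tracepoint name directly.
 */
SEC("tp.s/syscalls/sys_enter_nanosleep")
int sleepable_tp(void *ctx)
{
	return 0;
}

SEC("raw_tp.s/sys_enter")
int sleepable_raw_tp(struct bpf_raw_tracepoint_args *ctx)
{
	return 0;
}

SEC("tp_btf.s/sys_enter")
int BPF_PROG(sleepable_tp_btf, struct pt_regs *regs, long id)
{
	return 0;
}
```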
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
tools/lib/bpf/libbpf.c | 39 ++++++++++++++++++++++++++++++++-------
1 file changed, 32 insertions(+), 7 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 0662d72bad20..625d49a21bcf 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -9858,11 +9858,16 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("netkit/peer", SCHED_CLS, BPF_NETKIT_PEER, SEC_NONE),
SEC_DEF("tracepoint+", TRACEPOINT, 0, SEC_NONE, attach_tp),
SEC_DEF("tp+", TRACEPOINT, 0, SEC_NONE, attach_tp),
+ SEC_DEF("tracepoint.s+", TRACEPOINT, 0, SEC_SLEEPABLE, attach_tp),
+ SEC_DEF("tp.s+", TRACEPOINT, 0, SEC_SLEEPABLE, attach_tp),
SEC_DEF("raw_tracepoint+", RAW_TRACEPOINT, 0, SEC_NONE, attach_raw_tp),
SEC_DEF("raw_tp+", RAW_TRACEPOINT, 0, SEC_NONE, attach_raw_tp),
+ SEC_DEF("raw_tracepoint.s+", RAW_TRACEPOINT, 0, SEC_SLEEPABLE, attach_raw_tp),
+ SEC_DEF("raw_tp.s+", RAW_TRACEPOINT, 0, SEC_SLEEPABLE, attach_raw_tp),
SEC_DEF("raw_tracepoint.w+", RAW_TRACEPOINT_WRITABLE, 0, SEC_NONE, attach_raw_tp),
SEC_DEF("raw_tp.w+", RAW_TRACEPOINT_WRITABLE, 0, SEC_NONE, attach_raw_tp),
SEC_DEF("tp_btf+", TRACING, BPF_TRACE_RAW_TP, SEC_ATTACH_BTF, attach_trace),
+ SEC_DEF("tp_btf.s+", TRACING, BPF_TRACE_RAW_TP, SEC_ATTACH_BTF | SEC_SLEEPABLE, attach_trace),
SEC_DEF("fentry+", TRACING, BPF_TRACE_FENTRY, SEC_ATTACH_BTF, attach_trace),
SEC_DEF("fmod_ret+", TRACING, BPF_MODIFY_RETURN, SEC_ATTACH_BTF, attach_trace),
SEC_DEF("fexit+", TRACING, BPF_TRACE_FEXIT, SEC_ATTACH_BTF, attach_trace),
@@ -12985,23 +12990,41 @@ struct bpf_link *bpf_program__attach_tracepoint(const struct bpf_program *prog,
static int attach_tp(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
+ static const char *const prefixes[] = {
+ "tp.s/",
+ "tp/",
+ "tracepoint.s/",
+ "tracepoint/",
+ };
char *sec_name, *tp_cat, *tp_name;
+ size_t i;
*link = NULL;
- /* no auto-attach for SEC("tp") or SEC("tracepoint") */
- if (strcmp(prog->sec_name, "tp") == 0 || strcmp(prog->sec_name, "tracepoint") == 0)
+ /* no auto-attach for bare SEC("tp"), SEC("tracepoint") and .s variants */
+ if (strcmp(prog->sec_name, "tp") == 0 ||
+ strcmp(prog->sec_name, "tracepoint") == 0 ||
+ strcmp(prog->sec_name, "tp.s") == 0 ||
+ strcmp(prog->sec_name, "tracepoint.s") == 0)
return 0;
sec_name = strdup(prog->sec_name);
if (!sec_name)
return -ENOMEM;
- /* extract "tp/<category>/<name>" or "tracepoint/<category>/<name>" */
- if (str_has_pfx(prog->sec_name, "tp/"))
- tp_cat = sec_name + sizeof("tp/") - 1;
- else
- tp_cat = sec_name + sizeof("tracepoint/") - 1;
+ /* extract "<prefix><category>/<name>" */
+ tp_cat = NULL;
+ for (i = 0; i < ARRAY_SIZE(prefixes); i++) {
+ if (str_has_pfx(prog->sec_name, prefixes[i])) {
+ tp_cat = sec_name + strlen(prefixes[i]);
+ break;
+ }
+ }
+ if (!tp_cat) {
+ free(sec_name);
+ return -EINVAL;
+ }
+
tp_name = strchr(tp_cat, '/');
if (!tp_name) {
free(sec_name);
@@ -13065,6 +13088,8 @@ static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf
"raw_tracepoint",
"raw_tp.w",
"raw_tracepoint.w",
+ "raw_tp.s",
+ "raw_tracepoint.s",
};
size_t i;
const char *tp_name = NULL;
--
2.52.0
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH bpf-next v6 6/6] selftests/bpf: Add tests for sleepable tracepoint programs
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
` (4 preceding siblings ...)
2026-03-24 19:03 ` [PATCH bpf-next v6 5/6] libbpf: Add section handlers for sleepable tracepoints Mykyta Yatsenko
@ 2026-03-24 19:03 ` Mykyta Yatsenko
2026-03-24 20:05 ` [PATCH bpf-next v6 0/6] bpf: Add support " Kumar Kartikeya Dwivedi
6 siblings, 0 replies; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 19:03 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko
From: Mykyta Yatsenko <yatsenko@meta.com>
Add functional tests for sleepable tracepoint programs that attach to
the nanosleep syscall and use bpf_copy_from_user() to read user memory:
- tp_btf: BTF-based raw tracepoint using SEC("tp_btf.s/sys_enter")
with PT_REGS_PARM1_SYSCALL (non-CO-RE macro for BTF programs).
- classic: Classic raw tracepoint using SEC("raw_tp.s/sys_enter")
with PT_REGS_PARM1_CORE_SYSCALL (CO-RE macro needed for classic).
- tracepoint: Classic tracepoint using
SEC("tp.s/syscalls/sys_enter_nanosleep") receiving
struct syscall_trace_enter with direct access to args[].
Add a negative test (handle_sched_switch) that verifies sleepable
programs are rejected on non-faultable tracepoints (sched_switch).
Update verifier/sleepable.c tests:
- Add "sleepable raw tracepoint accept" test for sys_enter.
- Rename reject test and update error message to match the new
descriptive "Sleepable program cannot attach to non-faultable
tracepoint" message.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
.../bpf/prog_tests/sleepable_tracepoints.c | 121 +++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints.c | 117 ++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints_fail.c | 18 +++
tools/testing/selftests/bpf/verifier/sleepable.c | 17 ++-
4 files changed, 271 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/sleepable_tracepoints.c b/tools/testing/selftests/bpf/prog_tests/sleepable_tracepoints.c
new file mode 100644
index 000000000000..46b7de1262b8
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/sleepable_tracepoints.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include <time.h>
+#include "test_sleepable_tracepoints.skel.h"
+#include "test_sleepable_tracepoints_fail.skel.h"
+
+static void trigger_nanosleep(void)
+{
+ syscall(__NR_nanosleep, &(struct timespec){ .tv_nsec = 555 }, NULL);
+}
+
+static void run_test(struct test_sleepable_tracepoints *skel)
+{
+ skel->bss->target_pid = getpid();
+ skel->bss->triggered = 0;
+ skel->bss->err = 0;
+ skel->bss->copied_tv_nsec = 0;
+
+ trigger_nanosleep();
+
+ ASSERT_EQ(skel->bss->triggered, 1, "triggered");
+ ASSERT_EQ(skel->bss->err, 0, "err");
+ ASSERT_EQ(skel->bss->copied_tv_nsec, 555, "copied_tv_nsec");
+}
+
+static void run_auto_attach_test(struct bpf_program *prog, struct test_sleepable_tracepoints *skel)
+{
+ struct bpf_link *link;
+
+ link = bpf_program__attach(prog);
+ if (!ASSERT_OK_PTR(link, "prog_attach"))
+ return;
+
+ run_test(skel);
+ bpf_link__destroy(link);
+}
+
+void test_sleepable_tracepoints(void)
+{
+ struct test_sleepable_tracepoints *skel;
+ struct bpf_link *link;
+
+ skel = test_sleepable_tracepoints__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ return;
+
+ /* Primary functional tests: full bpf_copy_from_user exercise */
+
+ if (test__start_subtest("tp_btf"))
+ run_auto_attach_test(skel->progs.handle_sys_enter_tp_btf, skel);
+
+ if (test__start_subtest("raw_tp"))
+ run_auto_attach_test(skel->progs.handle_sys_enter_raw_tp, skel);
+
+ if (test__start_subtest("tracepoint"))
+ run_auto_attach_test(skel->progs.handle_sys_enter_tp, skel);
+
+ /* Alias SEC variants: verify libbpf prefix parsing */
+
+ if (test__start_subtest("tracepoint_alias")) {
+ link = bpf_program__attach(skel->progs.handle_sys_enter_tp_alias);
+ if (ASSERT_OK_PTR(link, "tp_alias_attach"))
+ bpf_link__destroy(link);
+ }
+
+ if (test__start_subtest("raw_tracepoint_alias")) {
+ link = bpf_program__attach(skel->progs.handle_sys_enter_raw_tp_alias);
+ if (ASSERT_OK_PTR(link, "raw_tp_alias_attach"))
+ bpf_link__destroy(link);
+ }
+
+ /* Bare SEC variants: verify manual attach */
+
+ if (test__start_subtest("raw_tp_bare")) {
+ link = bpf_program__attach_raw_tracepoint(skel->progs.handle_raw_tp_bare,
+ "sys_enter");
+ if (ASSERT_OK_PTR(link, "raw_tp_bare_attach"))
+ bpf_link__destroy(link);
+ }
+
+ if (test__start_subtest("tp_bare")) {
+ link = bpf_program__attach_tracepoint(skel->progs.handle_tp_bare, "syscalls",
+ "sys_enter_nanosleep");
+ if (ASSERT_OK_PTR(link, "tp_bare_attach"))
+ bpf_link__destroy(link);
+ }
+
+ /* Sys exit test */
+
+ if (test__start_subtest("sys_exit")) {
+ link = bpf_program__attach(skel->progs.handle_sys_exit_tp);
+ if (ASSERT_OK_PTR(link, "sys_exit_attach")) {
+ skel->bss->target_pid = getpid();
+ skel->bss->exit_triggered = 0;
+
+ trigger_nanosleep();
+
+ ASSERT_EQ(skel->bss->exit_triggered, 1, "exit_triggered");
+ bpf_link__destroy(link);
+ }
+ }
+
+ /* Negative: attach-time rejection on non-faultable tracepoints */
+
+ if (test__start_subtest("raw_tp_non_faultable")) {
+ link = bpf_program__attach(skel->progs.handle_raw_tp_non_faultable);
+ ASSERT_ERR_PTR(link, "raw_tp_non_faultable_attach");
+ }
+
+ if (test__start_subtest("tp_non_syscall")) {
+ link = bpf_program__attach(skel->progs.handle_tp_non_syscall);
+ ASSERT_ERR_PTR(link, "tp_non_syscall_attach");
+ }
+
+ test_sleepable_tracepoints__destroy(skel);
+
+ /* Negative: load-time rejection (separate BPF object) */
+ RUN_TESTS(test_sleepable_tracepoints_fail);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_sleepable_tracepoints.c b/tools/testing/selftests/bpf/progs/test_sleepable_tracepoints.c
new file mode 100644
index 000000000000..bc8a7fd43ccc
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sleepable_tracepoints.c
@@ -0,0 +1,117 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <asm/unistd.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_helpers.h>
+
+char _license[] SEC("license") = "GPL";
+
+int target_pid;
+int triggered;
+int exit_triggered;
+long err;
+long copied_tv_nsec;
+
+static int copy_nanosleep_arg(struct __kernel_timespec *ts)
+{
+ long tv_nsec;
+
+ err = bpf_copy_from_user(&tv_nsec, sizeof(tv_nsec), &ts->tv_nsec);
+ if (err)
+ return err;
+
+ copied_tv_nsec = tv_nsec;
+ triggered = 1;
+ return 0;
+}
+
+static __always_inline bool is_target_nanosleep(long id)
+{
+ return (bpf_get_current_pid_tgid() >> 32) == target_pid &&
+ id == __NR_nanosleep;
+}
+
+/* Primary functional tests: full bpf_copy_from_user exercise */
+
+SEC("tp_btf.s/sys_enter")
+int BPF_PROG(handle_sys_enter_tp_btf, struct pt_regs *regs, long id)
+{
+ if (!is_target_nanosleep(id))
+ return 0;
+
+ return copy_nanosleep_arg((void *)PT_REGS_PARM1_SYSCALL(regs));
+}
+
+SEC("raw_tp.s/sys_enter")
+int BPF_PROG(handle_sys_enter_raw_tp, struct pt_regs *regs, long id)
+{
+ if (!is_target_nanosleep(id))
+ return 0;
+
+ return copy_nanosleep_arg((void *)PT_REGS_PARM1_CORE_SYSCALL(regs));
+}
+
+SEC("tp.s/syscalls/sys_enter_nanosleep")
+int handle_sys_enter_tp(struct syscall_trace_enter *args)
+{
+ if ((bpf_get_current_pid_tgid() >> 32) != target_pid)
+ return 0;
+
+ return copy_nanosleep_arg((void *)args->args[0]);
+}
+
+SEC("tp.s/syscalls/sys_exit_nanosleep")
+int handle_sys_exit_tp(struct syscall_trace_exit *args)
+{
+ if ((bpf_get_current_pid_tgid() >> 32) != target_pid)
+ return 0;
+
+ exit_triggered = 1;
+ return 0;
+}
+
+/* Bare SEC variants: test manual attach without tracepoint in section name */
+
+SEC("raw_tp.s")
+int BPF_PROG(handle_raw_tp_bare, struct pt_regs *regs, long id)
+{
+ return 0;
+}
+
+SEC("tp.s")
+int handle_tp_bare(void *ctx)
+{
+ return 0;
+}
+
+/* Alias SEC variants: test libbpf prefix parsing for long-form names */
+
+SEC("tracepoint.s/syscalls/sys_enter_nanosleep")
+int handle_sys_enter_tp_alias(struct syscall_trace_enter *args)
+{
+ return 0;
+}
+
+SEC("raw_tracepoint.s/sys_enter")
+int BPF_PROG(handle_sys_enter_raw_tp_alias, struct pt_regs *regs, long id)
+{
+ return 0;
+}
+
+/* Negative: sleepable on non-faultable tracepoint (attach-time rejection) */
+
+SEC("raw_tp.s/sched_switch")
+int BPF_PROG(handle_raw_tp_non_faultable, bool preempt,
+ struct task_struct *prev, struct task_struct *next)
+{
+ return 0;
+}
+
+SEC("tp.s/sched/sched_switch")
+int handle_tp_non_syscall(void *ctx)
+{
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/test_sleepable_tracepoints_fail.c b/tools/testing/selftests/bpf/progs/test_sleepable_tracepoints_fail.c
new file mode 100644
index 000000000000..1a0748a9520b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sleepable_tracepoints_fail.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2025 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+/* Sleepable program on a non-faultable tracepoint should fail to load */
+SEC("tp_btf.s/sched_switch")
+__failure __msg("Sleepable program cannot attach to non-faultable tracepoint")
+int BPF_PROG(handle_sched_switch, bool preempt,
+ struct task_struct *prev, struct task_struct *next)
+{
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/verifier/sleepable.c b/tools/testing/selftests/bpf/verifier/sleepable.c
index 1f0d2bdc673f..6dabc5522945 100644
--- a/tools/testing/selftests/bpf/verifier/sleepable.c
+++ b/tools/testing/selftests/bpf/verifier/sleepable.c
@@ -76,7 +76,20 @@
.runs = -1,
},
{
- "sleepable raw tracepoint reject",
+ "sleepable raw tracepoint accept",
+ .insns = {
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_TRACING,
+ .expected_attach_type = BPF_TRACE_RAW_TP,
+ .kfunc = "sys_enter",
+ .result = ACCEPT,
+ .flags = BPF_F_SLEEPABLE,
+ .runs = -1,
+},
+{
+ "sleepable raw tracepoint reject non-faultable",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
@@ -85,7 +98,7 @@
.expected_attach_type = BPF_TRACE_RAW_TP,
.kfunc = "sched_switch",
.result = REJECT,
- .errstr = "Only fentry/fexit/fmod_ret, lsm, iter, uprobe, and struct_ops programs can be sleepable",
+ .errstr = "Sleepable program cannot attach to non-faultable tracepoint",
.flags = BPF_F_SLEEPABLE,
.runs = -1,
},
--
2.52.0
* Re: [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw tracepoint programs
2026-03-24 19:03 ` [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw " Mykyta Yatsenko
@ 2026-03-24 19:46 ` Alexei Starovoitov
2026-03-24 22:25 ` Mykyta Yatsenko
0 siblings, 1 reply; 16+ messages in thread
From: Alexei Starovoitov @ 2026-03-24 19:46 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin Lau, Kernel Team, Eduard, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko
On Tue, Mar 24, 2026 at 12:03 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Rework __bpf_trace_run() to support sleepable BPF programs by using
> explicit RCU flavor selection, following the uprobe_prog_run() pattern.
>
> For sleepable programs, use rcu_read_lock_tasks_trace() for lifetime
> protection and add a might_fault() annotation. For non-sleepable
> programs, use the regular rcu_read_lock(). Replace the combined
> rcu_read_lock_dont_migrate() with separate rcu_read_lock()/
> migrate_disable() calls, since sleepable programs need
> rcu_read_lock_tasks_trace() instead of rcu_read_lock().
>
> Remove the preempt_disable_notrace/preempt_enable_notrace pair from
> the faultable tracepoint BPF probe wrapper in bpf_probe.h, since
> migration protection and RCU locking are now handled per-program
> inside __bpf_trace_run().
>
> This prepares the runtime execution path for both BTF-based raw
> tracepoints (tp_btf) and classic raw tracepoints (raw_tp) to support
> sleepable BPF programs on faultable tracepoints (e.g. syscall
> tracepoints). The verifier changes to allow loading sleepable
> programs are in a subsequent patch.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> include/trace/bpf_probe.h | 2 --
> kernel/trace/bpf_trace.c | 21 ++++++++++++++++++---
> 2 files changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> index 9391d54d3f12..d1de8f9aa07f 100644
> --- a/include/trace/bpf_probe.h
> +++ b/include/trace/bpf_probe.h
> @@ -58,9 +58,7 @@ static notrace void \
> __bpf_trace_##call(void *__data, proto) \
> { \
> might_fault(); \
> - preempt_disable_notrace(); \
> CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
> - preempt_enable_notrace(); \
> }
>
> #undef DECLARE_EVENT_SYSCALL_CLASS
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 0b040a417442..35ed53807cfd 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2072,11 +2072,18 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
> static __always_inline
> void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
> {
> + struct srcu_ctr __percpu *scp = NULL;
> struct bpf_prog *prog = link->link.prog;
> + bool sleepable = prog->sleepable;
> struct bpf_run_ctx *old_run_ctx;
> struct bpf_trace_run_ctx run_ctx;
>
> - rcu_read_lock_dont_migrate();
> + if (sleepable)
> + scp = rcu_read_lock_tasks_trace();
> + else
> + rcu_read_lock();
> +
> + migrate_disable();
Pls don't sacrifice performance just to have common migrate_disable path.
> if (unlikely(!bpf_prog_get_recursion_context(prog))) {
> bpf_prog_inc_misses_counter(prog);
> goto out;
> @@ -2085,12 +2092,20 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
> run_ctx.bpf_cookie = link->cookie;
> old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
>
> - (void) bpf_prog_run(prog, args);
> + if (sleepable)
> + might_fault();
why?
might_fault() is there in __BPF_DECLARE_TRACE_SYSCALL.
pw-bot: cr
* Re: [PATCH bpf-next v6 3/6] bpf: Add sleepable support for classic tracepoint programs
2026-03-24 19:03 ` [PATCH bpf-next v6 3/6] bpf: Add sleepable support for classic tracepoint programs Mykyta Yatsenko
@ 2026-03-24 19:56 ` Alexei Starovoitov
0 siblings, 0 replies; 16+ messages in thread
From: Alexei Starovoitov @ 2026-03-24 19:56 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin Lau, Kernel Team, Eduard, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko
On Tue, Mar 24, 2026 at 12:03 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> From: Mykyta Yatsenko <yatsenko@meta.com>
>
> Add trace_call_bpf_faultable(), a variant of trace_call_bpf() for
> faultable tracepoints that supports sleepable BPF programs. It uses
> rcu_tasks_trace for lifetime protection and
> bpf_prog_run_array_sleepable() for per-program RCU flavor selection,
> following the uprobe_prog_run() pattern.
>
> Restructure perf_syscall_enter() and perf_syscall_exit() to run BPF
> filter before perf event processing. Previously, BPF ran after the
> per-cpu perf trace buffer was allocated under preempt_disable,
> requiring cleanup via perf_swevent_put_recursion_context() on filter.
> Now BPF runs in faultable context before preempt_disable, reading
> syscall arguments from local variables instead of the per-cpu trace
> record, removing the dependency on buffer allocation. This allows
> sleepable BPF programs to execute and avoids unnecessary buffer
> allocation when BPF filters the event. The perf event submission
> path (buffer allocation, fill, submit) remains under preempt_disable
> as before.
>
> Add an attach-time check in __perf_event_set_bpf_prog() to reject
> sleepable BPF_PROG_TYPE_TRACEPOINT programs on non-syscall
> tracepoints, since only syscall tracepoints run in faultable context.
>
> This prepares the classic tracepoint runtime and attach paths for
> sleepable programs. The verifier changes to allow loading sleepable
> BPF_PROG_TYPE_TRACEPOINT programs are in a subsequent patch.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> include/linux/trace_events.h | 6 +++
> kernel/events/core.c | 9 ++++
> kernel/trace/bpf_trace.c | 29 +++++++++++
> kernel/trace/trace_syscalls.c | 110 ++++++++++++++++++++++--------------------
cc Peter and Steven when you respin. We need acks here.
Or drop this part for now.
* Re: [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
` (5 preceding siblings ...)
2026-03-24 19:03 ` [PATCH bpf-next v6 6/6] selftests/bpf: Add tests for sleepable tracepoint programs Mykyta Yatsenko
@ 2026-03-24 20:05 ` Kumar Kartikeya Dwivedi
2026-03-24 22:20 ` Mykyta Yatsenko
6 siblings, 1 reply; 16+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-24 20:05 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
On Tue, 24 Mar 2026 at 20:03, Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> This series adds support for sleepable BPF programs attached to raw
> tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
> tracepoints (tp). The motivation is to allow BPF programs on syscall
> tracepoints to use sleepable helpers such as bpf_copy_from_user(),
> enabling reliable user memory reads that can page-fault.
>
> This series removes restriction for faultable tracepoints:
>
> Patch 1 modifies __bpf_trace_run() to support sleepable programs
> following the uprobe_prog_run() pattern: use explicit
> rcu_read_lock_trace() for sleepable programs and rcu_read_lock() for
> non-sleepable programs. Also removes preempt_disable from the faultable
> tracepoint BPF callback wrapper, since migration protection and RCU
> locking are now managed per-program inside __bpf_trace_run().
>
> Patch 2 renames bpf_prog_run_array_uprobe() to
> bpf_prog_run_array_sleepable() to support new usecase.
>
> Patch 3 adds sleepable support for classic tracepoints
> (BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
> and restructuring perf_syscall_enter/exit() to run BPF programs in
> faultable context before the preempt-disabled per-cpu buffer
> allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
> lifetime protection, following the uprobe pattern. This adds
> rcu_tasks_trace overhead for all classic tracepoint BPF programs on
> syscall tracepoints, not just sleepable ones.
>
> Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
> BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
> load-time and attach-time checks to reject sleepable programs on
> non-faultable tracepoints.
>
> Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
> raw_tracepoint.s, tp.s, and tracepoint.s.
>
> Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
> cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
> tests for non-faultable tracepoints.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> Changes in v6:
> - Remove recursion check from trace_call_bpf_faultable(), sleepable
> tracepoints are called from syscall enter/exit, no recursion is
> possible. (Kumar)
Is this true? Can't the same program reenter due to the same syscall
from a different task which preempted the task running the previous
program?
Just asking because it wasn't clear to me, since we discussed
bpf_prog_get_recursion_context() instead.
> [...]
* Re: [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
2026-03-24 20:05 ` [PATCH bpf-next v6 0/6] bpf: Add support " Kumar Kartikeya Dwivedi
@ 2026-03-24 22:20 ` Mykyta Yatsenko
2026-03-24 23:57 ` Alexei Starovoitov
0 siblings, 1 reply; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 22:20 UTC (permalink / raw)
To: Kumar Kartikeya Dwivedi
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko
Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:
> On Tue, 24 Mar 2026 at 20:03, Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> This series adds support for sleepable BPF programs attached to raw
>> tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
>> tracepoints (tp). The motivation is to allow BPF programs on syscall
>> tracepoints to use sleepable helpers such as bpf_copy_from_user(),
>> enabling reliable user memory reads that can page-fault.
>>
>> This series removes restriction for faultable tracepoints:
>>
>> Patch 1 modifies __bpf_trace_run() to support sleepable programs
>> following the uprobe_prog_run() pattern: use explicit
>> rcu_read_lock_trace() for sleepable programs and rcu_read_lock() for
>> non-sleepable programs. Also removes preempt_disable from the faultable
>> tracepoint BPF callback wrapper, since migration protection and RCU
>> locking are now managed per-program inside __bpf_trace_run().
>>
>> Patch 2 renames bpf_prog_run_array_uprobe() to
>> bpf_prog_run_array_sleepable() to support new usecase.
>>
>> Patch 3 adds sleepable support for classic tracepoints
>> (BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
>> and restructuring perf_syscall_enter/exit() to run BPF programs in
>> faultable context before the preempt-disabled per-cpu buffer
>> allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
>> lifetime protection, following the uprobe pattern. This adds
>> rcu_tasks_trace overhead for all classic tracepoint BPF programs on
>> syscall tracepoints, not just sleepable ones.
>>
>> Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
>> BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
>> load-time and attach-time checks to reject sleepable programs on
>> non-faultable tracepoints.
>>
>> Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
>> raw_tracepoint.s, tp.s, and tracepoint.s.
>>
>> Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
>> cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
>> tests for non-faultable tracepoints.
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> Changes in v6:
>> - Remove recursion check from trace_call_bpf_faultable(), sleepable
>> tracepoints are called from syscall enter/exit, no recursion is
>> possible. (Kumar)
>
> Is this true? Can't the same program reenter due to the same syscall
> from a different task which preempted the task running the previous
> program?
> Just asking because it wasn't clear to me, since we discussed
> bpf_prog_get_recursion_context() instead.
>
Good question. This is my understanding, though I'm not 100% confident:
There are 2 mechanisms we discussed:
* bpf_prog_get_recursion_context() - a per-program recursion guard; I
understand this is used to prevent self-recursion, where a BPF program
infinitely generates calls to itself.
* bpf_prog_active - a per-CPU counter used to prevent deadlock between
an instrumentation BPF prog (tracepoint, kprobe) and a map operation
that generates an event triggering those BPF progs while holding a map
bucket spinlock.
I'd argue that neither of these applies to this use case, because we
enable sleepable programs only for syscall enter/exit:
* Deadlock on the map bucket spinlock is not possible (bpf_prog_active)
* A BPF program can't generate another syscall enter/exit, so no
infinite recursion (bpf_prog_get_recursion_context())
As for the situation you describe - the same program re-entering due to
the same syscall from a different task that preempted the task running
the previous invocation - this is normal, supported behaviour, similar
to what we have when an NMI runs on the same CPU and calls a common BPF
subprog, or when two tasks schedule sleepable bpf_task_work callbacks
that can interleave on the same CPU.
Let me know if my understanding is not correct. Thanks for checking this
series!
>> [...]
* Re: [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw tracepoint programs
2026-03-24 19:46 ` Alexei Starovoitov
@ 2026-03-24 22:25 ` Mykyta Yatsenko
0 siblings, 0 replies; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-24 22:25 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin Lau, Kernel Team, Eduard, Kumar Kartikeya Dwivedi,
Mykyta Yatsenko
Alexei Starovoitov <alexei.starovoitov@gmail.com> writes:
> On Tue, Mar 24, 2026 at 12:03 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> From: Mykyta Yatsenko <yatsenko@meta.com>
>>
>> Rework __bpf_trace_run() to support sleepable BPF programs by using
>> explicit RCU flavor selection, following the uprobe_prog_run() pattern.
>>
>> For sleepable programs, use rcu_read_lock_tasks_trace() for lifetime
>> protection and add a might_fault() annotation. For non-sleepable
>> programs, use the regular rcu_read_lock(). Replace the combined
>> rcu_read_lock_dont_migrate() with separate rcu_read_lock()/
>> migrate_disable() calls, since sleepable programs need
>> rcu_read_lock_tasks_trace() instead of rcu_read_lock().
>>
>> Remove the preempt_disable_notrace/preempt_enable_notrace pair from
>> the faultable tracepoint BPF probe wrapper in bpf_probe.h, since
>> migration protection and RCU locking are now handled per-program
>> inside __bpf_trace_run().
>>
>> This prepares the runtime execution path for both BTF-based raw
>> tracepoints (tp_btf) and classic raw tracepoints (raw_tp) to support
>> sleepable BPF programs on faultable tracepoints (e.g. syscall
>> tracepoints). The verifier changes to allow loading sleepable
>> programs are in a subsequent patch.
>>
>> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
>> ---
>> include/trace/bpf_probe.h | 2 --
>> kernel/trace/bpf_trace.c | 21 ++++++++++++++++++---
>> 2 files changed, 18 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
>> index 9391d54d3f12..d1de8f9aa07f 100644
>> --- a/include/trace/bpf_probe.h
>> +++ b/include/trace/bpf_probe.h
>> @@ -58,9 +58,7 @@ static notrace void \
>> __bpf_trace_##call(void *__data, proto) \
>> { \
>> might_fault(); \
>> - preempt_disable_notrace(); \
>> CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
>> - preempt_enable_notrace(); \
>> }
>>
>> #undef DECLARE_EVENT_SYSCALL_CLASS
>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>> index 0b040a417442..35ed53807cfd 100644
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -2072,11 +2072,18 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
>> static __always_inline
>> void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
>> {
>> + struct srcu_ctr __percpu *scp = NULL;
>> struct bpf_prog *prog = link->link.prog;
>> + bool sleepable = prog->sleepable;
>> struct bpf_run_ctx *old_run_ctx;
>> struct bpf_trace_run_ctx run_ctx;
>>
>> - rcu_read_lock_dont_migrate();
>> + if (sleepable)
>> + scp = rcu_read_lock_tasks_trace();
>> + else
>> + rcu_read_lock();
>> +
>> + migrate_disable();
>
> Pls don't sacrifice performance just to have common migrate_disable path.
>
>> if (unlikely(!bpf_prog_get_recursion_context(prog))) {
>> bpf_prog_inc_misses_counter(prog);
>> goto out;
>> @@ -2085,12 +2092,20 @@ void __bpf_trace_run(struct bpf_raw_tp_link *link, u64 *args)
>> run_ctx.bpf_cookie = link->cookie;
>> old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
>>
>> - (void) bpf_prog_run(prog, args);
>> + if (sleepable)
>> + might_fault();
>
> why?
> might_fault() is there in __BPF_DECLARE_TRACE_SYSCALL.
>
Agreed, thanks.
> pw-bot: cr
* Re: [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
2026-03-24 22:20 ` Mykyta Yatsenko
@ 2026-03-24 23:57 ` Alexei Starovoitov
2026-03-24 23:59 ` Alexei Starovoitov
0 siblings, 1 reply; 16+ messages in thread
From: Alexei Starovoitov @ 2026-03-24 23:57 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: Kumar Kartikeya Dwivedi, bpf, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin Lau, Kernel Team, Eduard, Mykyta Yatsenko
On Tue, Mar 24, 2026 at 3:20 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:
>
> > On Tue, 24 Mar 2026 at 20:03, Mykyta Yatsenko
> > <mykyta.yatsenko5@gmail.com> wrote:
> >>
> >> This series adds support for sleepable BPF programs attached to raw
> >> tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
> >> tracepoints (tp). The motivation is to allow BPF programs on syscall
> >> tracepoints to use sleepable helpers such as bpf_copy_from_user(),
> >> enabling reliable user memory reads that can page-fault.
> >>
> >> This series removes restriction for faultable tracepoints:
> >>
> >> Patch 1 modifies __bpf_trace_run() to support sleepable programs
> >> following the uprobe_prog_run() pattern: use explicit
> >> rcu_read_lock_trace() for sleepable programs and rcu_read_lock() for
> >> non-sleepable programs. Also removes preempt_disable from the faultable
> >> tracepoint BPF callback wrapper, since migration protection and RCU
> >> locking are now managed per-program inside __bpf_trace_run().
> >>
> >> Patch 2 renames bpf_prog_run_array_uprobe() to
> >> bpf_prog_run_array_sleepable() to support new usecase.
> >>
> >> Patch 3 adds sleepable support for classic tracepoints
> >> (BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
> >> and restructuring perf_syscall_enter/exit() to run BPF programs in
> >> faultable context before the preempt-disabled per-cpu buffer
> >> allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
> >> lifetime protection, following the uprobe pattern. This adds
> >> rcu_tasks_trace overhead for all classic tracepoint BPF programs on
> >> syscall tracepoints, not just sleepable ones.
> >>
> >> Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
> >> BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
> >> load-time and attach-time checks to reject sleepable programs on
> >> non-faultable tracepoints.
> >>
> >> Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
> >> raw_tracepoint.s, tp.s, and tracepoint.s.
> >>
> >> Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
> >> cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
> >> tests for non-faultable tracepoints.
> >>
> >> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> >> ---
> >> Changes in v6:
> >> - Remove recursion check from trace_call_bpf_faultable(), sleepable
> >> tracepoints are called from syscall enter/exit, no recursion is
> >> possible.(Kumar)
> >
> > Is this true? Can't the same program reenter due to the same syscall
> > from a different task which preempted the task running the previous
> > program?
> > Just asking because it wasn't clear to me, since we discussed
> > bpf_prog_get_recursion_context() instead.
> >
> Good question, this is my understanding, though I'm not 100% confident:
> There are 2 mechanisms we discussed:
> * bpf_prog_get_recursion_context() - per program recursion guard, I
> understand this is used to prevent self-recursion, where BPF program
> infinitely generates calls to itself.
> * bpf_prog_active - per CPU counter used to prevent deadlock between
> instrumentation BPF prog (tracepoint, kprobe) and map operation
> generating an event triggering those BPF progs when holding map bucket
> spinlock.
>
> I suggest that neither of these applies to this use case, because we open
> up sleepable only for syscall enter/exit:
> * Deadlock on the map bucket spinlock is not possible (bpf_prog_active)
> * A BPF program can't generate another syscall enter/exit, so no infinite
> recursion (bpf_prog_get_recursion_context())
>
> As for the situation that you describe - the same program re-entering due
> to the same syscall from a different task which preempted the task running
> the previous one - this is normal, supported behaviour, similar to what
> we have when an NMI runs on the same CPU and calls a common BPF subprog, or
> when two tasks schedule sleepable bpf_task_work callbacks that can
> interleave on the same CPU.
>
> Let me know if my understanding is not correct. Thanks for checking this
> series!
I think it's fine to skip recursion check, but
make sure private stack is disabled too.
See bpf_enable_priv_stack() and bpf_prog_check_recur().
priv stack is per-cpu per-prog.
So if sleepable prog attached to a syscall gets preempted
and that syscall fires again bad things happen with priv stack.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
2026-03-24 23:57 ` Alexei Starovoitov
@ 2026-03-24 23:59 ` Alexei Starovoitov
0 siblings, 0 replies; 16+ messages in thread
From: Alexei Starovoitov @ 2026-03-24 23:59 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: Kumar Kartikeya Dwivedi, bpf, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Martin Lau, Kernel Team, Eduard, Mykyta Yatsenko
On Tue, Mar 24, 2026 at 4:57 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Mar 24, 2026 at 3:20 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
> >
> > Kumar Kartikeya Dwivedi <memxor@gmail.com> writes:
> >
> > > On Tue, 24 Mar 2026 at 20:03, Mykyta Yatsenko
> > > <mykyta.yatsenko5@gmail.com> wrote:
> > >>
> > >> This series adds support for sleepable BPF programs attached to raw
> > >> tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
> > >> tracepoints (tp). The motivation is to allow BPF programs on syscall
> > >> tracepoints to use sleepable helpers such as bpf_copy_from_user(),
> > >> enabling reliable user memory reads that can page-fault.
> > >>
> > >> This series removes restriction for faultable tracepoints:
> > >>
> > >> Patch 1 modifies __bpf_trace_run() to support sleepable programs
> > >> following the uprobe_prog_run() pattern: use explicit
> > >> rcu_read_lock_trace() for sleepable programs and rcu_read_lock() for
> > >> non-sleepable programs. Also removes preempt_disable from the faultable
> > >> tracepoint BPF callback wrapper, since migration protection and RCU
> > >> locking are now managed per-program inside __bpf_trace_run().
> > >>
> > >> Patch 2 renames bpf_prog_run_array_uprobe() to
> > >> bpf_prog_run_array_sleepable() to support new usecase.
> > >>
> > >> Patch 3 adds sleepable support for classic tracepoints
> > >> (BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
> > >> and restructuring perf_syscall_enter/exit() to run BPF programs in
> > >> faultable context before the preempt-disabled per-cpu buffer
> > >> allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
> > >> lifetime protection, following the uprobe pattern. This adds
> > >> rcu_tasks_trace overhead for all classic tracepoint BPF programs on
> > >> syscall tracepoints, not just sleepable ones.
> > >>
> > >> Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
> > >> BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
> > >> load-time and attach-time checks to reject sleepable programs on
> > >> non-faultable tracepoints.
> > >>
> > >> Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
> > >> raw_tracepoint.s, tp.s, and tracepoint.s.
> > >>
> > >> Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
> > >> cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
> > >> tests for non-faultable tracepoints.
> > >>
> > >> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> > >> ---
> > >> Changes in v6:
> > >> - Remove recursion check from trace_call_bpf_faultable(), sleepable
> > >> tracepoints are called from syscall enter/exit, no recursion is
> > >> possible.(Kumar)
> > >
> > > Is this true? Can't the same program reenter due to the same syscall
> > > from a different task which preempted the task running the previous
> > > program?
> > > Just asking because it wasn't clear to me, since we discussed
> > > bpf_prog_get_recursion_context() instead.
> > >
> > Good question, this is my understanding, though I'm not 100% confident:
> > There are 2 mechanisms we discussed:
> > * bpf_prog_get_recursion_context() - per program recursion guard, I
> > understand this is used to prevent self-recursion, where BPF program
> > infinitely generates calls to itself.
> > * bpf_prog_active - per CPU counter used to prevent deadlock between
> > instrumentation BPF prog (tracepoint, kprobe) and map operation
> > generating an event triggering those BPF progs when holding map bucket
> > spinlock.
> >
> > I suggest that neither of these applies to this use case, because we open
> > up sleepable only for syscall enter/exit:
> > * Deadlock on the map bucket spinlock is not possible (bpf_prog_active)
> > * A BPF program can't generate another syscall enter/exit, so no infinite
> > recursion (bpf_prog_get_recursion_context())
> >
> > As for the situation that you describe - the same program re-entering due
> > to the same syscall from a different task which preempted the task running
> > the previous one - this is normal, supported behaviour, similar to what
> > we have when an NMI runs on the same CPU and calls a common BPF subprog, or
> > when two tasks schedule sleepable bpf_task_work callbacks that can
> > interleave on the same CPU.
> >
> > Let me know if my understanding is not correct. Thanks for checking this
> > series!
>
> I think it's fine to skip recursion check, but
> make sure private stack is disabled too.
> See bpf_enable_priv_stack() and bpf_prog_check_recur().
>
> priv stack is per-cpu per-prog.
> So if sleepable prog attached to a syscall gets preempted
> and that syscall fires again bad things happen with priv stack.
or add per-prog recursion check similar to all trampoline based progs
and allow priv stack.
* [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
@ 2026-03-25 18:55 Mykyta Yatsenko
2026-03-26 16:52 ` Kumar Kartikeya Dwivedi
0 siblings, 1 reply; 16+ messages in thread
From: Mykyta Yatsenko @ 2026-03-25 18:55 UTC (permalink / raw)
To: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87, memxor
Cc: Mykyta Yatsenko, Peter Zijlstra, Steven Rostedt
This series adds support for sleepable BPF programs attached to raw
tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
tracepoints (tp). The motivation is to allow BPF programs on syscall
tracepoints to use sleepable helpers such as bpf_copy_from_user(),
enabling reliable user memory reads that can page-fault.
This series removes the restriction for faultable tracepoints:
Patch 1 modifies __bpf_trace_run() to support sleepable programs
following the uprobe_prog_run() pattern: use explicit
rcu_read_lock_trace() for sleepable programs and rcu_read_lock() for
non-sleepable programs. Also removes preempt_disable from the faultable
tracepoint BPF callback wrapper, since migration protection and RCU
locking are now managed per-program inside __bpf_trace_run().
Patch 2 renames bpf_prog_run_array_uprobe() to
bpf_prog_run_array_sleepable() to support the new use case.
Patch 3 adds sleepable support for classic tracepoints
(BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
and restructuring perf_syscall_enter/exit() to run BPF programs in
faultable context before the preempt-disabled per-cpu buffer
allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
lifetime protection, following the uprobe pattern. This adds
rcu_tasks_trace overhead for all classic tracepoint BPF programs on
syscall tracepoints, not just sleepable ones.
Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
load-time and attach-time checks to reject sleepable programs on
non-faultable tracepoints.
Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
raw_tracepoint.s, tp.s, and tracepoint.s.
Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
tests for non-faultable tracepoints.
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
Changes in v7:
- Add a recursion check (bpf_prog_get_recursion_context()) to make sure
the private stack is safe when a sleepable program is preempted and
re-entered (Alexei, Kumar)
- Use combined rcu_read_lock_dont_migrate() instead of separate
rcu_read_lock()/migrate_disable() calls for non-sleepable path (Alexei)
- Link to v6: https://lore.kernel.org/bpf/20260324-sleepable_tracepoints-v6-0-81bab3a43f25@meta.com/
Changes in v6:
- Remove the recursion check from trace_call_bpf_faultable(); sleepable
tracepoints are called from syscall enter/exit, so no recursion is
possible. (Kumar)
- Refactor bpf_prog_run_array_uprobe() to support tracepoints
usecase cleanly (Kumar)
- Link to v5: https://lore.kernel.org/r/20260316-sleepable_tracepoints-v5-0-85525de71d25@meta.com
Changes in v5:
- Addressed AI review: zero initialize struct pt_regs in
perf_call_bpf_enter(); changed handling tp.s and tracepoint.s in
attach_tp() in libbpf.
- Updated commit messages
- Link to v4: https://lore.kernel.org/r/20260313-sleepable_tracepoints-v4-0-debc688a66b3@meta.com
Changes in v4:
- Follow uprobe_prog_run() pattern with explicit rcu_read_lock_trace()
instead of relying on outer rcu_tasks_trace lock
- Add sleepable support for classic raw tracepoints (raw_tp.s)
- Add sleepable support for classic tracepoints (tp.s) with new
trace_call_bpf_faultable() and restructured perf_syscall_enter/exit()
- Add raw_tp.s, raw_tracepoint.s, tp.s, tracepoint.s SEC_DEF handlers
- Replace growing type enumeration in error message with generic
"program of this type cannot be sleepable"
- Use PT_REGS_PARM1_SYSCALL (non-CO-RE) in BTF test
- Add classic raw_tp and classic tracepoint sleepable tests
- Link to v3: https://lore.kernel.org/r/20260311-sleepable_tracepoints-v3-0-3e9bbde5bd22@meta.com
Changes in v3:
- Moved faultable tracepoint check from attach time to load time in
bpf_check_attach_target(), providing a clear verifier error message
- Folded preempt_disable removal into the sleepable execution path
patch
- Used RUN_TESTS() with __failure/__msg for negative test case instead
of explicit userspace program
- Reduced series from 6 patches to 4
- Link to v2: https://lore.kernel.org/r/20260225-sleepable_tracepoints-v2-0-0330dafd650f@meta.com
Changes in v2:
- Address AI review points - modified the order of the patches
- Link to v1: https://lore.kernel.org/bpf/20260218-sleepable_tracepoints-v1-0-ec2705497208@meta.com/
---
Mykyta Yatsenko (6):
bpf: Add sleepable support for raw tracepoint programs
bpf: Rename bpf_prog_run_array_uprobe() to bpf_prog_run_array_sleepable()
bpf: Add sleepable support for classic tracepoint programs
bpf: Verifier support for sleepable tracepoint programs
libbpf: Add section handlers for sleepable tracepoints
selftests/bpf: Add tests for sleepable tracepoint programs
include/linux/bpf.h | 24 +++-
include/linux/trace_events.h | 6 +
include/trace/bpf_probe.h | 2 -
kernel/bpf/syscall.c | 5 +
kernel/bpf/verifier.c | 13 ++-
kernel/events/core.c | 9 ++
kernel/trace/bpf_trace.c | 49 ++++++++-
kernel/trace/trace_syscalls.c | 110 ++++++++++---------
kernel/trace/trace_uprobe.c | 2 +-
tools/lib/bpf/libbpf.c | 39 +++++--
.../bpf/prog_tests/sleepable_tracepoints.c | 121 +++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints.c | 117 ++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints_fail.c | 18 +++
tools/testing/selftests/bpf/verifier/sleepable.c | 17 ++-
14 files changed, 459 insertions(+), 73 deletions(-)
---
base-commit: 6c8e1a9eee0fec802b542dadf768c30c2a183b3c
change-id: 20260216-sleepable_tracepoints-381ae1410550
Best regards,
--
Mykyta Yatsenko <yatsenko@meta.com>
* Re: [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
2026-03-25 18:55 Mykyta Yatsenko
@ 2026-03-26 16:52 ` Kumar Kartikeya Dwivedi
0 siblings, 0 replies; 16+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-26 16:52 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: bpf, ast, andrii, daniel, kafai, kernel-team, eddyz87,
Mykyta Yatsenko, Peter Zijlstra, Steven Rostedt
On Wed, 25 Mar 2026 at 19:55, Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> This series adds support for sleepable BPF programs attached to raw
> tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
> tracepoints (tp). The motivation is to allow BPF programs on syscall
> tracepoints to use sleepable helpers such as bpf_copy_from_user(),
> enabling reliable user memory reads that can page-fault.
>
> This series removes restriction for faultable tracepoints:
>
> Patch 1 modifies __bpf_trace_run() to support sleepable programs
> following the uprobe_prog_run() pattern: use explicit
> rcu_read_lock_trace() for sleepable programs and rcu_read_lock() for
> non-sleepable programs. Also removes preempt_disable from the faultable
> tracepoint BPF callback wrapper, since migration protection and RCU
> locking are now managed per-program inside __bpf_trace_run().
>
> Patch 2 renames bpf_prog_run_array_uprobe() to
> bpf_prog_run_array_sleepable() to support new usecase.
>
> Patch 3 adds sleepable support for classic tracepoints
> (BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
> and restructuring perf_syscall_enter/exit() to run BPF programs in
> faultable context before the preempt-disabled per-cpu buffer
> allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
> lifetime protection, following the uprobe pattern. This adds
> rcu_tasks_trace overhead for all classic tracepoint BPF programs on
> syscall tracepoints, not just sleepable ones.
>
> Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
> BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
> load-time and attach-time checks to reject sleepable programs on
> non-faultable tracepoints.
>
> Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
> raw_tracepoint.s, tp.s, and tracepoint.s.
>
> Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
> cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
> tests for non-faultable tracepoints.
>
> Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
> ---
> Changes in v7:
> - Add recursion check (bpf_prog_get_recursion_context()) to make sure
> private stack is safe when sleepable program is preempted by itself
> (Alexei, Kumar)
> - Use combined rcu_read_lock_dont_migrate() instead of separate
> rcu_read_lock()/migrate_disable() calls for non-sleepable path (Alexei)
> - Link to v6: https://lore.kernel.org/bpf/20260324-sleepable_tracepoints-v6-0-81bab3a43f25@meta.com/
>
It generally looks good to me FWIW.
I looked through Sashiko's reviews [0]; everything except the bit
about bpf_prog_test_run() looks bogus to me, but please double-check.
The bpf_prog_test_run() bit is definitely real, I can get a splat by doing
a run for a sleepable raw_tp prog. Please add a test case to cover this.
I think you re-sent it as v6 by mistake, please fix that in the respin
(it should be v8 now).
The tests have the copyright year as 2025; since you will respin, that is
also worth addressing while at it.
[0]: https://sashiko.dev/#/patchset/20260325-sleepable_tracepoints-v6-0-2b182dacea13%40meta.com
pw-bot: cr
Thread overview: 16+ messages
2026-03-24 19:03 [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 1/6] bpf: Add sleepable support for raw " Mykyta Yatsenko
2026-03-24 19:46 ` Alexei Starovoitov
2026-03-24 22:25 ` Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 2/6] bpf: Rename bpf_prog_run_array_uprobe() to bpf_prog_run_array_sleepable() Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 3/6] bpf: Add sleepable support for classic tracepoint programs Mykyta Yatsenko
2026-03-24 19:56 ` Alexei Starovoitov
2026-03-24 19:03 ` [PATCH bpf-next v6 4/6] bpf: Verifier support for sleepable " Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 5/6] libbpf: Add section handlers for sleepable tracepoints Mykyta Yatsenko
2026-03-24 19:03 ` [PATCH bpf-next v6 6/6] selftests/bpf: Add tests for sleepable tracepoint programs Mykyta Yatsenko
2026-03-24 20:05 ` [PATCH bpf-next v6 0/6] bpf: Add support " Kumar Kartikeya Dwivedi
2026-03-24 22:20 ` Mykyta Yatsenko
2026-03-24 23:57 ` Alexei Starovoitov
2026-03-24 23:59 ` Alexei Starovoitov
-- strict thread matches above, loose matches on Subject: below --
2026-03-25 18:55 Mykyta Yatsenko
2026-03-26 16:52 ` Kumar Kartikeya Dwivedi