public inbox for linux-kernel@vger.kernel.org
* [PATCH v6 0/3] tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast
@ 2026-01-26 23:11 Steven Rostedt
  2026-01-26 23:11 ` [PATCH v6 1/3] tracing: perf: Have perf tracepoint callbacks always disable preemption Steven Rostedt
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Steven Rostedt @ 2026-01-26 23:11 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel, bpf
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Paul E. McKenney, Sebastian Andrzej Siewior, Alexei Starovoitov


The current use of guard(preempt_notrace)() within __DECLARE_TRACE()
to protect invocation of __DO_TRACE_CALL() means that BPF programs
attached to tracepoints are non-preemptible.  This is unhelpful in
real-time systems, whose users apparently wish to use BPF while also
achieving low latencies.

Change the protection of tracepoints to use SRCU-fast instead.
This allows the callbacks to be preempted, which in turn means
the callbacks themselves must be able to handle this newfound
preemptibility.

For perf, add a guard(preempt) inside its handler to keep the old behavior
of perf events being called with preemption disabled.

For BPF, add migration disabling to its handler. Specifically, replace
the rcu_read_lock() with rcu_read_lock_dont_migrate() and extend it to
cover more of the BPF callback handler.

[ I would have sent this out earlier, but had a death in the family
  which caused everything to be postponed ]

Changes since v5: https://patch.msgid.link/20260108220550.2f6638f3@fedora

- Add separate patch for perf to call preempt_disable()

- Add patch that has bpf call migrate_disable() directly.

- Just change from preempt_disable() to srcu_fast() always;
  do not do anything different for PREEMPT_RT.
  Now that BPF disables migration directly, do not have tracepoints
  disable migration in their code.

Steven Rostedt (3):
      tracing: perf: Have perf tracepoint callbacks always disable preemption
      bpf: Have __bpf_trace_run() use rcu_read_lock_dont_migrate()
      tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast

----
 include/linux/tracepoint.h   |  9 +++++----
 include/trace/perf.h         |  4 ++--
 include/trace/trace_events.h |  4 ++--
 kernel/trace/bpf_trace.c     |  5 ++---
 kernel/tracepoint.c          | 18 ++++++++++++++----
 5 files changed, 25 insertions(+), 15 deletions(-)


Thread overview: 9+ messages
2026-01-26 23:11 [PATCH v6 0/3] tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast Steven Rostedt
2026-01-26 23:11 ` [PATCH v6 1/3] tracing: perf: Have perf tracepoint callbacks always disable preemption Steven Rostedt
2026-01-26 23:11 ` [PATCH v6 2/3] bpf: Have __bpf_trace_run() use rcu_read_lock_dont_migrate() Steven Rostedt
2026-01-26 23:11 ` [PATCH v6 3/3] tracing: Guard __DECLARE_TRACE() use of __DO_TRACE_CALL() with SRCU-fast Steven Rostedt
2026-01-27  2:39 ` [PATCH v6 0/3] " Steven Rostedt
2026-01-27 23:18   ` Paul E. McKenney
2026-01-30  0:33     ` Steven Rostedt
2026-01-30  1:32       ` Paul E. McKenney
2026-01-30  2:50         ` Steven Rostedt
