* [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded call sites
@ 2026-03-23 16:00 Vineeth Pillai (Google)
2026-03-23 16:00 ` [PATCH v2 01/19] tracepoint: Add trace_call__##name() API Vineeth Pillai (Google)
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Vineeth Pillai (Google) @ 2026-03-23 16:00 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra, Dmitry Ilvokhin
Cc: Vineeth Pillai (Google), Masami Hiramatsu, Mathieu Desnoyers,
Ingo Molnar, Jens Axboe, io-uring, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Marcelo Ricardo Leitner, Xin Long, Jon Maloy, Aaron Conole,
Eelco Chaudron, Ilya Maximets, netdev, bpf, linux-sctp,
tipc-discussion, dev, Jiri Pirko, Oded Gabbay, Koby Elbaz,
dri-devel, Rafael J. Wysocki, Viresh Kumar, Gautham R. Shenoy,
Huang Rui, Mario Limonciello, Len Brown, Srinivas Pandruvada,
linux-pm, MyungJoo Ham, Kyungmin Park, Chanwoo Choi,
Christian König, Sumit Semwal, linaro-mm-sig, Eddie James,
Andrew Jeffery, Joel Stanley, linux-fsi, David Airlie,
Simona Vetter, Alex Deucher, Danilo Krummrich, Matthew Brost,
Philipp Stanner, Harry Wentland, Leo Li, amd-gfx, Jiri Kosina,
Benjamin Tissoires, linux-input, Wolfram Sang, linux-i2c,
Mark Brown, Michael Hennerich, Nuno Sá, linux-spi,
James E.J. Bottomley, Martin K. Petersen, linux-scsi, Chris Mason,
David Sterba, linux-btrfs, Thomas Gleixner, Andrew Morton,
SeongJae Park, linux-mm, Borislav Petkov, Dave Hansen, x86,
linux-trace-kernel, linux-kernel
When a caller already guards a tracepoint with an explicit enabled check:
if (trace_foo_enabled() && cond)
trace_foo(args);
trace_foo() internally re-evaluates the static_branch_unlikely() key.
Since static branches are patched binary instructions, the compiler
cannot fold the two evaluations, so every such site pays the cost twice.
This series introduces trace_call__##name() as a companion to
trace_##name(). It calls __do_trace_##name() directly, bypassing the
redundant static-branch re-check, while preserving all other correctness
properties of the normal path (RCU-watching assertion, might_fault() for
syscall tracepoints). The internal __do_trace_##name() symbol is not
leaked to call sites; trace_call__##name() is the only new public API.
if (trace_foo_enabled() && cond)
trace_call__foo(args); /* calls __do_trace_foo() directly */
The first patch adds the three-location change to
include/linux/tracepoint.h (__DECLARE_TRACE, __DECLARE_TRACE_SYSCALL,
and the !TRACEPOINTS_ENABLED stub). The remaining 18 patches
mechanically convert all guarded call sites found in the tree:
kernel/, io_uring/, net/, accel/habanalabs, cpufreq/, devfreq/,
dma-buf/, fsi/, drm/, HID, i2c/, spi/, scsi/ufs/, btrfs/,
net/devlink/, kernel/time/, kernel/trace/, mm/damon/, and arch/x86/.
This series is motivated by two points from the discussion around
Dmitry Ilvokhin's locking tracepoint instrumentation series: Peter
Zijlstra's observation that compilers cannot optimize static branches,
so guarded call sites end up evaluating the static branch twice for no
reason, and Steven Rostedt's suggestion to add a proper API instead of
exposing internal implementation details like __do_trace_##name()
directly to call sites:
https://lore.kernel.org/linux-trace-kernel/8298e098d3418cb446ef396f119edac58a3414e9.1772642407.git.d@ilvokhin.com
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Changes in v2:
- Renamed trace_invoke_##name() to trace_call__##name() (double
underscore) per review comments.
- Added 4 new patches covering sites missed in v1, found using
coccinelle to scan the tree (Keith Busch):
* net/devlink: guarded tracepoint_enabled() block in trap.c
* kernel/time: early-return guard in tick-sched.c (tick_stop)
* kernel/trace: early-return guard in trace_benchmark.c
* mm/damon: early-return guard in core.c
* arch/x86: do_trace_*() wrapper functions in lib/msr.c, which
are called exclusively from tracepoint_enabled()-guarded sites
in asm/msr.h
v1: https://lore.kernel.org/linux-trace-kernel/abSqrJ1J59RQC47U@kbusch-mbp/
Vineeth Pillai (Google) (19):
tracepoint: Add trace_call__##name() API
kernel: Use trace_call__##name() at guarded tracepoint call sites
io_uring: Use trace_call__##name() at guarded tracepoint call sites
net: Use trace_call__##name() at guarded tracepoint call sites
accel/habanalabs: Use trace_call__##name() at guarded tracepoint call
sites
cpufreq: Use trace_call__##name() at guarded tracepoint call sites
devfreq: Use trace_call__##name() at guarded tracepoint call sites
dma-buf: Use trace_call__##name() at guarded tracepoint call sites
fsi: Use trace_call__##name() at guarded tracepoint call sites
drm: Use trace_call__##name() at guarded tracepoint call sites
HID: Use trace_call__##name() at guarded tracepoint call sites
i2c: Use trace_call__##name() at guarded tracepoint call sites
spi: Use trace_call__##name() at guarded tracepoint call sites
scsi: ufs: Use trace_call__##name() at guarded tracepoint call sites
btrfs: Use trace_call__##name() at guarded tracepoint call sites
net: devlink: Use trace_call__##name() at guarded tracepoint call
sites
kernel: time, trace: Use trace_call__##name() at guarded tracepoint
call sites
mm: damon: Use trace_call__##name() at guarded tracepoint call sites
x86: msr: Use trace_call__##name() at guarded tracepoint call sites
arch/x86/lib/msr.c | 6 +++---
drivers/accel/habanalabs/common/device.c | 12 ++++++------
drivers/accel/habanalabs/common/mmu/mmu.c | 3 ++-
drivers/accel/habanalabs/common/pci/pci.c | 4 ++--
drivers/cpufreq/amd-pstate.c | 10 +++++-----
drivers/cpufreq/cpufreq.c | 2 +-
drivers/cpufreq/intel_pstate.c | 2 +-
drivers/devfreq/devfreq.c | 2 +-
drivers/dma-buf/dma-fence.c | 4 ++--
drivers/fsi/fsi-master-aspeed.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 4 ++--
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +-
drivers/gpu/drm/scheduler/sched_entity.c | 4 ++--
drivers/hid/intel-ish-hid/ipc/pci-ish.c | 2 +-
drivers/i2c/i2c-core-slave.c | 2 +-
drivers/spi/spi-axi-spi-engine.c | 4 ++--
drivers/ufs/core/ufshcd.c | 12 ++++++------
fs/btrfs/extent_map.c | 4 ++--
fs/btrfs/raid56.c | 4 ++--
include/linux/tracepoint.h | 11 +++++++++++
io_uring/io_uring.h | 2 +-
kernel/irq_work.c | 2 +-
kernel/sched/ext.c | 2 +-
kernel/smp.c | 2 +-
kernel/time/tick-sched.c | 12 ++++++------
kernel/trace/trace_benchmark.c | 2 +-
mm/damon/core.c | 2 +-
net/core/dev.c | 2 +-
net/core/xdp.c | 2 +-
net/devlink/trap.c | 2 +-
net/openvswitch/actions.c | 2 +-
net/openvswitch/datapath.c | 2 +-
net/sctp/outqueue.c | 2 +-
net/tipc/node.c | 2 +-
35 files changed, 74 insertions(+), 62 deletions(-)
--
2.53.0
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH v2 01/19] tracepoint: Add trace_call__##name() API
2026-03-23 16:00 [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded call sites Vineeth Pillai (Google)
@ 2026-03-23 16:00 ` Vineeth Pillai (Google)
2026-03-26 1:28 ` Masami Hiramatsu
2026-03-23 16:00 ` [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites Vineeth Pillai (Google)
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Vineeth Pillai (Google) @ 2026-03-23 16:00 UTC (permalink / raw)
To: Steven Rostedt, Peter Zijlstra, Dmitry Ilvokhin
Cc: Vineeth Pillai (Google), Masami Hiramatsu, Mathieu Desnoyers,
Ingo Molnar, Jens Axboe, io-uring, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
Marcelo Ricardo Leitner, Xin Long, Jon Maloy, Aaron Conole,
Eelco Chaudron, Ilya Maximets, netdev, bpf, linux-sctp,
tipc-discussion, dev, Jiri Pirko, Oded Gabbay, Koby Elbaz,
dri-devel, Rafael J. Wysocki, Viresh Kumar, Gautham R. Shenoy,
Huang Rui, Mario Limonciello, Len Brown, Srinivas Pandruvada,
linux-pm, MyungJoo Ham, Kyungmin Park, Chanwoo Choi,
Christian König, Sumit Semwal, linaro-mm-sig, Eddie James,
Andrew Jeffery, Joel Stanley, linux-fsi, David Airlie,
Simona Vetter, Alex Deucher, Danilo Krummrich, Matthew Brost,
Philipp Stanner, Harry Wentland, Leo Li, amd-gfx, Jiri Kosina,
Benjamin Tissoires, linux-input, Wolfram Sang, linux-i2c,
Mark Brown, Michael Hennerich, Nuno Sá, linux-spi,
James E.J. Bottomley, Martin K. Petersen, linux-scsi, Chris Mason,
David Sterba, linux-btrfs, Thomas Gleixner, Andrew Morton,
SeongJae Park, linux-mm, Borislav Petkov, Dave Hansen, x86,
linux-trace-kernel, linux-kernel
Add trace_call__##name() as a companion to trace_##name(). When a
caller already guards a tracepoint with an explicit enabled check:
if (trace_foo_enabled() && cond)
trace_foo(args);
trace_foo() internally repeats the static_branch_unlikely() test, which
the compiler cannot fold since static branches are patched binary
instructions. This results in two static-branch evaluations for every
guarded call site.
trace_call__##name() calls __do_trace_##name() directly, skipping the
redundant static-branch re-check. This avoids leaking the internal
__do_trace_##name() symbol into call sites while still eliminating the
double evaluation:
if (trace_foo_enabled() && cond)
trace_invoke_foo(args); /* calls __do_trace_foo() directly */
Three locations are updated:
- __DECLARE_TRACE: invoke form omits static_branch_unlikely, retains
the LOCKDEP RCU-watching assertion.
- __DECLARE_TRACE_SYSCALL: same, plus retains might_fault().
- !TRACEPOINTS_ENABLED stub: empty no-op so callers compile cleanly
when tracepoints are compiled out.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineeth Pillai (Google) <vineeth@bitbyteword.org>
Assisted-by: Claude:claude-sonnet-4-6
---
include/linux/tracepoint.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 22ca1c8b54f32..ed969705341f1 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -294,6 +294,10 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
} \
+ } \
+ static inline void trace_call__##name(proto) \
+ { \
+ __do_trace_##name(args); \
}
#define __DECLARE_TRACE_SYSCALL(name, proto, args, data_proto) \
@@ -313,6 +317,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
} \
+ } \
+ static inline void trace_call__##name(proto) \
+ { \
+ might_fault(); \
+ __do_trace_##name(args); \
}
/*
@@ -398,6 +407,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
#define __DECLARE_TRACE_COMMON(name, proto, args, data_proto) \
static inline void trace_##name(proto) \
{ } \
+ static inline void trace_call__##name(proto) \
+ { } \
static inline int \
register_trace_##name(void (*probe)(data_proto), \
void *data) \
--
2.53.0
* [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites
2026-03-23 16:00 [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded call sites Vineeth Pillai (Google)
2026-03-23 16:00 ` [PATCH v2 01/19] tracepoint: Add trace_call__##name() API Vineeth Pillai (Google)
@ 2026-03-23 16:00 ` Vineeth Pillai (Google)
2026-03-26 10:24 ` Rafael J. Wysocki
2026-03-27 9:10 ` Gautham R. Shenoy
2026-03-23 16:00 ` [PATCH v2 07/19] devfreq: " Vineeth Pillai (Google)
2026-03-24 14:28 ` [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded " Steven Rostedt
3 siblings, 2 replies; 8+ messages in thread
From: Vineeth Pillai (Google) @ 2026-03-23 16:00 UTC (permalink / raw)
Cc: Vineeth Pillai (Google), Steven Rostedt, Peter Zijlstra,
Huang Rui, Gautham R. Shenoy, Mario Limonciello, Perry Yuan,
Rafael J. Wysocki, Viresh Kumar, Srinivas Pandruvada, Len Brown,
linux-pm, linux-kernel, linux-trace-kernel
Replace trace_foo() with the new trace_call__foo() at sites already
guarded by trace_foo_enabled(), avoiding a redundant
static_branch_unlikely() re-evaluation inside the tracepoint.
trace_call__foo() calls the tracepoint callbacks directly without
utilizing the static branch again.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineeth Pillai (Google) <vineeth@bitbyteword.org>
Assisted-by: Claude:claude-sonnet-4-6
---
drivers/cpufreq/amd-pstate.c | 10 +++++-----
drivers/cpufreq/cpufreq.c | 2 +-
drivers/cpufreq/intel_pstate.c | 2 +-
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 5aa9fcd80cf51..4c47324aa2f73 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -247,7 +247,7 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = READ_ONCE(cpudata->perf);
- trace_amd_pstate_epp_perf(cpudata->cpu,
+ trace_call__amd_pstate_epp_perf(cpudata->cpu,
perf.highest_perf,
epp,
min_perf,
@@ -298,7 +298,7 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = cpudata->perf;
- trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
+ trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
epp,
FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
cpudata->cppc_req_cached),
@@ -343,7 +343,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = cpudata->perf;
- trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
+ trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
epp,
FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
cpudata->cppc_req_cached),
@@ -507,7 +507,7 @@ static int shmem_update_perf(struct cpufreq_policy *policy, u8 min_perf,
if (trace_amd_pstate_epp_perf_enabled()) {
union perf_cached perf = READ_ONCE(cpudata->perf);
- trace_amd_pstate_epp_perf(cpudata->cpu,
+ trace_call__amd_pstate_epp_perf(cpudata->cpu,
perf.highest_perf,
epp,
min_perf,
@@ -588,7 +588,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
}
if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
- trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
+ trace_call__amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
cpudata->cur.mperf, cpudata->cur.aperf, cpudata->cur.tsc,
cpudata->cpu, fast_switch);
}
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 277884d91913c..58901047eae5a 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -2222,7 +2222,7 @@ unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
if (trace_cpu_frequency_enabled()) {
for_each_cpu(cpu, policy->cpus)
- trace_cpu_frequency(freq, cpu);
+ trace_call__cpu_frequency(freq, cpu);
}
return freq;
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 11c58af419006..70be952209144 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -3132,7 +3132,7 @@ static void intel_cpufreq_trace(struct cpudata *cpu, unsigned int trace_type, in
return;
sample = &cpu->sample;
- trace_pstate_sample(trace_type,
+ trace_call__pstate_sample(trace_type,
0,
old_pstate,
cpu->pstate.current_pstate,
--
2.53.0
* [PATCH v2 07/19] devfreq: Use trace_call__##name() at guarded tracepoint call sites
2026-03-23 16:00 [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded call sites Vineeth Pillai (Google)
2026-03-23 16:00 ` [PATCH v2 01/19] tracepoint: Add trace_call__##name() API Vineeth Pillai (Google)
2026-03-23 16:00 ` [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites Vineeth Pillai (Google)
@ 2026-03-23 16:00 ` Vineeth Pillai (Google)
2026-03-24 14:28 ` [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded " Steven Rostedt
3 siblings, 0 replies; 8+ messages in thread
From: Vineeth Pillai (Google) @ 2026-03-23 16:00 UTC (permalink / raw)
Cc: Vineeth Pillai (Google), Steven Rostedt, Peter Zijlstra,
MyungJoo Ham, Kyungmin Park, Chanwoo Choi, linux-pm, linux-kernel,
linux-trace-kernel
Replace trace_foo() with the new trace_call__foo() at sites already
guarded by trace_foo_enabled(), avoiding a redundant
static_branch_unlikely() re-evaluation inside the tracepoint.
trace_call__foo() calls the tracepoint callbacks directly without
utilizing the static branch again.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineeth Pillai (Google) <vineeth@bitbyteword.org>
Assisted-by: Claude:claude-sonnet-4-6
---
drivers/devfreq/devfreq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index c0a74091b9041..d1b27d9b753df 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -370,7 +370,7 @@ static int devfreq_set_target(struct devfreq *devfreq, unsigned long new_freq,
* change order of between devfreq device and passive devfreq device.
*/
if (trace_devfreq_frequency_enabled() && new_freq != cur_freq)
- trace_devfreq_frequency(devfreq, new_freq, cur_freq);
+ trace_call__devfreq_frequency(devfreq, new_freq, cur_freq);
freqs.new = new_freq;
devfreq_notify_transition(devfreq, &freqs, DEVFREQ_POSTCHANGE);
--
2.53.0
* Re: [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded call sites
2026-03-23 16:00 [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded call sites Vineeth Pillai (Google)
` (2 preceding siblings ...)
2026-03-23 16:00 ` [PATCH v2 07/19] devfreq: " Vineeth Pillai (Google)
@ 2026-03-24 14:28 ` Steven Rostedt
3 siblings, 0 replies; 8+ messages in thread
From: Steven Rostedt @ 2026-03-24 14:28 UTC (permalink / raw)
To: Vineeth Pillai (Google)
Cc: Peter Zijlstra, Dmitry Ilvokhin, Masami Hiramatsu,
Mathieu Desnoyers, Ingo Molnar, Jens Axboe, io-uring,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Alexei Starovoitov, Daniel Borkmann, Marcelo Ricardo Leitner,
Xin Long, Jon Maloy, Aaron Conole, Eelco Chaudron, Ilya Maximets,
netdev, bpf, linux-sctp, tipc-discussion, dev, Jiri Pirko,
Oded Gabbay, Koby Elbaz, dri-devel, Rafael J. Wysocki,
Viresh Kumar, Gautham R. Shenoy, Huang Rui, Mario Limonciello,
Len Brown, Srinivas Pandruvada, linux-pm, MyungJoo Ham,
Kyungmin Park, Chanwoo Choi, Christian König, Sumit Semwal,
linaro-mm-sig, Eddie James, Andrew Jeffery, Joel Stanley,
linux-fsi, David Airlie, Simona Vetter, Alex Deucher,
Danilo Krummrich, Matthew Brost, Philipp Stanner, Harry Wentland,
Leo Li, amd-gfx, Jiri Kosina, Benjamin Tissoires, linux-input,
Wolfram Sang, linux-i2c, Mark Brown, Michael Hennerich,
Nuno Sá, linux-spi, James E.J. Bottomley, Martin K. Petersen,
linux-scsi, Chris Mason, David Sterba, linux-btrfs,
Thomas Gleixner, Andrew Morton, SeongJae Park, linux-mm,
Borislav Petkov, Dave Hansen, x86, linux-trace-kernel,
linux-kernel
On Mon, 23 Mar 2026 12:00:19 -0400
"Vineeth Pillai (Google)" <vineeth@bitbyteword.org> wrote:
> When a caller already guards a tracepoint with an explicit enabled check:
>
> if (trace_foo_enabled() && cond)
> trace_foo(args);
Thanks Vineeth!
I'm going to start pulling in this series. I'll take the first patch, and
then any patch that has an Acked-by or Reviewed-by from the maintainer.
For patches without acks, I'll leave alone and then after the first patch
gets merged into mainline, the maintainers could pull in their own patches
at their own convenience. Unless of course they speak up now if they want
me to take them ;-)
-- Steve
* Re: [PATCH v2 01/19] tracepoint: Add trace_call__##name() API
2026-03-23 16:00 ` [PATCH v2 01/19] tracepoint: Add trace_call__##name() API Vineeth Pillai (Google)
@ 2026-03-26 1:28 ` Masami Hiramatsu
0 siblings, 0 replies; 8+ messages in thread
From: Masami Hiramatsu @ 2026-03-26 1:28 UTC (permalink / raw)
To: Vineeth Pillai (Google)
Cc: Steven Rostedt, Peter Zijlstra, Dmitry Ilvokhin, Masami Hiramatsu,
Mathieu Desnoyers, Ingo Molnar, Jens Axboe, io-uring,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Alexei Starovoitov, Daniel Borkmann, Marcelo Ricardo Leitner,
Xin Long, Jon Maloy, Aaron Conole, Eelco Chaudron, Ilya Maximets,
netdev, bpf, linux-sctp, tipc-discussion, dev, Jiri Pirko,
Oded Gabbay, Koby Elbaz, dri-devel, Rafael J. Wysocki,
Viresh Kumar, Gautham R. Shenoy, Huang Rui, Mario Limonciello,
Len Brown, Srinivas Pandruvada, linux-pm, MyungJoo Ham,
Kyungmin Park, Chanwoo Choi, Christian König, Sumit Semwal,
linaro-mm-sig, Eddie James, Andrew Jeffery, Joel Stanley,
linux-fsi, David Airlie, Simona Vetter, Alex Deucher,
Danilo Krummrich, Matthew Brost, Philipp Stanner, Harry Wentland,
Leo Li, amd-gfx, Jiri Kosina, Benjamin Tissoires, linux-input,
Wolfram Sang, linux-i2c, Mark Brown, Michael Hennerich,
Nuno Sá, linux-spi, James E.J. Bottomley, Martin K. Petersen,
linux-scsi, Chris Mason, David Sterba, linux-btrfs,
Thomas Gleixner, Andrew Morton, SeongJae Park, linux-mm,
Borislav Petkov, Dave Hansen, x86, linux-trace-kernel,
linux-kernel
On Mon, 23 Mar 2026 12:00:20 -0400
"Vineeth Pillai (Google)" <vineeth@bitbyteword.org> wrote:
> Add trace_call__##name() as a companion to trace_##name(). When a
> caller already guards a tracepoint with an explicit enabled check:
>
> if (trace_foo_enabled() && cond)
> trace_foo(args);
>
> trace_foo() internally repeats the static_branch_unlikely() test, which
> the compiler cannot fold since static branches are patched binary
> instructions. This results in two static-branch evaluations for every
> guarded call site.
>
> trace_call__##name() calls __do_trace_##name() directly, skipping the
> redundant static-branch re-check. This avoids leaking the internal
> __do_trace_##name() symbol into call sites while still eliminating the
> double evaluation:
>
> if (trace_foo_enabled() && cond)
> trace_invoke_foo(args); /* calls __do_trace_foo() directly */
nit: trace_call_foo() instead of trace_invoke_foo()?
Anyway looks good to me.
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>
> Three locations are updated:
> - __DECLARE_TRACE: invoke form omits static_branch_unlikely, retains
> the LOCKDEP RCU-watching assertion.
> - __DECLARE_TRACE_SYSCALL: same, plus retains might_fault().
> - !TRACEPOINTS_ENABLED stub: empty no-op so callers compile cleanly
> when tracepoints are compiled out.
>
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Vineeth Pillai (Google) <vineeth@bitbyteword.org>
> Assisted-by: Claude:claude-sonnet-4-6
> ---
> include/linux/tracepoint.h | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
> index 22ca1c8b54f32..ed969705341f1 100644
> --- a/include/linux/tracepoint.h
> +++ b/include/linux/tracepoint.h
> @@ -294,6 +294,10 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
> WARN_ONCE(!rcu_is_watching(), \
> "RCU not watching for tracepoint"); \
> } \
> + } \
> + static inline void trace_call__##name(proto) \
> + { \
> + __do_trace_##name(args); \
> }
>
> #define __DECLARE_TRACE_SYSCALL(name, proto, args, data_proto) \
> @@ -313,6 +317,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
> WARN_ONCE(!rcu_is_watching(), \
> "RCU not watching for tracepoint"); \
> } \
> + } \
> + static inline void trace_call__##name(proto) \
> + { \
> + might_fault(); \
> + __do_trace_##name(args); \
> }
>
> /*
> @@ -398,6 +407,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
> #define __DECLARE_TRACE_COMMON(name, proto, args, data_proto) \
> static inline void trace_##name(proto) \
> { } \
> + static inline void trace_call__##name(proto) \
> + { } \
> static inline int \
> register_trace_##name(void (*probe)(data_proto), \
> void *data) \
> --
> 2.53.0
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites
2026-03-23 16:00 ` [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites Vineeth Pillai (Google)
@ 2026-03-26 10:24 ` Rafael J. Wysocki
2026-03-27 9:10 ` Gautham R. Shenoy
1 sibling, 0 replies; 8+ messages in thread
From: Rafael J. Wysocki @ 2026-03-26 10:24 UTC (permalink / raw)
To: Vineeth Pillai (Google)
Cc: Steven Rostedt, Peter Zijlstra, Huang Rui, Gautham R. Shenoy,
Mario Limonciello, Perry Yuan, Rafael J. Wysocki, Viresh Kumar,
Srinivas Pandruvada, Len Brown, linux-pm, linux-kernel,
linux-trace-kernel
On Mon, Mar 23, 2026 at 5:01 PM Vineeth Pillai (Google)
<vineeth@bitbyteword.org> wrote:
>
> Replace trace_foo() with the new trace_call__foo() at sites already
> guarded by trace_foo_enabled(), avoiding a redundant
> static_branch_unlikely() re-evaluation inside the tracepoint.
> trace_call__foo() calls the tracepoint callbacks directly without
> utilizing the static branch again.
>
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Vineeth Pillai (Google) <vineeth@bitbyteword.org>
> Assisted-by: Claude:claude-sonnet-4-6
Acked-by: Rafael J. Wysocki (Intel) <rafael@kernel.org> # cpufreq core
& intel_pstate
> ---
> drivers/cpufreq/amd-pstate.c | 10 +++++-----
> drivers/cpufreq/cpufreq.c | 2 +-
> drivers/cpufreq/intel_pstate.c | 2 +-
> 3 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index 5aa9fcd80cf51..4c47324aa2f73 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -247,7 +247,7 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = READ_ONCE(cpudata->perf);
>
> - trace_amd_pstate_epp_perf(cpudata->cpu,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu,
> perf.highest_perf,
> epp,
> min_perf,
> @@ -298,7 +298,7 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = cpudata->perf;
>
> - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> epp,
> FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
> cpudata->cppc_req_cached),
> @@ -343,7 +343,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = cpudata->perf;
>
> - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> epp,
> FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
> cpudata->cppc_req_cached),
> @@ -507,7 +507,7 @@ static int shmem_update_perf(struct cpufreq_policy *policy, u8 min_perf,
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = READ_ONCE(cpudata->perf);
>
> - trace_amd_pstate_epp_perf(cpudata->cpu,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu,
> perf.highest_perf,
> epp,
> min_perf,
> @@ -588,7 +588,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
> }
>
> if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
> - trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
> + trace_call__amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
> cpudata->cur.mperf, cpudata->cur.aperf, cpudata->cur.tsc,
> cpudata->cpu, fast_switch);
> }
> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> index 277884d91913c..58901047eae5a 100644
> --- a/drivers/cpufreq/cpufreq.c
> +++ b/drivers/cpufreq/cpufreq.c
> @@ -2222,7 +2222,7 @@ unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
>
> if (trace_cpu_frequency_enabled()) {
> for_each_cpu(cpu, policy->cpus)
> - trace_cpu_frequency(freq, cpu);
> + trace_call__cpu_frequency(freq, cpu);
> }
>
> return freq;
> diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
> index 11c58af419006..70be952209144 100644
> --- a/drivers/cpufreq/intel_pstate.c
> +++ b/drivers/cpufreq/intel_pstate.c
> @@ -3132,7 +3132,7 @@ static void intel_cpufreq_trace(struct cpudata *cpu, unsigned int trace_type, in
> return;
>
> sample = &cpu->sample;
> - trace_pstate_sample(trace_type,
> + trace_call__pstate_sample(trace_type,
> 0,
> old_pstate,
> cpu->pstate.current_pstate,
> --
> 2.53.0
>
* Re: [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites
2026-03-23 16:00 ` [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites Vineeth Pillai (Google)
2026-03-26 10:24 ` Rafael J. Wysocki
@ 2026-03-27 9:10 ` Gautham R. Shenoy
1 sibling, 0 replies; 8+ messages in thread
From: Gautham R. Shenoy @ 2026-03-27 9:10 UTC (permalink / raw)
To: Vineeth Pillai (Google)
Cc: Steven Rostedt, Peter Zijlstra, Huang Rui, Mario Limonciello,
Perry Yuan, Rafael J. Wysocki, Viresh Kumar, Srinivas Pandruvada,
Len Brown, linux-pm, linux-kernel, linux-trace-kernel
Hello Vineeth,
On Mon, Mar 23, 2026 at 12:00:25PM -0400, Vineeth Pillai (Google) wrote:
> Replace trace_foo() with the new trace_call__foo() at sites already
> guarded by trace_foo_enabled(), avoiding a redundant
> static_branch_unlikely() re-evaluation inside the tracepoint.
> trace_call__foo() calls the tracepoint callbacks directly without
> utilizing the static branch again.
>
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Vineeth Pillai (Google) <vineeth@bitbyteword.org>
> Assisted-by: Claude:claude-sonnet-4-6
For drivers/cpufreq/amd-pstate.c and drivers/cpufreq/cpufreq.c
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@amd.com>
--
Thanks and Regards
gautham.
> ---
> drivers/cpufreq/amd-pstate.c | 10 +++++-----
> drivers/cpufreq/cpufreq.c | 2 +-
> drivers/cpufreq/intel_pstate.c | 2 +-
> 3 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index 5aa9fcd80cf51..4c47324aa2f73 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -247,7 +247,7 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = READ_ONCE(cpudata->perf);
>
> - trace_amd_pstate_epp_perf(cpudata->cpu,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu,
> perf.highest_perf,
> epp,
> min_perf,
> @@ -298,7 +298,7 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = cpudata->perf;
>
> - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> epp,
> FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
> cpudata->cppc_req_cached),
> @@ -343,7 +343,7 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = cpudata->perf;
>
> - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
> epp,
> FIELD_GET(AMD_CPPC_MIN_PERF_MASK,
> cpudata->cppc_req_cached),
> @@ -507,7 +507,7 @@ static int shmem_update_perf(struct cpufreq_policy *policy, u8 min_perf,
> if (trace_amd_pstate_epp_perf_enabled()) {
> union perf_cached perf = READ_ONCE(cpudata->perf);
>
> - trace_amd_pstate_epp_perf(cpudata->cpu,
> + trace_call__amd_pstate_epp_perf(cpudata->cpu,
> perf.highest_perf,
> epp,
> min_perf,
> @@ -588,7 +588,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf,
> }
>
> if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) {
> - trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
> + trace_call__amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq,
> cpudata->cur.mperf, cpudata->cur.aperf, cpudata->cur.tsc,
> cpudata->cpu, fast_switch);
> }
> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> index 277884d91913c..58901047eae5a 100644
> --- a/drivers/cpufreq/cpufreq.c
> +++ b/drivers/cpufreq/cpufreq.c
> @@ -2222,7 +2222,7 @@ unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
>
> if (trace_cpu_frequency_enabled()) {
> for_each_cpu(cpu, policy->cpus)
> - trace_cpu_frequency(freq, cpu);
> + trace_call__cpu_frequency(freq, cpu);
> }
>
> return freq;
> diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
> index 11c58af419006..70be952209144 100644
> --- a/drivers/cpufreq/intel_pstate.c
> +++ b/drivers/cpufreq/intel_pstate.c
> @@ -3132,7 +3132,7 @@ static void intel_cpufreq_trace(struct cpudata *cpu, unsigned int trace_type, in
> return;
>
> sample = &cpu->sample;
> - trace_pstate_sample(trace_type,
> + trace_call__pstate_sample(trace_type,
> 0,
> old_pstate,
> cpu->pstate.current_pstate,
> --
> 2.53.0
>
Thread overview: 8+ messages
2026-03-23 16:00 [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded call sites Vineeth Pillai (Google)
2026-03-23 16:00 ` [PATCH v2 01/19] tracepoint: Add trace_call__##name() API Vineeth Pillai (Google)
2026-03-26 1:28 ` Masami Hiramatsu
2026-03-23 16:00 ` [PATCH v2 06/19] cpufreq: Use trace_call__##name() at guarded tracepoint call sites Vineeth Pillai (Google)
2026-03-26 10:24 ` Rafael J. Wysocki
2026-03-27 9:10 ` Gautham R. Shenoy
2026-03-23 16:00 ` [PATCH v2 07/19] devfreq: " Vineeth Pillai (Google)
2026-03-24 14:28 ` [PATCH v2 00/19] tracepoint: Avoid double static_branch evaluation at guarded " Steven Rostedt