* [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() tracepoint overhead
2026-03-11 12:50 [PATCH v3 0/4] tracing/preemptirq: Optimize disabled tracepoint overhead Wander Lairson Costa
@ 2026-03-11 12:50 ` Wander Lairson Costa
2026-03-11 19:35 ` Peter Zijlstra
2026-03-11 12:50 ` [PATCH v3 2/4] trace/preemptirq: make TRACE_PREEMPT_TOGGLE user-selectable Wander Lairson Costa
` (2 subsequent siblings)
3 siblings, 1 reply; 16+ messages in thread
From: Wander Lairson Costa @ 2026-03-11 12:50 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Masami Hiramatsu, Mathieu Desnoyers,
Andrew Morton, open list:SCHEDULER, open list:TRACING
Cc: acme, williams, gmonaco, Wander Lairson Costa
When CONFIG_TRACE_PREEMPT_TOGGLE is enabled, preempt_count_add() and
preempt_count_sub() become external function calls (defined in
kernel/sched/core.c) rather than inlined operations. These functions
also perform preempt_count() checks and call trace_preempt_on/off()
unconditionally, even when no tracing consumer is active.
Reduce this overhead by splitting the #if logic in preempt.h into
three cases. When CONFIG_DEBUG_PREEMPT or CONFIG_PREEMPT_TRACER is
set, keep external function calls because DEBUG_PREEMPT needs runtime
validation checks, and PREEMPT_TRACER needs the preemptoff latency
tracer hooks (tracer_preempt_on/off, called via trace_preempt_on/off).
When CONFIG_TRACE_PREEMPT_TOGGLE alone is set, provide new inline
versions of preempt_count_add/sub() that check the tracepoint static
key via the __preempt_trace_enabled() macro before calling into the
tracing path. The macro evaluates to true when the preempt_enable or
preempt_disable tracepoint has subscribers AND the preempt count
equals val (indicating the first preempt disable or last preempt
enable), preserving the original preempt_latency_start/stop semantics.
When none of the above are set, use pure inline macros with no tracing
overhead.
The preempt_count_dec_and_test() macro is refactored out of the
three-way #if into a separate block shared by the first two cases,
since both need it to call the (potentially inline)
preempt_count_sub() before checking should_resched().
The inline path calls thin __trace_preempt_on/off() wrappers (added
in trace_preemptirq.c) that invoke trace_preempt_on/off(), keeping
the full tracepoint machinery out of the inline code.
The #include <linux/tracepoint-defs.h> is placed inside the
CONFIG_TRACE_PREEMPT_TOGGLE block rather than at the top of the file
to avoid a circular include dependency on architectures where
asm/irqflags.h includes linux/preempt.h (e.g. m68k):
preempt.h -> tracepoint-defs.h -> static_key.h -> jump_label.h ->
atomic.h -> irqflags.h -> asm/irqflags.h -> preempt.h (guarded)
If the include were at the top, this chain would be traversed before
hardirq_count() is defined (at line 110), causing a build failure on
m68k. Placing it inside the #elif block ensures it is only pulled in
when CONFIG_TRACE_PREEMPT_TOGGLE is enabled and avoids the cycle for
configurations that do not select it.
In core.c, narrow the compilation guard for the external
preempt_count_add/sub() from CONFIG_DEBUG_PREEMPT ||
CONFIG_TRACE_PREEMPT_TOGGLE to CONFIG_DEBUG_PREEMPT ||
CONFIG_PREEMPT_TRACER, since CONFIG_TRACE_PREEMPT_TOGGLE is now
handled inline.
Signed-off-by: Wander Lairson Costa <wander@redhat.com>
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
---
include/linux/preempt.h | 49 +++++++++++++++++++++++++++++++--
kernel/sched/core.c | 2 +-
kernel/trace/trace_preemptirq.c | 19 +++++++++++++
3 files changed, 66 insertions(+), 4 deletions(-)
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index d964f965c8ffc..f59a92f930d81 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -189,17 +189,60 @@ static __always_inline unsigned char interrupt_context_level(void)
*/
#define in_atomic_preempt_off() (preempt_count() != PREEMPT_DISABLE_OFFSET)
-#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_TRACE_PREEMPT_TOGGLE)
+#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_PREEMPT_TRACER)
extern void preempt_count_add(int val);
extern void preempt_count_sub(int val);
-#define preempt_count_dec_and_test() \
- ({ preempt_count_sub(1); should_resched(0); })
+#elif defined(CONFIG_TRACE_PREEMPT_TOGGLE)
+/*
+ * Avoid the circular dependency on architectures where asm/irqflags.h
+ * includes linux/preempt.h (e.g. m68k):
+ *
+ * preempt.h <--------------------+
+ * tracepoint-defs.h |
+ * static_key.h |
+ * jump_label.h |
+ * atomic.h |
+ * irqflags.h |
+ * asm/irqflags.h |
+ * preempt.h --------------+
+ */
+#include <linux/tracepoint-defs.h>
+
+extern void __trace_preempt_on(void);
+extern void __trace_preempt_off(void);
+
+DECLARE_TRACEPOINT(preempt_enable);
+DECLARE_TRACEPOINT(preempt_disable);
+
+#define __preempt_trace_enabled(type, val) \
+ (tracepoint_enabled(preempt_##type) && preempt_count() == (val))
+
+static __always_inline void preempt_count_add(int val)
+{
+ __preempt_count_add(val);
+
+ if (__preempt_trace_enabled(disable, val))
+ __trace_preempt_off();
+}
+
+static __always_inline void preempt_count_sub(int val)
+{
+ if (__preempt_trace_enabled(enable, val))
+ __trace_preempt_on();
+
+ __preempt_count_sub(val);
+}
#else
#define preempt_count_add(val) __preempt_count_add(val)
#define preempt_count_sub(val) __preempt_count_sub(val)
#define preempt_count_dec_and_test() __preempt_count_dec_and_test()
#endif
+#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_TRACE_PREEMPT_TOGGLE)
+#define preempt_count_dec_and_test() \
+ ({ preempt_count_sub(1); should_resched(0); })
+#endif
+
#define __preempt_count_inc() __preempt_count_add(1)
#define __preempt_count_dec() __preempt_count_sub(1)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7f77c165a6e0..125e5d71d1bd3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5733,7 +5733,7 @@ static inline void sched_tick_stop(int cpu) { }
#endif /* !CONFIG_NO_HZ_FULL */
#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
- defined(CONFIG_TRACE_PREEMPT_TOGGLE))
+ defined(CONFIG_PREEMPT_TRACER))
/*
* If the value passed in is equal to the current preempt count
* then we just disabled preemption. Start timing the latency.
diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
index 0c42b15c38004..9f098fcb28012 100644
--- a/kernel/trace/trace_preemptirq.c
+++ b/kernel/trace/trace_preemptirq.c
@@ -115,6 +115,25 @@ NOKPROBE_SYMBOL(trace_hardirqs_off);
#ifdef CONFIG_TRACE_PREEMPT_TOGGLE
+#if !defined(CONFIG_DEBUG_PREEMPT) && !defined(CONFIG_PREEMPT_TRACER)
+EXPORT_TRACEPOINT_SYMBOL(preempt_disable);
+EXPORT_TRACEPOINT_SYMBOL(preempt_enable);
+
+void __trace_preempt_on(void)
+{
+ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+}
+EXPORT_SYMBOL(__trace_preempt_on);
+NOKPROBE_SYMBOL(__trace_preempt_on);
+
+void __trace_preempt_off(void)
+{
+ trace_preempt_off(CALLER_ADDR0, get_lock_parent_ip());
+}
+EXPORT_SYMBOL(__trace_preempt_off);
+NOKPROBE_SYMBOL(__trace_preempt_off);
+#endif /* !CONFIG_DEBUG_PREEMPT && !CONFIG_PREEMPT_TRACER */
+
void trace_preempt_on(unsigned long a0, unsigned long a1)
{
trace(preempt_enable, TP_ARGS(a0, a1));
--
2.53.0
^ permalink raw reply related [flat|nested] 16+ messages in thread

* Re: [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() tracepoint overhead
2026-03-11 12:50 ` [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() " Wander Lairson Costa
@ 2026-03-11 19:35 ` Peter Zijlstra
2026-03-12 17:19 ` Wander Lairson Costa
0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2026-03-11 19:35 UTC (permalink / raw)
To: Wander Lairson Costa
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton,
open list:SCHEDULER, open list:TRACING, acme, williams, gmonaco
On Wed, Mar 11, 2026 at 09:50:15AM -0300, Wander Lairson Costa wrote:
> +extern void __trace_preempt_on(void);
> +extern void __trace_preempt_off(void);
> +
> +DECLARE_TRACEPOINT(preempt_enable);
> +DECLARE_TRACEPOINT(preempt_disable);
> +
> +#define __preempt_trace_enabled(type, val) \
> + (tracepoint_enabled(preempt_##type) && preempt_count() == (val))
> +
> +static __always_inline void preempt_count_add(int val)
> +{
> + __preempt_count_add(val);
> +
> + if (__preempt_trace_enabled(disable, val))
> + __trace_preempt_off();
> +}
> +
> +static __always_inline void preempt_count_sub(int val)
> +{
> + if (__preempt_trace_enabled(enable, val))
> + __trace_preempt_on();
> +
> + __preempt_count_sub(val);
> +}
> #else
> #define preempt_count_add(val) __preempt_count_add(val)
> #define preempt_count_sub(val) __preempt_count_sub(val)
> #define preempt_count_dec_and_test() __preempt_count_dec_and_test()
> #endif
>
> +#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_TRACE_PREEMPT_TOGGLE)
> +#define preempt_count_dec_and_test() \
> + ({ preempt_count_sub(1); should_resched(0); })
> +#endif
Why!?!
Why can't you simply have:
static __always_inline bool preempt_count_dec_and_test(void)
{
if (__preempt_trace_enabled(enable, 1))
__trace_preempt_on();
return __preempt_count_dec_and_test();
}
Also, given how !x86 architectures were just complaining about how
terrible their preempt_enable() is, I'm really not liking this much at
all.
Currently the x86 preempt_disable() is _1_ instruction and
preempt_enable() is all of 3. Adding in these tracepoints will bloat
every single such site by at least another 4-5.
That's significant bloat, for really very little gain. Realistically
nobody is going to need these.
* Re: [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() tracepoint overhead
2026-03-11 19:35 ` Peter Zijlstra
@ 2026-03-12 17:19 ` Wander Lairson Costa
2026-03-13 9:04 ` Peter Zijlstra
0 siblings, 1 reply; 16+ messages in thread
From: Wander Lairson Costa @ 2026-03-12 17:19 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton,
open list:SCHEDULER, open list:TRACING, acme, williams, gmonaco
On Wed, Mar 11, 2026 at 08:35:03PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 11, 2026 at 09:50:15AM -0300, Wander Lairson Costa wrote:
>
> > +extern void __trace_preempt_on(void);
> > +extern void __trace_preempt_off(void);
> > +
> > +DECLARE_TRACEPOINT(preempt_enable);
> > +DECLARE_TRACEPOINT(preempt_disable);
> > +
> > +#define __preempt_trace_enabled(type, val) \
> > + (tracepoint_enabled(preempt_##type) && preempt_count() == (val))
> > +
> > +static __always_inline void preempt_count_add(int val)
> > +{
> > + __preempt_count_add(val);
> > +
> > + if (__preempt_trace_enabled(disable, val))
> > + __trace_preempt_off();
> > +}
> > +
> > +static __always_inline void preempt_count_sub(int val)
> > +{
> > + if (__preempt_trace_enabled(enable, val))
> > + __trace_preempt_on();
> > +
> > + __preempt_count_sub(val);
> > +}
> > #else
> > #define preempt_count_add(val) __preempt_count_add(val)
> > #define preempt_count_sub(val) __preempt_count_sub(val)
> > #define preempt_count_dec_and_test() __preempt_count_dec_and_test()
> > #endif
> >
> > +#if defined(CONFIG_DEBUG_PREEMPT) || defined(CONFIG_TRACE_PREEMPT_TOGGLE)
> > +#define preempt_count_dec_and_test() \
> > + ({ preempt_count_sub(1); should_resched(0); })
> > +#endif
>
> Why!?!
>
> Why can't you simply have:
>
> static __always_inline bool preempt_count_dec_and_test(void)
> {
> if (__preempt_trace_enabled(enable, 1))
> __trace_preempt_on();
>
> return __preempt_count_dec_and_test();
> }
Indeed it improved the generated code. Thanks a lot.
>
> Also, given how !x86 architectures were just complaining about how
> terrible their preempt_enable() is, I'm really not liking this much at
> all.
>
I will look deeper into the arm64 and riscv generated code. If there are
other specific architectures that concern you, please let me know.
> Currently the x86 preempt_disable() is _1_ instruction and
> preempt_enable() is all of 3. Adding in these tracepoints will bloat
> every single such site by at least another 4-5.
>
Yes, there is a tradeoff. As I said before, I am looking at the hot path
(tracepoint not activated). Furthermore, when CONFIG_TRACE_PREEMPT_TOGGLE
and CONFIG_TRACE_IRQFLAGS_TOGGLE are off (the default config), the
generated code is byte-for-byte identical to the stock kernel's.
> That's significant bloat, for really very little gain. Realistically
> nobody is going to need these.
>
Of course, I can't speak for others, but more than once I have debugged
issues where those tracepoints made my life far easier. Those cases
convinced me that such a feature would be worth it. But if you don't see
value in it and will reject the patches no matter what, nothing can be
done, and I will have to accept defeat.
* Re: [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() tracepoint overhead
2026-03-12 17:19 ` Wander Lairson Costa
@ 2026-03-13 9:04 ` Peter Zijlstra
2026-03-13 15:36 ` Wander Lairson Costa
0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2026-03-13 9:04 UTC (permalink / raw)
To: Wander Lairson Costa
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton,
open list:SCHEDULER, open list:TRACING, acme, williams, gmonaco
On Thu, Mar 12, 2026 at 02:19:15PM -0300, Wander Lairson Costa wrote:
> > That's significant bloat, for really very little gain. Realistically
> > nobody is going to need these.
> >
>
> Of course, I can't speak for others, but more than once I debugged issues
> that those tracepoints had made my life far easier. Those cases convinced
> me that such a feature would be worth it. But if you don't see
> value and will reject the patches no matter what, nothing can be done,
> and I will have to accept defeat.
If distros are going to enable this, I suppose I'm not going to stop
this. But I do very much worry about the general bloat of things, there
are a *LOT* of preempt_{dis,en}able() sites.
* Re: [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() tracepoint overhead
2026-03-13 9:04 ` Peter Zijlstra
@ 2026-03-13 15:36 ` Wander Lairson Costa
0 siblings, 0 replies; 16+ messages in thread
From: Wander Lairson Costa @ 2026-03-13 15:36 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton,
open list:SCHEDULER, open list:TRACING, acme, williams, gmonaco
On Fri, Mar 13, 2026 at 10:04:04AM +0100, Peter Zijlstra wrote:
> On Thu, Mar 12, 2026 at 02:19:15PM -0300, Wander Lairson Costa wrote:
>
> > > That's significant bloat, for really very little gain. Realistically
> > > nobody is going to need these.
> > >
> >
> > Of course, I can't speak for others, but more than once I debugged issues
> > that those tracepoints had made my life far easier. Those cases convinced
> > me that such a feature would be worth it. But if you don't see
> > value and will reject the patches no matter what, nothing can be done,
> > and I will have to accept defeat.
>
> If distros are going to enable this, I suppose I'm not going to stop
> this. But I do very much worry about the general bloat of things, there
> are a *LOT* of preempt_{dis,en}able() sites.
>
We plan to enable these tracepoints in the RHEL kernel-rt to track
extended non-preemptible states that cause high latencies. These
issues occasionally surface in customer OpenShift deployments, where
deploying a custom debug kernel is highly impractical. Having these
tracepoints available in the distribution kernel would be handful for
debugging these production systems. That said, I expect enabling this
feature to be the exception rather than the rule — most distribution
kernels would leave it disabled.
* [PATCH v3 2/4] trace/preemptirq: make TRACE_PREEMPT_TOGGLE user-selectable
2026-03-11 12:50 [PATCH v3 0/4] tracing/preemptirq: Optimize disabled tracepoint overhead Wander Lairson Costa
2026-03-11 12:50 ` [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() " Wander Lairson Costa
@ 2026-03-11 12:50 ` Wander Lairson Costa
2026-03-11 12:50 ` [PATCH v3 3/4] trace/preemptirq: add TRACE_IRQFLAGS_TOGGLE Wander Lairson Costa
2026-03-11 12:50 ` [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks Wander Lairson Costa
3 siblings, 0 replies; 16+ messages in thread
From: Wander Lairson Costa @ 2026-03-11 12:50 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Masami Hiramatsu, Mathieu Desnoyers,
Andrew Morton, open list:SCHEDULER, open list:TRACING
Cc: acme, williams, gmonaco, Wander Lairson Costa
Make TRACE_PREEMPT_TOGGLE directly selectable so that
preempt_enable/preempt_disable tracepoints can be enabled in
production kernels without requiring the preemptoff latency tracer
overhead.
Signed-off-by: Wander Lairson Costa <wander@redhat.com>
---
kernel/trace/Kconfig | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 49de13cae4288..e007459ecf361 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -413,10 +413,10 @@ config STACK_TRACER
Say N if unsure.
config TRACE_PREEMPT_TOGGLE
- bool
+ bool "Preempt disable/enable tracing hooks"
help
- Enables hooks which will be called when preemption is first disabled,
- and last enabled.
+ Enables hooks into preemption disable and enable paths for
+ tracing or latency measurement.
config IRQSOFF_TRACER
bool "Interrupts-off Latency Tracer"
--
2.53.0
* [PATCH v3 3/4] trace/preemptirq: add TRACE_IRQFLAGS_TOGGLE
2026-03-11 12:50 [PATCH v3 0/4] tracing/preemptirq: Optimize disabled tracepoint overhead Wander Lairson Costa
2026-03-11 12:50 ` [PATCH v3 1/4] tracing/preemptirq: Optimize preempt_disable/enable() " Wander Lairson Costa
2026-03-11 12:50 ` [PATCH v3 2/4] trace/preemptirq: make TRACE_PREEMPT_TOGGLE user-selectable Wander Lairson Costa
@ 2026-03-11 12:50 ` Wander Lairson Costa
2026-03-11 12:50 ` [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks Wander Lairson Costa
3 siblings, 0 replies; 16+ messages in thread
From: Wander Lairson Costa @ 2026-03-11 12:50 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Masami Hiramatsu, Mathieu Desnoyers,
Andrew Morton, open list:SCHEDULER, open list:TRACING
Cc: acme, williams, gmonaco, Wander Lairson Costa
The IRQ disable/enable tracepoints are currently gated behind
TRACE_IRQFLAGS, a hidden config that cannot be selected directly by
users. It is only enabled when selected by PROVE_LOCKING or
IRQSOFF_TRACER, both of which carry overhead beyond what is needed
for just the tracepoints.
Introduce TRACE_IRQFLAGS_TOGGLE, a user-selectable config that enables
the irq_disable and irq_enable tracepoints independently. It is
mutually exclusive with TRACE_IRQFLAGS and mirrors how
TRACE_PREEMPT_TOGGLE works for preemption tracepoints.
Make this option depend on CONFIG_JUMP_LABEL to avoid a circular header
dependency. Without TRACE_IRQFLAGS, irqflags.h must check the static
key before invoking the tracepoint. Using tracepoint_enabled() for this
check pulls in tracepoint_defs.h, which eventually includes atomic.h
and cmpxchg.h, circling back to irqflags.h. Enforcing CONFIG_JUMP_LABEL
allows the use of static_key_false() directly, avoiding the inclusion
of the full tracepoint header chain and preventing the cycle.
Signed-off-by: Wander Lairson Costa <wander@redhat.com>
---
kernel/trace/Kconfig | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index e007459ecf361..8bea77b5f1200 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -162,7 +162,7 @@ config RING_BUFFER_ALLOW_SWAP
config PREEMPTIRQ_TRACEPOINTS
bool
- depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
+ depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS || TRACE_IRQFLAGS_TOGGLE
select TRACING
default y
help
@@ -418,6 +418,17 @@ config TRACE_PREEMPT_TOGGLE
Enables hooks into preemption disable and enable paths for
tracing or latency measurement.
+config TRACE_IRQFLAGS_TOGGLE
+ bool "IRQ disable/enable tracing hooks"
+ default n
+ depends on TRACE_IRQFLAGS_SUPPORT && JUMP_LABEL && !TRACE_IRQFLAGS
+ help
+ Enables hooks into IRQ disable and enable paths for tracing.
+
+ This provides the irq_disable and irq_enable tracepoints
+ without pulling in the full TRACE_IRQFLAGS infrastructure
+ used by lockdep and the irqsoff latency tracer.
+
config IRQSOFF_TRACER
bool "Interrupts-off Latency Tracer"
default n
--
2.53.0
* [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 12:50 [PATCH v3 0/4] tracing/preemptirq: Optimize disabled tracepoint overhead Wander Lairson Costa
` (2 preceding siblings ...)
2026-03-11 12:50 ` [PATCH v3 3/4] trace/preemptirq: add TRACE_IRQFLAGS_TOGGLE Wander Lairson Costa
@ 2026-03-11 12:50 ` Wander Lairson Costa
2026-03-11 19:43 ` Peter Zijlstra
3 siblings, 1 reply; 16+ messages in thread
From: Wander Lairson Costa @ 2026-03-11 12:50 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Masami Hiramatsu, Mathieu Desnoyers,
Andrew Morton, Wander Lairson Costa, open list, open list:TRACING
Cc: acme, williams, gmonaco
The previous commit introduced the CONFIG_TRACE_IRQFLAGS_TOGGLE symbol.
This patch implements the actual infrastructure to allow tracing
irq_disable and irq_enable events without pulling in the heavy
CONFIG_TRACE_IRQFLAGS dependencies like lockdep or the irqsoff tracer.
The implementation hooks into the local_irq_* macros in irqflags.h.
Instead of using the heavy trace_hardirqs_on/off calls, it uses
lightweight tracepoint_enabled() checks. If the tracepoint is enabled,
it calls into specific wrapper functions in trace_preemptirq.c.
These wrappers check the raw hardware state via raw_irqs_disabled() to
filter out redundant events, such as disabling interrupts when they
are already disabled. This approach is simpler than the full
TRACE_IRQFLAGS method, which requires maintaining a per-CPU software
state variable.
To support this, the tracepoint definitions are exposed under the new
configuration. Additionally, a circular header dependency involving
irqflags.h, tracepoint-defs.h, and atomic.h is resolved by moving the
atomic.h inclusion to tracepoint.h, allowing irqflags.h to include
tracepoint-defs.h safely.
Signed-off-by: Wander Lairson Costa <wander@redhat.com>
---
include/linux/irqflags.h | 62 ++++++++++++++++++++++++++++++-
include/linux/tracepoint-defs.h | 1 -
include/linux/tracepoint.h | 1 +
include/trace/events/preemptirq.h | 2 +-
kernel/trace/trace_preemptirq.c | 49 ++++++++++++++++++++++++
5 files changed, 112 insertions(+), 3 deletions(-)
diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index 57b074e0cfbbb..f40557bebd325 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -18,6 +18,19 @@
#include <asm/irqflags.h>
#include <asm/percpu.h>
+/*
+ * Avoid the circular dependency
+ * irqflags.h <-----------------+
+ * tracepoint-defs.h |
+ * static_key.h |
+ * jump_label.h |
+ * atomic.h |
+ * cmpxchg.h ---------+
+ */
+#ifdef CONFIG_TRACE_IRQFLAGS_TOGGLE
+#include <linux/tracepoint-defs.h>
+#endif
+
struct task_struct;
/* Currently lockdep_softirqs_on/off is used only by lockdep */
@@ -232,7 +245,54 @@ extern void warn_bogus_irq_restore(void);
} while (0)
-#else /* !CONFIG_TRACE_IRQFLAGS */
+#elif defined(CONFIG_TRACE_IRQFLAGS_TOGGLE) /* !CONFIG_TRACE_IRQFLAGS */
+
+DECLARE_TRACEPOINT(irq_enable);
+DECLARE_TRACEPOINT(irq_disable);
+
+void trace_local_irq_enable(void);
+void trace_local_irq_disable(void);
+void trace_local_irq_save(unsigned long flags);
+void trace_local_irq_restore(unsigned long flags);
+void trace_safe_halt(void);
+
+#define local_irq_enable() \
+ do { \
+ if (tracepoint_enabled(irq_enable)) \
+ trace_local_irq_enable(); \
+ raw_local_irq_enable(); \
+ } while (0)
+
+#define local_irq_disable() \
+ do { \
+ if (tracepoint_enabled(irq_disable)) \
+ trace_local_irq_disable(); \
+ else \
+ raw_local_irq_disable(); \
+ } while (0)
+
+#define local_irq_save(flags) \
+ do { \
+ raw_local_irq_save(flags); \
+ if (tracepoint_enabled(irq_disable)) \
+ trace_local_irq_save(flags); \
+ } while (0)
+
+#define local_irq_restore(flags) \
+ do { \
+ if (tracepoint_enabled(irq_enable)) \
+ trace_local_irq_restore(flags); \
+ raw_local_irq_restore(flags); \
+ } while (0)
+
+#define safe_halt() \
+ do { \
+ if (tracepoint_enabled(irq_enable)) \
+ trace_safe_halt(); \
+ raw_safe_halt(); \
+ } while (0)
+
+#else /* !CONFIG_TRACE_IRQFLAGS_TOGGLE */
#define local_irq_enable() do { raw_local_irq_enable(); } while (0)
#define local_irq_disable() do { raw_local_irq_disable(); } while (0)
diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
index aebf0571c736e..cb1f15a4e43f0 100644
--- a/include/linux/tracepoint-defs.h
+++ b/include/linux/tracepoint-defs.h
@@ -8,7 +8,6 @@
* trace_print_flags{_u64}. Otherwise linux/tracepoint.h should be used.
*/
-#include <linux/atomic.h>
#include <linux/static_key.h>
struct static_call_key;
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 22ca1c8b54f32..e7d8c5ca00c79 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -20,6 +20,7 @@
#include <linux/rcupdate_trace.h>
#include <linux/tracepoint-defs.h>
#include <linux/static_call.h>
+#include <linux/atomic.h>
struct module;
struct tracepoint;
diff --git a/include/trace/events/preemptirq.h b/include/trace/events/preemptirq.h
index f99562d2b496b..a607a6f4e29ca 100644
--- a/include/trace/events/preemptirq.h
+++ b/include/trace/events/preemptirq.h
@@ -32,7 +32,7 @@ DECLARE_EVENT_CLASS(preemptirq_template,
(void *)((unsigned long)(_stext) + __entry->parent_offs))
);
-#ifdef CONFIG_TRACE_IRQFLAGS
+#if defined(CONFIG_TRACE_IRQFLAGS) || defined(CONFIG_TRACE_IRQFLAGS_TOGGLE)
DEFINE_EVENT(preemptirq_template, irq_disable,
TP_PROTO(unsigned long ip, unsigned long parent_ip),
TP_ARGS(ip, parent_ip));
diff --git a/kernel/trace/trace_preemptirq.c b/kernel/trace/trace_preemptirq.c
index 9f098fcb28012..0f32da96d2f01 100644
--- a/kernel/trace/trace_preemptirq.c
+++ b/kernel/trace/trace_preemptirq.c
@@ -111,8 +111,57 @@ void trace_hardirqs_off(void)
}
EXPORT_SYMBOL(trace_hardirqs_off);
NOKPROBE_SYMBOL(trace_hardirqs_off);
+
#endif /* CONFIG_TRACE_IRQFLAGS */
+#ifdef CONFIG_TRACE_IRQFLAGS_TOGGLE
+EXPORT_TRACEPOINT_SYMBOL(irq_disable);
+EXPORT_TRACEPOINT_SYMBOL(irq_enable);
+
+void trace_local_irq_enable(void)
+{
+ if (raw_irqs_disabled())
+ trace(irq_enable, TP_ARGS(CALLER_ADDR0, CALLER_ADDR1));
+}
+EXPORT_SYMBOL(trace_local_irq_enable);
+NOKPROBE_SYMBOL(trace_local_irq_enable);
+
+void trace_local_irq_disable(void)
+{
+ const bool was_disabled = raw_irqs_disabled();
+
+ raw_local_irq_disable();
+ if (!was_disabled)
+ trace(irq_disable, TP_ARGS(CALLER_ADDR0, CALLER_ADDR1));
+}
+EXPORT_SYMBOL(trace_local_irq_disable);
+NOKPROBE_SYMBOL(trace_local_irq_disable);
+
+void trace_local_irq_save(unsigned long flags)
+{
+ if (!raw_irqs_disabled_flags(flags))
+ trace(irq_disable, TP_ARGS(CALLER_ADDR0, CALLER_ADDR1));
+}
+EXPORT_SYMBOL(trace_local_irq_save);
+NOKPROBE_SYMBOL(trace_local_irq_save);
+
+void trace_local_irq_restore(unsigned long flags)
+{
+ if (!raw_irqs_disabled_flags(flags) && raw_irqs_disabled())
+ trace(irq_enable, TP_ARGS(CALLER_ADDR0, CALLER_ADDR1));
+}
+EXPORT_SYMBOL(trace_local_irq_restore);
+NOKPROBE_SYMBOL(trace_local_irq_restore);
+
+void trace_safe_halt(void)
+{
+ if (raw_irqs_disabled())
+ trace(irq_enable, TP_ARGS(CALLER_ADDR0, CALLER_ADDR1));
+}
+EXPORT_SYMBOL(trace_safe_halt);
+NOKPROBE_SYMBOL(trace_safe_halt);
+#endif /* CONFIG_TRACE_IRQFLAGS_TOGGLE */
+
#ifdef CONFIG_TRACE_PREEMPT_TOGGLE
#if !defined(CONFIG_DEBUG_PREEMPT) && !defined(CONFIG_PREEMPT_TRACER)
--
2.53.0
* Re: [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 12:50 ` [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks Wander Lairson Costa
@ 2026-03-11 19:43 ` Peter Zijlstra
2026-03-11 19:48 ` Steven Rostedt
2026-03-12 17:09 ` Wander Lairson Costa
0 siblings, 2 replies; 16+ messages in thread
From: Peter Zijlstra @ 2026-03-11 19:43 UTC (permalink / raw)
To: Wander Lairson Costa
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton, open list,
open list:TRACING, acme, williams, gmonaco
On Wed, Mar 11, 2026 at 09:50:18AM -0300, Wander Lairson Costa wrote:
> +#define local_irq_enable() \
> + do { \
> + if (tracepoint_enabled(irq_enable)) \
> + trace_local_irq_enable(); \
I'm thinking you didn't even look at the assembly generated :/
Otherwise you would have written this like:
if (tracepoint_enabled(irq_enable))
__do_trace_local_irq_enable();
> + raw_local_irq_enable(); \
> + } while (0)
Again, this was one instruction, and you clearly didn't bother looking
at the mess you've generated :/
* Re: [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 19:43 ` Peter Zijlstra
@ 2026-03-11 19:48 ` Steven Rostedt
2026-03-11 19:53 ` Peter Zijlstra
2026-03-12 17:09 ` Wander Lairson Costa
1 sibling, 1 reply; 16+ messages in thread
From: Steven Rostedt @ 2026-03-11 19:48 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Wander Lairson Costa, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton, open list,
open list:TRACING, acme, williams, gmonaco, Vineeth Pillai
On Wed, 11 Mar 2026 20:43:05 +0100
Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, Mar 11, 2026 at 09:50:18AM -0300, Wander Lairson Costa wrote:
> > +#define local_irq_enable() \
> > + do { \
> > + if (tracepoint_enabled(irq_enable)) \
> > + trace_local_irq_enable(); \
>
> I'm thinking you didn't even look at the assembly generated :/
>
> Otherwise you would have written this like:
>
> if (tracepoint_enabled(irq_enable))
> __do_trace_local_irq_enable();
Please don't use the internal functions outside of the tracepoint.h
Vineeth is currently working on a patch set to properly do that. It's going
to introduce:
trace_invoke_<event>()
Which basically is just __do_trace_<event>(), but as a wrapper that can
handle updates that may be needed, but supplies a proper API where things
won't randomly break when __do_trace_<event>() changes.
-- Steve
>
> > + raw_local_irq_enable(); \
> > + } while (0)
>
> Again, this was one instruction, and you clearly didn't bother looking
> at the mess you've generated :/
* Re: [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 19:48 ` Steven Rostedt
@ 2026-03-11 19:53 ` Peter Zijlstra
2026-03-11 20:07 ` Steven Rostedt
0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2026-03-11 19:53 UTC (permalink / raw)
To: Steven Rostedt
Cc: Wander Lairson Costa, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton, open list,
open list:TRACING, acme, williams, gmonaco, Vineeth Pillai
On Wed, Mar 11, 2026 at 03:48:42PM -0400, Steven Rostedt wrote:
> On Wed, 11 Mar 2026 20:43:05 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > On Wed, Mar 11, 2026 at 09:50:18AM -0300, Wander Lairson Costa wrote:
> > > +#define local_irq_enable() \
> > > + do { \
> > > + if (tracepoint_enabled(irq_enable)) \
> > > + trace_local_irq_enable(); \
> >
> > I'm thinking you didn't even look at the assembly generated :/
> >
> > Otherwise you would have written this like:
> >
> > if (tracepoint_enabled(irq_enable))
> > __do_trace_local_irq_enable();
>
> Please don't use the internal functions outside of the tracepoint.h
>
> Vineeth is currently working on a patch set to properly do that. It's going
> to introduce:
>
> trace_invoke_<event>()
>
> Which basically is just __do_trace_<event>(), but as a wrapper that can
> handle updates that may be needed, but supplies a proper API where thing
> wont randomly break when __do_trace_<event>() changes.
That's like a 3 line patch, hardly worth the effort. It's not like it'll
be hard to find and fix any users if you ever do change that.
* Re: [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 19:53 ` Peter Zijlstra
@ 2026-03-11 20:07 ` Steven Rostedt
2026-03-11 20:46 ` Peter Zijlstra
0 siblings, 1 reply; 16+ messages in thread
From: Steven Rostedt @ 2026-03-11 20:07 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Wander Lairson Costa, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton, open list,
open list:TRACING, acme, williams, gmonaco, Vineeth Pillai
On Wed, 11 Mar 2026 20:53:10 +0100
Peter Zijlstra <peterz@infradead.org> wrote:
> > Which basically is just __do_trace_<event>(), but as a wrapper that can
> > handle updates that may be needed, but supplies a proper API where thing
> > wont randomly break when __do_trace_<event>() changes.
>
> That's like a 3 line patch, hardly worth the effort. Its not like it'll
> be hard to find and fix any users if you do ever change that.
No, but I prefer clean code, and not hacks that use internal functions with
underscores in their names. Not to mention, it properly handles different
cases:
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 22ca1c8b54f3..07219316a8e1 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -294,6 +294,10 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
} \
+ } \
+ static inline void trace_invoke_##name(proto) \
+ { \
+ __do_trace_##name(args); \
}
#define __DECLARE_TRACE_SYSCALL(name, proto, args, data_proto) \
@@ -313,6 +317,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
WARN_ONCE(!rcu_is_watching(), \
"RCU not watching for tracepoint"); \
} \
+ } \
+ static inline void trace_invoke_##name(proto) \
+ { \
+ might_fault(); \
+ __do_trace_##name(args); \
}
Then it goes through and updates every location that has a:
if (trace_<event>_enabled()) {
[..]
trace_<event>();
}
With the new proper API.
-- Steve
* Re: [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 20:07 ` Steven Rostedt
@ 2026-03-11 20:46 ` Peter Zijlstra
2026-03-11 23:16 ` Steven Rostedt
0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2026-03-11 20:46 UTC (permalink / raw)
To: Steven Rostedt
Cc: Wander Lairson Costa, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton, open list,
open list:TRACING, acme, williams, gmonaco, Vineeth Pillai
On Wed, Mar 11, 2026 at 04:07:14PM -0400, Steven Rostedt wrote:
> On Wed, 11 Mar 2026 20:53:10 +0100
> Peter Zijlstra <peterz@infradead.org> wrote:
>
> > > Which basically is just __do_trace_<event>(), but as a wrapper that can
> > > handle updates that may be needed, but supplies a proper API where thing
> > > wont randomly break when __do_trace_<event>() changes.
> >
> > That's like a 3 line patch, hardly worth the effort. Its not like it'll
> > be hard to find and fix any users if you do ever change that.
>
> No, but I prefer clean code, and not hacks that use internal functions with
> underscores in their names. Not to mention, it properly handles different
> cases:
>
> diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
> index 22ca1c8b54f3..07219316a8e1 100644
> --- a/include/linux/tracepoint.h
> +++ b/include/linux/tracepoint.h
> @@ -294,6 +294,10 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
> WARN_ONCE(!rcu_is_watching(), \
> "RCU not watching for tracepoint"); \
> } \
> + } \
> + static inline void trace_invoke_##name(proto) \
> + { \
> + __do_trace_##name(args); \
> }
>
> #define __DECLARE_TRACE_SYSCALL(name, proto, args, data_proto) \
> @@ -313,6 +317,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
> WARN_ONCE(!rcu_is_watching(), \
> "RCU not watching for tracepoint"); \
> } \
> + } \
> + static inline void trace_invoke_##name(proto) \
> + { \
> + might_fault(); \
> + __do_trace_##name(args); \
> }
>
>
> Then it goes through and updates every location that has a:
>
> if (trace_<event>_enabled()) {
> [..]
> trace_<event>();
> }
We have Coccinelle for that :-), and while I absolutely suck at writing
Coccinelle, I had some limited success using Gemini to write some for me
the other day.
* Re: [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 20:46 ` Peter Zijlstra
@ 2026-03-11 23:16 ` Steven Rostedt
0 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-03-11 23:16 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Wander Lairson Costa, Ingo Molnar, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton, open list,
open list:TRACING, acme, williams, gmonaco, Vineeth Pillai
On Wed, 11 Mar 2026 21:46:07 +0100
Peter Zijlstra <peterz@infradead.org> wrote:
> > Then it goes through and updates every location that has a:
> >
> > if (trace_<event>_enabled()) {
> > [..]
> > trace_<event>();
> > }
>
> We have Coccinelle for that :-), and while I absolutely suck at writing
> Coccinelle, I had some limited success using Gemini to write some for me
> the other day.
Heh, I believe Vineeth used Claude ;-)
-- Steve
* Re: [PATCH v3 4/4] trace/preemptirq: Implement trace_irqflags hooks
2026-03-11 19:43 ` Peter Zijlstra
2026-03-11 19:48 ` Steven Rostedt
@ 2026-03-12 17:09 ` Wander Lairson Costa
1 sibling, 0 replies; 16+ messages in thread
From: Wander Lairson Costa @ 2026-03-12 17:09 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Masami Hiramatsu, Mathieu Desnoyers, Andrew Morton, open list,
open list:TRACING, acme, williams, gmonaco
On Wed, Mar 11, 2026 at 08:43:05PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 11, 2026 at 09:50:18AM -0300, Wander Lairson Costa wrote:
> > +#define local_irq_enable() \
> > + do { \
> > + if (tracepoint_enabled(irq_enable)) \
> > + trace_local_irq_enable(); \
>
> I'm thinking you didn't even look at the assembly generated :/
Yes, I did. But I was more concerned with the hot path, making sure it
only adds the NOP instruction generated by the static_key machinery.
>
> Otherwise you would have written this like:
>
> if (tracepoint_enabled(irq_enable))
> __do_trace_local_irq_enable();
>
> > + raw_local_irq_enable(); \
> > + } while (0)
>
Steve already objected to using the internal functions. When the trace
invoke public API becomes available, we can always come back and replace
the call.
> Again, this was one instruction, and you clearly didn't bother looking
> at the mess you've generated :/
>
I might have missed something (as was the case for preempt_enable), but
I spent quite some time looking at the assembly code. I did my best, but
there are lots of people far more skilled than me, and patch submission
is the way to connect with these people.