* [for-next][PATCH 00/15] tracing: Updates for 6.20
@ 2026-01-27 15:05 Steven Rostedt
2026-01-27 15:05 ` [for-next][PATCH 01/15] tracing: Properly process error handling in event_hist_trigger_parse() Steven Rostedt
` (14 more replies)
0 siblings, 15 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:05 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace/for-next
Head SHA1: e0c033d8caa7bf1edf99e40f598ad4223d6252af
Aaron Tomlin (3):
tracing: Add bitmask-list option for human-readable bitmask display
tracing: Add show_event_filters to expose active event filters
tracing: Add show_event_triggers to expose active event triggers
Guenter Roeck (1):
ftrace: Introduce and use ENTRIES_PER_PAGE_GROUP macro
Marco Crivellari (1):
tracing: Replace use of system_wq with system_dfl_wq
Miaoqian Lin (1):
tracing: Properly process error handling in event_hist_trigger_parse()
Petr Tesarik (1):
ring-buffer: Use a housekeeping CPU to wake up waiters
Steven Rostedt (8):
tracing: Remove redundant call to event_trigger_reset_filter() in event_hist_trigger_parse()
tracing: Check the return value of tracing_update_buffers()
tracing: Have show_event_trigger/filter format a bit more in columns
tracing: Disable trace_printk buffer on warning too
tracing: Have hist_debug show what function a field uses
tracing: Remove notrace from trace_event_raw_event_synth()
tracing: Up the hist stacktrace size from 16 to 31
tracing: Remove duplicate ENABLE_EVENT_STR and DISABLE_EVENT_STR macros
----
Documentation/trace/ftrace.rst | 25 +++++
include/linux/trace_events.h | 8 +-
include/linux/trace_seq.h | 12 ++-
include/trace/stages/stage3_trace_output.h | 4 +-
kernel/trace/ftrace.c | 7 +-
kernel/trace/ring_buffer.c | 24 ++++-
kernel/trace/trace.c | 16 +++-
kernel/trace/trace.h | 3 +-
kernel/trace/trace_events.c | 145 ++++++++++++++++++++++++++++-
kernel/trace/trace_events_filter.c | 2 +-
kernel/trace/trace_events_hist.c | 81 +++++++++-------
kernel/trace/trace_events_synth.c | 6 +-
kernel/trace/trace_output.c | 30 +++++-
kernel/trace/trace_seq.c | 29 +++++-
14 files changed, 327 insertions(+), 65 deletions(-)
^ permalink raw reply [flat|nested] 16+ messages in thread
* [for-next][PATCH 01/15] tracing: Properly process error handling in event_hist_trigger_parse()
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
@ 2026-01-27 15:05 ` Steven Rostedt
2026-01-27 15:05 ` [for-next][PATCH 02/15] tracing: Remove redundant call to event_trigger_reset_filter() " Steven Rostedt
` (13 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:05 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Tom Zanussi, Miaoqian Lin
From: Miaoqian Lin <linmq006@gmail.com>
Memory allocated with trigger_data_alloc() requires trigger_data_free()
for proper cleanup.
Replace kfree() with trigger_data_free() to fix this.
Found via static analysis and code review.
This isn't a real bug, as the current code is basically an open-coded
version of trigger_data_free() without the synchronization. The
synchronization isn't needed as this is the error path of creation and
there's nothing to synchronize against yet. Replace the kfree() to be
consistent with the allocation.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Tom Zanussi <zanussi@kernel.org>
Link: https://patch.msgid.link/20251211100058.2381268-1-linmq006@gmail.com
Fixes: e1f187d09e11 ("tracing: Have existing event_command.parse() implementations use helpers")
Signed-off-by: Miaoqian Lin <linmq006@gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events_hist.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index c97bb2fda5c0..7e50df8b800b 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -6911,7 +6911,7 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
remove_hist_vars(hist_data);
- kfree(trigger_data);
+ trigger_data_free(trigger_data);
destroy_hist_data(hist_data);
goto out;
--
2.51.0
* [for-next][PATCH 02/15] tracing: Remove redundant call to event_trigger_reset_filter() in event_hist_trigger_parse()
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
2026-01-27 15:05 ` [for-next][PATCH 01/15] tracing: Properly process error handling in event_hist_trigger_parse() Steven Rostedt
@ 2026-01-27 15:05 ` Steven Rostedt
2026-01-27 15:05 ` [for-next][PATCH 03/15] tracing: Add bitmask-list option for human-readable bitmask display Steven Rostedt
` (12 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:05 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Miaoqian Lin
From: Steven Rostedt <rostedt@goodmis.org>
With the change to replace kfree() with trigger_data_free(), which starts
out doing the exact same thing as event_trigger_reset_filter(), there's no
reason to call event_trigger_reset_filter() before calling
trigger_data_free(). Remove the call to it.
Link: https://lore.kernel.org/linux-trace-kernel/20251211204520.0f3ba6d1@fedora/
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Miaoqian Lin <linmq006@gmail.com>
Link: https://patch.msgid.link/20260108174429.2d9ca51f@gandalf.local.home
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events_hist.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 7e50df8b800b..0908a9f7e289 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -6907,8 +6907,6 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
out_unreg:
event_trigger_unregister(cmd_ops, file, glob+1, trigger_data);
out_free:
- event_trigger_reset_filter(cmd_ops, trigger_data);
-
remove_hist_vars(hist_data);
trigger_data_free(trigger_data);
--
2.51.0
* [for-next][PATCH 03/15] tracing: Add bitmask-list option for human-readable bitmask display
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
2026-01-27 15:05 ` [for-next][PATCH 01/15] tracing: Properly process error handling in event_hist_trigger_parse() Steven Rostedt
2026-01-27 15:05 ` [for-next][PATCH 02/15] tracing: Remove redundant call to event_trigger_reset_filter() " Steven Rostedt
@ 2026-01-27 15:05 ` Steven Rostedt
2026-01-27 15:05 ` [for-next][PATCH 04/15] tracing: Replace use of system_wq with system_dfl_wq Steven Rostedt
` (11 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:05 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Aaron Tomlin
From: Aaron Tomlin <atomlin@atomlin.com>
Add support for displaying bitmasks in human-readable list format (e.g.,
0,2-5,7) in addition to the default hexadecimal bitmap representation.
This is particularly useful when tracing CPU masks and other large
bitmasks where individual bit positions are more meaningful than their
hexadecimal encoding.
When the "bitmask-list" option is enabled, the printk "%*pbl" format
specifier is used to render bitmasks as comma-separated ranges, making
trace output easier to interpret for complex CPU configurations and
large bitmask values.
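To make the two representations concrete, here is a small userspace C sketch that renders set bits the way "%*pbl" does, as comma-separated ranges. This is an illustrative reimplementation for a single unsigned long, not the kernel's formatter:

```c
#include <stdio.h>
#include <string.h>

/* Render the set bits of @mask (bits 0..nbits-1) in list form,
 * e.g. 0xBD (bits 0,2,3,4,5,7) becomes "0,2-5,7". */
static void bitmask_to_list(unsigned long mask, int nbits,
			    char *buf, size_t len)
{
	size_t pos = 0;
	int bit = 0;

	buf[0] = '\0';
	while (bit < nbits) {
		if (!(mask & (1UL << bit))) {
			bit++;
			continue;
		}
		int start = bit;

		/* extend the run while consecutive bits are set */
		while (bit + 1 < nbits && (mask & (1UL << (bit + 1))))
			bit++;
		if (start == bit)
			pos += snprintf(buf + pos, len - pos, "%s%d",
					pos ? "," : "", start);
		else
			pos += snprintf(buf + pos, len - pos, "%s%d-%d",
					pos ? "," : "", start, bit);
		bit++;
	}
}
```

With the option disabled, the same mask would instead appear as its hexadecimal bitmap ("bd"), which is harder to read for wide CPU masks.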
Link: https://patch.msgid.link/20251226160724.2246493-2-atomlin@atomlin.com
Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Documentation/trace/ftrace.rst | 9 +++++++
include/linux/trace_events.h | 8 +++---
include/linux/trace_seq.h | 12 ++++++++-
include/trace/stages/stage3_trace_output.h | 4 +--
kernel/trace/trace.h | 1 +
kernel/trace/trace_output.c | 30 +++++++++++++++++++---
kernel/trace/trace_seq.c | 29 ++++++++++++++++++++-
7 files changed, 82 insertions(+), 11 deletions(-)
diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index d1f313a5f4ad..639f4d95732f 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -1290,6 +1290,15 @@ Here are the available options:
This will be useful if you want to find out which hashed
value is corresponding to the real value in trace log.
+ bitmask-list
+ When enabled, bitmasks are displayed as a human-readable list of
+ ranges (e.g., 0,2-5,7) using the printk "%*pbl" format specifier.
+ When disabled (the default), bitmasks are displayed in the
+ traditional hexadecimal bitmap representation. The list format is
+ particularly useful for tracing CPU masks and other large bitmasks
+ where individual bit positions are more meaningful than their
+ hexadecimal encoding.
+
record-cmd
When any event or tracer is enabled, a hook is enabled
in the sched_switch trace point to fill comm cache
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 3690221ba3d8..0a2b8229b999 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -38,7 +38,10 @@ const char *trace_print_symbols_seq_u64(struct trace_seq *p,
*symbol_array);
#endif
-const char *trace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
+struct trace_iterator;
+struct trace_event;
+
+const char *trace_print_bitmask_seq(struct trace_iterator *iter, void *bitmask_ptr,
unsigned int bitmask_size);
const char *trace_print_hex_seq(struct trace_seq *p,
@@ -54,9 +57,6 @@ trace_print_hex_dump_seq(struct trace_seq *p, const char *prefix_str,
int prefix_type, int rowsize, int groupsize,
const void *buf, size_t len, bool ascii);
-struct trace_iterator;
-struct trace_event;
-
int trace_raw_output_prep(struct trace_iterator *iter,
struct trace_event *event);
extern __printf(2, 3)
diff --git a/include/linux/trace_seq.h b/include/linux/trace_seq.h
index 4a0b8c172d27..697d619aafdc 100644
--- a/include/linux/trace_seq.h
+++ b/include/linux/trace_seq.h
@@ -114,7 +114,11 @@ extern void trace_seq_putmem_hex(struct trace_seq *s, const void *mem,
extern int trace_seq_path(struct trace_seq *s, const struct path *path);
extern void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
- int nmaskbits);
+ int nmaskbits);
+
+extern void trace_seq_bitmask_list(struct trace_seq *s,
+ const unsigned long *maskp,
+ int nmaskbits);
extern int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str,
int prefix_type, int rowsize, int groupsize,
@@ -137,6 +141,12 @@ trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
{
}
+static inline void
+trace_seq_bitmask_list(struct trace_seq *s, const unsigned long *maskp,
+ int nmaskbits)
+{
+}
+
static inline int trace_print_seq(struct seq_file *m, struct trace_seq *s)
{
return 0;
diff --git a/include/trace/stages/stage3_trace_output.h b/include/trace/stages/stage3_trace_output.h
index 1e7b0bef95f5..fce85ea2df1c 100644
--- a/include/trace/stages/stage3_trace_output.h
+++ b/include/trace/stages/stage3_trace_output.h
@@ -39,7 +39,7 @@
void *__bitmask = __get_dynamic_array(field); \
unsigned int __bitmask_size; \
__bitmask_size = __get_dynamic_array_len(field); \
- trace_print_bitmask_seq(p, __bitmask, __bitmask_size); \
+ trace_print_bitmask_seq(iter, __bitmask, __bitmask_size); \
})
#undef __get_cpumask
@@ -51,7 +51,7 @@
void *__bitmask = __get_rel_dynamic_array(field); \
unsigned int __bitmask_size; \
__bitmask_size = __get_rel_dynamic_array_len(field); \
- trace_print_bitmask_seq(p, __bitmask, __bitmask_size); \
+ trace_print_bitmask_seq(iter, __bitmask, __bitmask_size); \
})
#undef __get_rel_cpumask
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index b6d42fe06115..8888fc9335b6 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1411,6 +1411,7 @@ extern int trace_get_user(struct trace_parser *parser, const char __user *ubuf,
C(COPY_MARKER, "copy_trace_marker"), \
C(PAUSE_ON_TRACE, "pause-on-trace"), \
C(HASH_PTR, "hash-ptr"), /* Print hashed pointer */ \
+ C(BITMASK_LIST, "bitmask-list"), \
FUNCTION_FLAGS \
FGRAPH_FLAGS \
STACK_FLAGS \
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index cc2d3306bb60..1996d7aba038 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -194,13 +194,37 @@ trace_print_symbols_seq_u64(struct trace_seq *p, unsigned long long val,
EXPORT_SYMBOL(trace_print_symbols_seq_u64);
#endif
+/**
+ * trace_print_bitmask_seq - print a bitmask to a sequence buffer
+ * @iter: The trace iterator for the current event instance
+ * @bitmask_ptr: The pointer to the bitmask data
+ * @bitmask_size: The size of the bitmask in bytes
+ *
+ * Prints a bitmask into a sequence buffer as either a hex string or a
+ * human-readable range list, depending on the instance's "bitmask-list"
+ * trace option. The bitmask is formatted into the iterator's temporary
+ * scratchpad rather than the primary sequence buffer. This avoids
+ * duplication and pointer-collision issues when the returned string is
+ * processed by a "%s" specifier in a TP_printk() macro.
+ *
+ * Returns a pointer to the formatted string within the temporary buffer.
+ */
const char *
-trace_print_bitmask_seq(struct trace_seq *p, void *bitmask_ptr,
+trace_print_bitmask_seq(struct trace_iterator *iter, void *bitmask_ptr,
unsigned int bitmask_size)
{
- const char *ret = trace_seq_buffer_ptr(p);
+ struct trace_seq *p = &iter->tmp_seq;
+ const struct trace_array *tr = iter->tr;
+ const char *ret;
+
+ trace_seq_init(p);
+ ret = trace_seq_buffer_ptr(p);
+
+ if (tr->trace_flags & TRACE_ITER(BITMASK_LIST))
+ trace_seq_bitmask_list(p, bitmask_ptr, bitmask_size * 8);
+ else
+ trace_seq_bitmask(p, bitmask_ptr, bitmask_size * 8);
- trace_seq_bitmask(p, bitmask_ptr, bitmask_size * 8);
trace_seq_putc(p, 0);
return ret;
diff --git a/kernel/trace/trace_seq.c b/kernel/trace/trace_seq.c
index 32684ef4fb9d..85f6f10d107f 100644
--- a/kernel/trace/trace_seq.c
+++ b/kernel/trace/trace_seq.c
@@ -106,7 +106,7 @@ EXPORT_SYMBOL_GPL(trace_seq_printf);
* Writes a ASCII representation of a bitmask string into @s.
*/
void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
- int nmaskbits)
+ int nmaskbits)
{
unsigned int save_len = s->seq.len;
@@ -124,6 +124,33 @@ void trace_seq_bitmask(struct trace_seq *s, const unsigned long *maskp,
}
EXPORT_SYMBOL_GPL(trace_seq_bitmask);
+/**
+ * trace_seq_bitmask_list - write a bitmask array in its list representation
+ * @s: trace sequence descriptor
+ * @maskp: points to an array of unsigned longs that represent a bitmask
+ * @nmaskbits: The number of bits that are valid in @maskp
+ *
+ * Writes a list representation (e.g., 0-3,5-7) of a bitmask string into @s.
+ */
+void trace_seq_bitmask_list(struct trace_seq *s, const unsigned long *maskp,
+ int nmaskbits)
+{
+ unsigned int save_len = s->seq.len;
+
+ if (s->full)
+ return;
+
+ __trace_seq_init(s);
+
+ seq_buf_printf(&s->seq, "%*pbl", nmaskbits, maskp);
+
+ if (unlikely(seq_buf_has_overflowed(&s->seq))) {
+ s->seq.len = save_len;
+ s->full = 1;
+ }
+}
+EXPORT_SYMBOL_GPL(trace_seq_bitmask_list);
+
/**
* trace_seq_vprintf - sequence printing of trace information
* @s: trace sequence descriptor
--
2.51.0
* [for-next][PATCH 04/15] tracing: Replace use of system_wq with system_dfl_wq
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (2 preceding siblings ...)
2026-01-27 15:05 ` [for-next][PATCH 03/15] tracing: Add bitmask-list option for human-readable bitmask display Steven Rostedt
@ 2026-01-27 15:05 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 05/15] tracing: Add show_event_filters to expose active event filters Steven Rostedt
` (10 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:05 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Lai Jiangshan, Frederic Weisbecker, Sebastian Andrzej Siewior,
Michal Hocko, Tejun Heo, Marco Crivellari
From: Marco Crivellari <marco.crivellari@suse.com>
This patch continues the effort to refactor workqueue APIs, which has begun
with the changes introducing new workqueues and a new alloc_workqueue flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
The point of the refactoring is to eventually alter the default behavior of
workqueues to become unbound by default so that their workload placement is
optimized by the scheduler.
Before that can happen, each individual case must be carefully reviewed and
converted, moving workqueue users to the better-named new workqueues with
no intended behaviour changes:
system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq
This specific user has no benefit from being per-CPU, so the new unbound
workqueue (system_dfl_wq) has been used instead of system_percpu_wq.
This way the old obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20251230142820.173712-1-marco.crivellari@suse.com
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events_filter.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 385af8405392..7001e34476ee 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -1375,7 +1375,7 @@ static void free_filter_list_tasks(struct rcu_head *rhp)
struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
INIT_RCU_WORK(&filter_list->rwork, free_filter_list_work);
- queue_rcu_work(system_wq, &filter_list->rwork);
+ queue_rcu_work(system_dfl_wq, &filter_list->rwork);
}
/*
--
2.51.0
* [for-next][PATCH 05/15] tracing: Add show_event_filters to expose active event filters
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (3 preceding siblings ...)
2026-01-27 15:05 ` [for-next][PATCH 04/15] tracing: Replace use of system_wq with system_dfl_wq Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 06/15] tracing: Add show_event_triggers to expose active event triggers Steven Rostedt
` (9 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Aaron Tomlin
From: Aaron Tomlin <atomlin@atomlin.com>
Currently, to audit active Ftrace event filters, userspace must
recursively traverse the events/ directory and read each individual
filter file. This is inefficient for monitoring tools and debugging.
Introduce "show_event_filters" at the trace root directory. This file
displays all events that currently have a filter applied, alongside the
actual filter string, in a consolidated system:event [tab] filter
format.
The implementation reuses the existing trace_event_file iterators to
ensure atomic traversal of the event list and utilises guard(rcu)() for
automatic, scope-based protection when accessing volatile filter
strings.
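The consolidated line format described above lends itself to mechanical consumption by monitoring tools. Below is a hedged userspace C sketch of a parser for one "system:event<TAB>filter" line; the function name and buffer handling are invented for illustration and are not part of the patch:

```c
#include <stdio.h>
#include <string.h>

/* Split one show_event_filters line ("system:event\tfilter") into its
 * three fields.  Returns 0 on success, -1 on a malformed line. */
static int parse_filter_line(const char *line, char *system, char *event,
			     char *filter, size_t len)
{
	const char *colon = strchr(line, ':');
	const char *tab = strchr(line, '\t');

	if (!colon || !tab || tab < colon)
		return -1;
	snprintf(system, len, "%.*s", (int)(colon - line), line);
	snprintf(event, len, "%.*s", (int)(tab - colon - 1), colon + 1);
	snprintf(filter, len, "%s", tab + 1);
	return 0;
}
```

A tool would apply this to each line read from the new file instead of walking every per-event filter file under events/.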
Link: https://patch.msgid.link/20260105142939.2655342-2-atomlin@atomlin.com
Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Documentation/trace/ftrace.rst | 8 +++++
kernel/trace/trace_events.c | 58 ++++++++++++++++++++++++++++++++++
2 files changed, 66 insertions(+)
diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index 639f4d95732f..4ce01e726b09 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -684,6 +684,14 @@ of ftrace. Here is a list of some of the key files:
See events.rst for more information.
+ show_event_filters:
+
+ A list of events that have filters. This shows the
+ system/event pair along with the filter that is attached to
+ the event.
+
+ See events.rst for more information.
+
available_events:
A list of events that can be enabled in tracing.
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 137b4d9bb116..6cbd36508368 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1662,6 +1662,32 @@ static void t_stop(struct seq_file *m, void *p)
mutex_unlock(&event_mutex);
}
+/**
+ * t_show_filters - seq_file callback to display active event filters
+ * @m: The seq_file interface for formatted output
+ * @v: The current trace_event_file being iterated
+ *
+ * Identifies and prints active filters for the current event file in the
+ * iteration. If a filter is applied to the current event and, if so,
+ * prints the system name, event name, and the filter string.
+ */
+static int t_show_filters(struct seq_file *m, void *v)
+{
+ struct trace_event_file *file = v;
+ struct trace_event_call *call = file->event_call;
+ struct event_filter *filter;
+
+ guard(rcu)();
+ filter = rcu_dereference(file->filter);
+ if (!filter || !filter->filter_string)
+ return 0;
+
+ seq_printf(m, "%s:%s\t%s\n", call->class->system,
+ trace_event_name(call), filter->filter_string);
+
+ return 0;
+}
+
#ifdef CONFIG_MODULES
static int s_show(struct seq_file *m, void *v)
{
@@ -2489,6 +2515,7 @@ ftrace_event_npid_write(struct file *filp, const char __user *ubuf,
static int ftrace_event_avail_open(struct inode *inode, struct file *file);
static int ftrace_event_set_open(struct inode *inode, struct file *file);
+static int ftrace_event_show_filters_open(struct inode *inode, struct file *file);
static int ftrace_event_set_pid_open(struct inode *inode, struct file *file);
static int ftrace_event_set_npid_open(struct inode *inode, struct file *file);
static int ftrace_event_release(struct inode *inode, struct file *file);
@@ -2507,6 +2534,13 @@ static const struct seq_operations show_set_event_seq_ops = {
.stop = s_stop,
};
+static const struct seq_operations show_show_event_filters_seq_ops = {
+ .start = t_start,
+ .next = t_next,
+ .show = t_show_filters,
+ .stop = t_stop,
+};
+
static const struct seq_operations show_set_pid_seq_ops = {
.start = p_start,
.next = p_next,
@@ -2536,6 +2570,13 @@ static const struct file_operations ftrace_set_event_fops = {
.release = ftrace_event_release,
};
+static const struct file_operations ftrace_show_event_filters_fops = {
+ .open = ftrace_event_show_filters_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static const struct file_operations ftrace_set_event_pid_fops = {
.open = ftrace_event_set_pid_open,
.read = seq_read,
@@ -2680,6 +2721,20 @@ ftrace_event_set_open(struct inode *inode, struct file *file)
return ret;
}
+/**
+ * ftrace_event_show_filters_open - open interface for show_event_filters
+ * @inode: The inode of the file
+ * @file: The file being opened
+ *
+ * Connects the show_event_filters file to the sequence operations
+ * required to iterate over and display active event filters.
+ */
+static int
+ftrace_event_show_filters_open(struct inode *inode, struct file *file)
+{
+ return ftrace_event_open(inode, file, &show_show_event_filters_seq_ops);
+}
+
static int
ftrace_event_set_pid_open(struct inode *inode, struct file *file)
{
@@ -4400,6 +4455,9 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
if (!entry)
return -ENOMEM;
+ trace_create_file("show_event_filters", TRACE_MODE_READ, parent, tr,
+ &ftrace_show_event_filters_fops);
+
nr_entries = ARRAY_SIZE(events_entries);
e_events = eventfs_create_events_dir("events", parent, events_entries,
--
2.51.0
* [for-next][PATCH 06/15] tracing: Add show_event_triggers to expose active event triggers
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (4 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 05/15] tracing: Add show_event_filters to expose active event filters Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 07/15] tracing: Check the return value of tracing_update_buffers() Steven Rostedt
` (8 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Aaron Tomlin
From: Aaron Tomlin <atomlin@atomlin.com>
To audit active event triggers, userspace currently must traverse the
events/ directory and read each individual trigger file. This is
cumbersome for system-wide auditing or debugging.
Introduce "show_event_triggers" at the trace root directory. This file
displays all events that currently have one or more triggers applied,
alongside the trigger configuration, in a consolidated
system:event [tab] trigger format.
The implementation leverages the existing trace_event_file iterators
and uses the trigger's own print() operation to ensure output
consistency with the per-event trigger files.
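Both new files reuse the same seq_file start/next/show iteration over the event list, with only the show() callback differing. A rough userspace analogy of that pattern, using made-up data and names purely for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for a trace_event_file with an optional attached trigger. */
struct record {
	const char *system;
	const char *event;
	const char *trigger;	/* NULL when nothing is attached */
};

/* The per-file show() step: emit "system:event\ttrigger" or nothing. */
static int show_trigger(char *out, size_t len, const struct record *r)
{
	if (!r->trigger)
		return 0;	/* events without triggers are skipped */
	return snprintf(out, len, "%s:%s\t%s\n",
			r->system, r->event, r->trigger);
}

/* The iterator: walk every record once, concatenating show() output,
 * as the seq_file core does with start/next/show/stop. */
static void dump_all(const struct record *recs, int n,
		     char *out, size_t len)
{
	size_t pos = 0;

	out[0] = '\0';
	for (int i = 0; i < n; i++)
		pos += show_trigger(out + pos, len - pos, &recs[i]);
}
```

In the kernel the walk is protected by event_mutex taken in t_start(), so the list cannot change mid-iteration; the sketch above elides that locking.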
Link: https://patch.msgid.link/20260105142939.2655342-3-atomlin@atomlin.com
Signed-off-by: Aaron Tomlin <atomlin@atomlin.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Documentation/trace/ftrace.rst | 8 +++++
kernel/trace/trace_events.c | 64 ++++++++++++++++++++++++++++++++++
2 files changed, 72 insertions(+)
diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index 4ce01e726b09..b9efb148a5c2 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -692,6 +692,14 @@ of ftrace. Here is a list of some of the key files:
See events.rst for more information.
+ show_event_triggers:
+
+ A list of events that have triggers. This shows the
+ system/event pair along with the trigger that is attached to
+ the event.
+
+ See events.rst for more information.
+
available_events:
A list of events that can be enabled in tracing.
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 6cbd36508368..36936697fa2a 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1688,6 +1688,38 @@ static int t_show_filters(struct seq_file *m, void *v)
return 0;
}
+/**
+ * t_show_triggers - seq_file callback to display active event triggers
+ * @m: The seq_file interface for formatted output
+ * @v: The current trace_event_file being iterated
+ *
+ * Iterates through the trigger list of the current event file and prints
+ * each active trigger's configuration using its associated print
+ * operation.
+ */
+static int t_show_triggers(struct seq_file *m, void *v)
+{
+ struct trace_event_file *file = v;
+ struct trace_event_call *call = file->event_call;
+ struct event_trigger_data *data;
+
+ /*
+ * The event_mutex is held by t_start(), protecting the
+ * file->triggers list traversal.
+ */
+ if (list_empty(&file->triggers))
+ return 0;
+
+ list_for_each_entry_rcu(data, &file->triggers, list) {
+ seq_printf(m, "%s:%s\t", call->class->system,
+ trace_event_name(call));
+
+ data->cmd_ops->print(m, data);
+ }
+
+ return 0;
+}
+
#ifdef CONFIG_MODULES
static int s_show(struct seq_file *m, void *v)
{
@@ -2516,6 +2548,7 @@ ftrace_event_npid_write(struct file *filp, const char __user *ubuf,
static int ftrace_event_avail_open(struct inode *inode, struct file *file);
static int ftrace_event_set_open(struct inode *inode, struct file *file);
static int ftrace_event_show_filters_open(struct inode *inode, struct file *file);
+static int ftrace_event_show_triggers_open(struct inode *inode, struct file *file);
static int ftrace_event_set_pid_open(struct inode *inode, struct file *file);
static int ftrace_event_set_npid_open(struct inode *inode, struct file *file);
static int ftrace_event_release(struct inode *inode, struct file *file);
@@ -2541,6 +2574,13 @@ static const struct seq_operations show_show_event_filters_seq_ops = {
.stop = t_stop,
};
+static const struct seq_operations show_show_event_triggers_seq_ops = {
+ .start = t_start,
+ .next = t_next,
+ .show = t_show_triggers,
+ .stop = t_stop,
+};
+
static const struct seq_operations show_set_pid_seq_ops = {
.start = p_start,
.next = p_next,
@@ -2577,6 +2617,13 @@ static const struct file_operations ftrace_show_event_filters_fops = {
.release = seq_release,
};
+static const struct file_operations ftrace_show_event_triggers_fops = {
+ .open = ftrace_event_show_triggers_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static const struct file_operations ftrace_set_event_pid_fops = {
.open = ftrace_event_set_pid_open,
.read = seq_read,
@@ -2735,6 +2782,20 @@ ftrace_event_show_filters_open(struct inode *inode, struct file *file)
return ftrace_event_open(inode, file, &show_show_event_filters_seq_ops);
}
+/**
+ * ftrace_event_show_triggers_open - open interface for show_event_triggers
+ * @inode: The inode of the file
+ * @file: The file being opened
+ *
+ * Connects the show_event_triggers file to the sequence operations
+ * required to iterate over and display active event triggers.
+ */
+static int
+ftrace_event_show_triggers_open(struct inode *inode, struct file *file)
+{
+ return ftrace_event_open(inode, file, &show_show_event_triggers_seq_ops);
+}
+
static int
ftrace_event_set_pid_open(struct inode *inode, struct file *file)
{
@@ -4458,6 +4519,9 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
trace_create_file("show_event_filters", TRACE_MODE_READ, parent, tr,
&ftrace_show_event_filters_fops);
+ trace_create_file("show_event_triggers", TRACE_MODE_READ, parent, tr,
+ &ftrace_show_event_triggers_fops);
+
nr_entries = ARRAY_SIZE(events_entries);
e_events = eventfs_create_events_dir("events", parent, events_entries,
--
2.51.0
* [for-next][PATCH 07/15] tracing: Check the return value of tracing_update_buffers()
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (5 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 06/15] tracing: Add show_event_triggers to expose active event triggers Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 08/15] ring-buffer: Use a housekeeping CPU to wake up waiters Steven Rostedt
` (7 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Li Zhong
From: Steven Rostedt <rostedt@goodmis.org>
In the very unlikely event that tracing_update_buffers() fails in
trace_printk_init_buffers(), report the failure so that it is known.
Link: https://lore.kernel.org/all/20220917020353.3836285-1-floridsleeves@gmail.com/
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260107161510.4dc98b15@gandalf.local.home
Suggested-by: Li Zhong <floridsleeves@gmail.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8bd4ec08fb36..870205cba31e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3309,9 +3309,10 @@ void trace_printk_init_buffers(void)
pr_warn("**********************************************************\n");
/* Expand the buffers to set size */
- tracing_update_buffers(&global_trace);
-
- buffers_allocated = 1;
+ if (tracing_update_buffers(&global_trace) < 0)
+ pr_err("Failed to expand tracing buffers for trace_printk() calls\n");
+ else
+ buffers_allocated = 1;
/*
* trace_printk_init_buffers() can be called by modules.
--
2.51.0
* [for-next][PATCH 08/15] ring-buffer: Use a housekeeping CPU to wake up waiters
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (6 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 07/15] tracing: Check the return value of tracing_update_buffers() Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 09/15] tracing: Have show_event_trigger/filter format a bit more in columns Steven Rostedt
` (6 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Sebastian Andrzej Siewior, Clark Williams, Frederic Weisbecker,
Petr Tesarik
From: Petr Tesarik <ptesarik@suse.com>
Avoid running the wakeup irq_work on an isolated CPU. Since the wakeup can
run on any CPU, let's pick a housekeeping CPU to do the job.
This change reduces additional noise when tracing isolated CPUs. For
example, the following ipi_send_cpu stack trace was captured with
nohz_full=2 on the isolated CPU:
<idle>-0 [002] d.h4. 1255.379293: ipi_send_cpu: cpu=2 callsite=irq_work_queue+0x2d/0x50 callback=rb_wake_up_waiters+0x0/0x80
<idle>-0 [002] d.h4. 1255.379329: <stack trace>
=> trace_event_raw_event_ipi_send_cpu
=> __irq_work_queue_local
=> irq_work_queue
=> ring_buffer_unlock_commit
=> trace_buffer_unlock_commit_regs
=> trace_event_buffer_commit
=> trace_event_raw_event_x86_irq_vector
=> __sysvec_apic_timer_interrupt
=> sysvec_apic_timer_interrupt
=> asm_sysvec_apic_timer_interrupt
=> pv_native_safe_halt
=> default_idle
=> default_idle_call
=> do_idle
=> cpu_startup_entry
=> start_secondary
=> common_startup_64
The IRQ work interrupt alone adds considerable noise, but the impact can
get even worse with PREEMPT_RT, because the IRQ work interrupt is then
handled by a separate kernel thread. This requires a task switch and makes
tracing useless for analyzing latency on an isolated CPU.
After applying the patch, the trace is similar, but ipi_send_cpu always
targets a non-isolated CPU.
Unfortunately, irq_work_queue_on() is not NMI-safe. When running in NMI
context, fall back to queuing the irq work on the local CPU.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Clark Williams <clrkwllms@kernel.org>
Cc: Frederic Weisbecker <frederic@kernel.org>
Link: https://patch.msgid.link/20260108132132.2473515-1-ptesarik@suse.com
Signed-off-by: Petr Tesarik <ptesarik@suse.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/ring_buffer.c | 24 +++++++++++++++++++++---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 630221b00838..d33103408955 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4,6 +4,7 @@
*
* Copyright (C) 2008 Steven Rostedt <srostedt@redhat.com>
*/
+#include <linux/sched/isolation.h>
#include <linux/trace_recursion.h>
#include <linux/trace_events.h>
#include <linux/ring_buffer.h>
@@ -4013,19 +4014,36 @@ static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer)
rb_end_commit(cpu_buffer);
}
+static bool
+rb_irq_work_queue(struct rb_irq_work *irq_work)
+{
+ int cpu;
+
+ /* irq_work_queue_on() is not NMI-safe */
+ if (unlikely(in_nmi()))
+ return irq_work_queue(&irq_work->work);
+
+ /*
+ * If CPU isolation is not active, cpu is always the current
+ * CPU, and the following is equivalent to irq_work_queue().
+ */
+ cpu = housekeeping_any_cpu(HK_TYPE_KERNEL_NOISE);
+ return irq_work_queue_on(&irq_work->work, cpu);
+}
+
static __always_inline void
rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
{
if (buffer->irq_work.waiters_pending) {
buffer->irq_work.waiters_pending = false;
/* irq_work_queue() supplies it's own memory barriers */
- irq_work_queue(&buffer->irq_work.work);
+ rb_irq_work_queue(&buffer->irq_work);
}
if (cpu_buffer->irq_work.waiters_pending) {
cpu_buffer->irq_work.waiters_pending = false;
/* irq_work_queue() supplies it's own memory barriers */
- irq_work_queue(&cpu_buffer->irq_work.work);
+ rb_irq_work_queue(&cpu_buffer->irq_work);
}
if (cpu_buffer->last_pages_touch == local_read(&cpu_buffer->pages_touched))
@@ -4045,7 +4063,7 @@ rb_wakeups(struct trace_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
cpu_buffer->irq_work.wakeup_full = true;
cpu_buffer->irq_work.full_waiters_pending = false;
/* irq_work_queue() supplies it's own memory barriers */
- irq_work_queue(&cpu_buffer->irq_work.work);
+ rb_irq_work_queue(&cpu_buffer->irq_work);
}
#ifdef CONFIG_RING_BUFFER_RECORD_RECURSION
--
2.51.0
* [for-next][PATCH 09/15] tracing: Have show_event_trigger/filter format a bit more in columns
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (7 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 08/15] ring-buffer: Use a housekeeping CPU to wake up waiters Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 10/15] ftrace: Introduce and use ENTRIES_PER_PAGE_GROUP macro Steven Rostedt
` (5 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Aaron Tomlin
From: Steven Rostedt <rostedt@goodmis.org>
By doing:
# trace-cmd sqlhist -e -n futex_wait select TIMESTAMP_DELTA_USECS as lat from sys_enter_futex as start join sys_exit_futex as end on start.common_pid = end.common_pid
and
# trace-cmd start -e futex_wait -f 'lat > 100' -e page_pool_state_release -f 'pfn == 1'
The output of the show_event_triggers and show_event_filters files is not
well aligned because of the inconsistent 'tab' spacing:
~# cat /sys/kernel/tracing/show_event_triggers
syscalls:sys_exit_futex hist:keys=common_pid:vals=hitcount:__lat_12046_2=common_timestamp.usecs-$__arg_12046_1:sort=hitcount:size=2048:clock=global:onmatch(syscalls.sys_enter_futex).trace(futex_wait,$__lat_12046_2) [active]
syscalls:sys_enter_futex hist:keys=common_pid:vals=hitcount:__arg_12046_1=common_timestamp.usecs:sort=hitcount:size=2048:clock=global [active]
~# cat /sys/kernel/tracing/show_event_filters
synthetic:futex_wait (lat > 100)
page_pool:page_pool_state_release (pfn == 1)
This is hard to read. Instead, force the data to start at least 32 bytes
from the beginning of the line (or one space after the system:event name
when it is longer than 30 bytes):
~# cat /sys/kernel/tracing/show_event_triggers
syscalls:sys_exit_futex hist:keys=common_pid:vals=hitcount:__lat_8125_2=common_timestamp.usecs-$__arg_8125_1:sort=hitcount:size=2048:clock=global:onmatch(syscalls.sys_enter_futex).trace(futex_wait,$__lat_8125_2) [active]
syscalls:sys_enter_futex hist:keys=common_pid:vals=hitcount:__arg_8125_1=common_timestamp.usecs:sort=hitcount:size=2048:clock=global [active]
~# cat /sys/kernel/tracing/show_event_filters
synthetic:futex_wait (lat > 100)
page_pool:page_pool_state_release (pfn == 1)
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260112153408.18373e73@gandalf.local.home
Reviewed-by: Aaron Tomlin <atomlin@atomlin.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events.c | 26 ++++++++++++++++++++++----
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 36936697fa2a..f372a6374164 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1662,6 +1662,18 @@ static void t_stop(struct seq_file *m, void *p)
mutex_unlock(&event_mutex);
}
+static int get_call_len(struct trace_event_call *call)
+{
+ int len;
+
+ /* Get the length of "<system>:<event>" */
+ len = strlen(call->class->system) + 1;
+ len += strlen(trace_event_name(call));
+
+ /* Set the index to 32 bytes to separate event from data */
+ return len >= 32 ? 1 : 32 - len;
+}
+
/**
* t_show_filters - seq_file callback to display active event filters
* @m: The seq_file interface for formatted output
@@ -1676,14 +1688,17 @@ static int t_show_filters(struct seq_file *m, void *v)
struct trace_event_file *file = v;
struct trace_event_call *call = file->event_call;
struct event_filter *filter;
+ int len;
guard(rcu)();
filter = rcu_dereference(file->filter);
if (!filter || !filter->filter_string)
return 0;
- seq_printf(m, "%s:%s\t%s\n", call->class->system,
- trace_event_name(call), filter->filter_string);
+ len = get_call_len(call);
+
+ seq_printf(m, "%s:%s%*.s%s\n", call->class->system,
+ trace_event_name(call), len, "", filter->filter_string);
return 0;
}
@@ -1702,6 +1717,7 @@ static int t_show_triggers(struct seq_file *m, void *v)
struct trace_event_file *file = v;
struct trace_event_call *call = file->event_call;
struct event_trigger_data *data;
+ int len;
/*
* The event_mutex is held by t_start(), protecting the
@@ -1710,9 +1726,11 @@ static int t_show_triggers(struct seq_file *m, void *v)
if (list_empty(&file->triggers))
return 0;
+ len = get_call_len(call);
+
list_for_each_entry_rcu(data, &file->triggers, list) {
- seq_printf(m, "%s:%s\t", call->class->system,
- trace_event_name(call));
+ seq_printf(m, "%s:%s%*.s", call->class->system,
+ trace_event_name(call), len, "");
data->cmd_ops->print(m, data);
}
--
2.51.0
* [for-next][PATCH 10/15] ftrace: Introduce and use ENTRIES_PER_PAGE_GROUP macro
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (8 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 09/15] tracing: Have show_event_trigger/filter format a bit more in columns Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 11/15] tracing: Disable trace_printk buffer on warning too Steven Rostedt
` (4 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Guenter Roeck
From: Guenter Roeck <linux@roeck-us.net>
ENTRIES_PER_PAGE_GROUP() returns the number of dyn_ftrace entries in a page
group, identified by its order.
No functional change.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260113152243.3557219-2-linux@roeck-us.net
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/ftrace.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index aa758efc3731..df4ce244202e 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1148,6 +1148,7 @@ struct ftrace_page {
};
#define ENTRY_SIZE sizeof(struct dyn_ftrace)
+#define ENTRIES_PER_PAGE_GROUP(order) ((PAGE_SIZE << (order)) / ENTRY_SIZE)
static struct ftrace_page *ftrace_pages_start;
static struct ftrace_page *ftrace_pages;
@@ -3862,7 +3863,7 @@ static int ftrace_allocate_records(struct ftrace_page *pg, int count,
*num_pages += 1 << order;
ftrace_number_of_groups++;
- cnt = (PAGE_SIZE << order) / ENTRY_SIZE;
+ cnt = ENTRIES_PER_PAGE_GROUP(order);
pg->order = order;
if (cnt > count)
@@ -7309,7 +7310,7 @@ static int ftrace_process_locs(struct module *mod,
long skip;
/* Count the number of entries unused and compare it to skipped. */
- pg_remaining = (PAGE_SIZE << pg->order) / ENTRY_SIZE - pg->index;
+ pg_remaining = ENTRIES_PER_PAGE_GROUP(pg->order) - pg->index;
if (!WARN(skipped < pg_remaining, "Extra allocated pages for ftrace")) {
@@ -7317,7 +7318,7 @@ static int ftrace_process_locs(struct module *mod,
for (pg = pg_unuse; pg && skip > 0; pg = pg->next) {
remaining += 1 << pg->order;
- skip -= (PAGE_SIZE << pg->order) / ENTRY_SIZE;
+ skip -= ENTRIES_PER_PAGE_GROUP(pg->order);
}
pages -= remaining;
--
2.51.0
* [for-next][PATCH 11/15] tracing: Disable trace_printk buffer on warning too
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (9 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 10/15] ftrace: Introduce and use ENTRIES_PER_PAGE_GROUP macro Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 12/15] tracing: Have hist_debug show what function a field uses Steven Rostedt
` (3 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton
From: Steven Rostedt <rostedt@goodmis.org>
When /proc/sys/kernel/traceoff_on_warning is set to 1, the top level
tracing buffer is disabled when a warning happens. This is very useful
when debugging, as it stops the tracing buffer from taking in new data
when a warning triggers, keeping the events that led up to the warning
from being overwritten.
Now that there is also a persistent ring buffer and an option to have
trace_printk go to that buffer, the same holds true for that buffer. A
warning could happen just before a crash, yet enough events could still be
written afterward to overwrite the events that led up to the first warning
that caused the crash.
When /proc/sys/kernel/traceoff_on_warning is set to 1 and a warning is
triggered, not only disable the top level tracing buffer, but also disable
the buffer that trace_printk()s are written to.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://patch.msgid.link/20260121093858.5c5d7e7b@gandalf.local.home
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 870205cba31e..396d59202438 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1666,9 +1666,18 @@ EXPORT_SYMBOL_GPL(tracing_off);
void disable_trace_on_warning(void)
{
if (__disable_trace_on_warning) {
+ struct trace_array *tr = READ_ONCE(printk_trace);
+
trace_array_printk_buf(global_trace.array_buffer.buffer, _THIS_IP_,
"Disabling tracing due to warning\n");
tracing_off();
+
+ /* Disable trace_printk() buffer too */
+ if (tr != &global_trace) {
+ trace_array_printk_buf(tr->array_buffer.buffer, _THIS_IP_,
+ "Disabling tracing due to warning\n");
+ tracer_tracing_off(tr);
+ }
}
}
--
2.51.0
* [for-next][PATCH 12/15] tracing: Have hist_debug show what function a field uses
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (10 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 11/15] tracing: Disable trace_printk buffer on warning too Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 13/15] tracing: Remove notrace from trace_event_raw_event_synth() Steven Rostedt
` (2 subsequent siblings)
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Tom Zanussi
From: Steven Rostedt <rostedt@goodmis.org>
When CONFIG_HIST_TRIGGERS_DEBUG is enabled, each trace event has a
"hist_debug" file that explains the histogram internal data. This is very
useful for debugging histograms.
One bit of data that was missing from this file was what function a
histogram field uses to process its data. The hist_field structure now has
a fn_num that is used by a switch statement in hist_fn_call() to call a
function directly (to avoid spectre mitigations).
Instead of displaying that number, create a string array that maps to the
histogram function enums so that the function for a field may be
displayed:
~# cat /sys/kernel/tracing/events/sched/sched_switch/hist_debug
[..]
hist_data: 0000000043d62762
n_vals: 2
n_keys: 1
n_fields: 3
val fields:
hist_data->fields[0]:
flags:
VAL: HIST_FIELD_FL_HITCOUNT
type: u64
size: 8
is_signed: 0
function: hist_field_counter()
hist_data->fields[1]:
flags:
HIST_FIELD_FL_VAR
var.name: __arg_3921_2
var.idx (into tracing_map_elt.vars[]): 0
type: unsigned long[]
size: 128
is_signed: 0
function: hist_field_nop()
key fields:
hist_data->fields[2]:
flags:
HIST_FIELD_FL_KEY
ftrace_event_field name: prev_pid
type: pid_t
size: 8
is_signed: 1
function: hist_field_s32()
The "function:" field above is added.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260122203822.58df4d80@gandalf.local.home
Reviewed-by: Tom Zanussi <zanussi@kernel.org>
Tested-by: Tom Zanussi <zanussi@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events_hist.c | 75 +++++++++++++++++++-------------
1 file changed, 44 insertions(+), 31 deletions(-)
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 0908a9f7e289..e245446a8cf7 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -105,38 +105,44 @@ enum field_op_id {
FIELD_OP_MULT,
};
+#define FIELD_FUNCS \
+ C(NOP, "nop"), \
+ C(VAR_REF, "var_ref"), \
+ C(COUNTER, "counter"), \
+ C(CONST, "const"), \
+ C(LOG2, "log2"), \
+ C(BUCKET, "bucket"), \
+ C(TIMESTAMP, "timestamp"), \
+ C(CPU, "cpu"), \
+ C(COMM, "comm"), \
+ C(STRING, "string"), \
+ C(DYNSTRING, "dynstring"), \
+ C(RELDYNSTRING, "reldynstring"), \
+ C(PSTRING, "pstring"), \
+ C(S64, "s64"), \
+ C(U64, "u64"), \
+ C(S32, "s32"), \
+ C(U32, "u32"), \
+ C(S16, "s16"), \
+ C(U16, "u16"), \
+ C(S8, "s8"), \
+ C(U8, "u8"), \
+ C(UMINUS, "uminus"), \
+ C(MINUS, "minus"), \
+ C(PLUS, "plus"), \
+ C(DIV, "div"), \
+ C(MULT, "mult"), \
+ C(DIV_POWER2, "div_power2"), \
+ C(DIV_NOT_POWER2, "div_not_power2"), \
+ C(DIV_MULT_SHIFT, "div_mult_shift"), \
+ C(EXECNAME, "execname"), \
+ C(STACK, "stack"),
+
+#undef C
+#define C(a, b) HIST_FIELD_FN_##a
+
enum hist_field_fn {
- HIST_FIELD_FN_NOP,
- HIST_FIELD_FN_VAR_REF,
- HIST_FIELD_FN_COUNTER,
- HIST_FIELD_FN_CONST,
- HIST_FIELD_FN_LOG2,
- HIST_FIELD_FN_BUCKET,
- HIST_FIELD_FN_TIMESTAMP,
- HIST_FIELD_FN_CPU,
- HIST_FIELD_FN_COMM,
- HIST_FIELD_FN_STRING,
- HIST_FIELD_FN_DYNSTRING,
- HIST_FIELD_FN_RELDYNSTRING,
- HIST_FIELD_FN_PSTRING,
- HIST_FIELD_FN_S64,
- HIST_FIELD_FN_U64,
- HIST_FIELD_FN_S32,
- HIST_FIELD_FN_U32,
- HIST_FIELD_FN_S16,
- HIST_FIELD_FN_U16,
- HIST_FIELD_FN_S8,
- HIST_FIELD_FN_U8,
- HIST_FIELD_FN_UMINUS,
- HIST_FIELD_FN_MINUS,
- HIST_FIELD_FN_PLUS,
- HIST_FIELD_FN_DIV,
- HIST_FIELD_FN_MULT,
- HIST_FIELD_FN_DIV_POWER2,
- HIST_FIELD_FN_DIV_NOT_POWER2,
- HIST_FIELD_FN_DIV_MULT_SHIFT,
- HIST_FIELD_FN_EXECNAME,
- HIST_FIELD_FN_STACK,
+ FIELD_FUNCS
};
/*
@@ -5854,6 +5860,12 @@ const struct file_operations event_hist_fops = {
};
#ifdef CONFIG_HIST_TRIGGERS_DEBUG
+
+#undef C
+#define C(a, b) b
+
+static const char * const field_funcs[] = { FIELD_FUNCS };
+
static void hist_field_debug_show_flags(struct seq_file *m,
unsigned long flags)
{
@@ -5918,6 +5930,7 @@ static int hist_field_debug_show(struct seq_file *m,
seq_printf(m, " type: %s\n", field->type);
seq_printf(m, " size: %u\n", field->size);
seq_printf(m, " is_signed: %u\n", field->is_signed);
+ seq_printf(m, " function: hist_field_%s()\n", field_funcs[field->fn_num]);
return 0;
}
--
2.51.0
* [for-next][PATCH 13/15] tracing: Remove notrace from trace_event_raw_event_synth()
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (11 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 12/15] tracing: Have hist_debug show what function a field uses Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 14/15] tracing: Up the hist stacktrace size from 16 to 31 Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 15/15] tracing: Remove duplicate ENABLE_EVENT_STR and DISABLE_EVENT_STR macros Steven Rostedt
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Tom Zanussi
From: Steven Rostedt <rostedt@goodmis.org>
When debugging synthetic events, being able to function trace their
functions is very useful (now that CONFIG_FUNCTION_SELF_TRACING is
available). For some reason trace_event_raw_event_synth() was marked as
"notrace", which was totally unnecessary, as the entire tracing directory
had function tracing disabled until the recent FUNCTION_SELF_TRACING was
added.
Remove the notrace annotation from trace_event_raw_event_synth() as
there's no reason to not trace it when tracing synthetic event functions.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260122204526.068a98c9@gandalf.local.home
Acked-by: Tom Zanussi <zanussi@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events_synth.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
index 45c187e77e21..ce42fbf16f4a 100644
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -499,9 +499,9 @@ static unsigned int trace_stack(struct synth_trace_event *entry,
return len;
}
-static notrace void trace_event_raw_event_synth(void *__data,
- u64 *var_ref_vals,
- unsigned int *var_ref_idx)
+static void trace_event_raw_event_synth(void *__data,
+ u64 *var_ref_vals,
+ unsigned int *var_ref_idx)
{
unsigned int i, n_u64, val_idx, len, data_size = 0;
struct trace_event_file *trace_file = __data;
--
2.51.0
* [for-next][PATCH 14/15] tracing: Up the hist stacktrace size from 16 to 31
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (12 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 13/15] tracing: Remove notrace from trace_event_raw_event_synth() Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
2026-01-27 15:06 ` [for-next][PATCH 15/15] tracing: Remove duplicate ENABLE_EVENT_STR and DISABLE_EVENT_STR macros Steven Rostedt
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Tom Zanussi
From: Steven Rostedt <rostedt@goodmis.org>
Recording stacktraces is very useful, but a maximum depth of 16 is very
restrictive. For example, to see where tasks schedule out in a non-running
state, the following can be used:
~# cd /sys/kernel/tracing
~# echo 'hist:keys=common_stacktrace:vals=hitcount if prev_state & 3' > events/sched/sched_switch/trigger
~# cat events/sched/sched_switch/hist
[..]
{ common_stacktrace:
__schedule+0xdc0/0x1860
schedule+0x27/0xd0
schedule_timeout+0xb5/0x100
wait_for_completion+0x8a/0x140
xfs_buf_iowait+0x20/0xd0 [xfs]
xfs_buf_read_map+0x103/0x250 [xfs]
xfs_trans_read_buf_map+0x161/0x310 [xfs]
xfs_btree_read_buf_block+0xa0/0x120 [xfs]
xfs_btree_lookup_get_block+0xa3/0x1e0 [xfs]
xfs_btree_lookup+0xea/0x530 [xfs]
xfs_alloc_fixup_trees+0x72/0x570 [xfs]
xfs_alloc_ag_vextent_size+0x67f/0x800 [xfs]
xfs_alloc_vextent_iterate_ags.constprop.0+0x52/0x230 [xfs]
xfs_alloc_vextent_start_ag+0x9d/0x1b0 [xfs]
xfs_bmap_btalloc+0x2af/0x680 [xfs]
xfs_bmapi_allocate+0xdb/0x2c0 [xfs]
} hitcount: 1
[..]
The above stops at 16 functions, where knowing more would be useful. As
stacktraces share the same allocated storage as strings, and that size is
256 bytes, a lot of space goes unused for stacktraces: a 16-entry stack
needs only 16 * 8 = 128 bytes.
Up the size to 31 (it requires the last slot to be zero, so it can't be 32).
Also change the BUILD_BUG_ON() to allow the size of the stacktrace storage
to be equal to the max size. One slot is used to hold the number of
elements in the stack.
BUILD_BUG_ON((HIST_STACKTRACE_DEPTH + 1) * sizeof(long) >= STR_VAR_LEN_MAX);
Change that from ">=" to just ">", as now they are equal.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260123105415.2be26bf4@gandalf.local.home
Reviewed-by: Tom Zanussi <zanussi@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace.h | 2 +-
kernel/trace/trace_events_hist.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 8888fc9335b6..69e7defba6c6 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -128,7 +128,7 @@ enum trace_type {
#define FAULT_STRING "(fault)"
-#define HIST_STACKTRACE_DEPTH 16
+#define HIST_STACKTRACE_DEPTH 31
#define HIST_STACKTRACE_SIZE (HIST_STACKTRACE_DEPTH * sizeof(unsigned long))
#define HIST_STACKTRACE_SKIP 5
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index e245446a8cf7..0fc641461be5 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -3163,7 +3163,7 @@ static inline void __update_field_vars(struct tracing_map_elt *elt,
u64 var_val;
/* Make sure stacktrace can fit in the string variable length */
- BUILD_BUG_ON((HIST_STACKTRACE_DEPTH + 1) * sizeof(long) >= STR_VAR_LEN_MAX);
+ BUILD_BUG_ON((HIST_STACKTRACE_DEPTH + 1) * sizeof(long) > STR_VAR_LEN_MAX);
for (i = 0, j = field_var_str_start; i < n_field_vars; i++) {
struct field_var *field_var = field_vars[i];
--
2.51.0
* [for-next][PATCH 15/15] tracing: Remove duplicate ENABLE_EVENT_STR and DISABLE_EVENT_STR macros
2026-01-27 15:05 [for-next][PATCH 00/15] tracing: Updates for 6.20 Steven Rostedt
` (13 preceding siblings ...)
2026-01-27 15:06 ` [for-next][PATCH 14/15] tracing: Up the hist stacktrace size from 16 to 31 Steven Rostedt
@ 2026-01-27 15:06 ` Steven Rostedt
14 siblings, 0 replies; 16+ messages in thread
From: Steven Rostedt @ 2026-01-27 15:06 UTC (permalink / raw)
To: linux-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Tom Zanussi
From: Steven Rostedt <rostedt@goodmis.org>
The macros ENABLE_EVENT_STR and DISABLE_EVENT_STR were added to trace.h so
that more than one file could have access to them, but they were never
removed from their original location. Remove the duplicates.
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Tom Zanussi <zanussi@kernel.org>
Link: https://patch.msgid.link/20260126130037.4ba201f9@gandalf.local.home
Fixes: d0bad49bb0a09 ("tracing: Add enable_hist/disable_hist triggers")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index f372a6374164..4972e1a2b5f3 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -4097,11 +4097,6 @@ void trace_put_event_file(struct trace_event_file *file)
EXPORT_SYMBOL_GPL(trace_put_event_file);
#ifdef CONFIG_DYNAMIC_FTRACE
-
-/* Avoid typos */
-#define ENABLE_EVENT_STR "enable_event"
-#define DISABLE_EVENT_STR "disable_event"
-
struct event_probe_data {
struct trace_event_file *file;
unsigned long count;
--
2.51.0