public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [for-linus][PATCH 0/5] tracing: Fixes for 7.0
@ 2026-02-19 20:49 Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 1/5] ring-buffer: Fix possible dereference of uninitialized pointer Steven Rostedt
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Steven Rostedt @ 2026-02-19 20:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton


Tracing fixes for 7.0:

- Fix possible dereference of uninitialized pointer

  When validating the persistent ring buffer on boot up, if the first
  validation fails, "head_page" is dereferenced in the error path, but
  that path skips over the initialization of the variable. Move the
  initialization before the first validation check.
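
  The hazard is the classic goto-before-init pattern. A minimal,
  self-contained sketch (illustrative names only, not the kernel's
  structures) of why the initialization must sit above the first
  validation so the error label never sees an uninitialized pointer:

```c
#include <assert.h>

struct page { int entries; };

/*
 * Sketch of the rb_meta_validate_events() pattern: if validation of
 * the reader page fails before "head_page" is assigned, the error
 * path must not dereference it. Moving the assignment above the
 * first check (as the patch does) keeps the "invalid" path safe.
 */
static int validate(struct page *reader, struct page *head, int reader_ok)
{
	struct page *head_page;
	int entries = 0;

	/* Initialize before the first validation, not after it */
	head_page = head;

	if (!reader_ok)
		goto invalid;

	entries += reader->entries;
invalid:
	/* Safe: head_page is always initialized by this point */
	entries += head_page->entries;
	return entries;
}
```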

- Fix use of event length in validation of persistent ring buffer

  On boot up, the persistent ring buffer is checked for validity by
  several methods. One of them is to walk all the events in the memory
  region to make sure they are all valid. The length of the event is
  used to move to the next event. This length is determined by the
  data in the buffer. If that length is corrupted, the next event to
  check could be located at an invalid memory address.

  Validate the length field of the event when doing the event walk.
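
  The check is a bounds test on a length that comes from the (possibly
  corrupted) buffer itself. A simplified sketch of that walk, with the
  event layout reduced to a single length byte (hypothetical, not the
  ring buffer's real event format):

```c
#include <assert.h>

/*
 * Walk variable-length events by their recorded length, rejecting a
 * length that is non-positive or runs past "tail" -- the same bound
 * the patch adds in rb_read_data_buffer().
 */
static int count_events(const unsigned char *data, int tail)
{
	int events = 0;
	int len;

	for (int e = 0; e < tail; e += len) {
		len = data[e];	/* length comes from the buffer itself */
		if (len <= 0 || len > tail - e)
			return -1;	/* corrupted length field */
		events++;
	}
	return events;
}
```

  A zero length must be rejected too, or the walk would loop forever
  on the same offset.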

- Fix function graph on archs that do not support use of ftrace_ops

  When an architecture defines HAVE_DYNAMIC_FTRACE_WITH_ARGS, it means
  that its function graph tracer uses the ftrace_ops of the function
  tracer to call its callbacks. This allows a single registered callback
  to be called directly instead of checking the callback's meta data's
  hash entries against the function being traced.

  For architectures that do not support this feature, it must always
  call the loop function that tests each registered callback (even if
  there's only one). The loop function tests each callback's meta data
  against its hash of functions and will call its callback if the
  function being traced is in its hash map.

  The issue was that there was no check for this, and the direct
  function was called even if the architecture didn't support it.
  This meant that if function tracing was enabled at the same time
  as a callback was registered with the function graph tracer, its
  callback would be called for every function that the function tracer
  also traced, even if the callback's meta data only wanted to be
  called back for a small subset of functions.

  Prevent the direct calling for those architectures that do not support
  it.
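
  The difference between the two dispatch modes can be sketched as
  follows (illustrative stand-ins, not the kernel's ftrace_ops or
  hash structures): the loop mode filters by the handler's hash, the
  direct mode calls unconditionally and is only correct when the
  ftrace_ops itself already narrowed which functions are traced.

```c
#include <assert.h>
#include <stdbool.h>

typedef void (*handler_t)(unsigned long ip, int *hits);

static void count_hit(unsigned long ip, int *hits) { (*hits)++; }

static bool hash_contains(const unsigned long *hash, int n, unsigned long ip)
{
	for (int i = 0; i < n; i++)
		if (hash[i] == ip)
			return true;
	return false;
}

/* Loop mode: test the handler's hash before calling back */
static void dispatch_loop(unsigned long ip, const unsigned long *hash, int n,
			  handler_t fn, int *hits)
{
	if (hash_contains(hash, n, ip))
		fn(ip, hits);
}

/* Direct mode: call unconditionally (safe only with ftrace_ops filtering) */
static void dispatch_direct(unsigned long ip, handler_t fn, int *hits)
{
	fn(ip, hits);
}
```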

- Fix references to trace_event_file for hist files

  The hist files used event_file_data() to get a reference to the
  associated trace_event_file the histogram was attached to. This
  would return a pointer even if the trace_event_file is about to
  be freed (via RCU). Instead it should use the event_file_file()
  helper that returns NULL if the trace_event_file is marked to be
  freed so that no new references are added to it.
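
  The difference between the two accessors can be sketched like this
  (simplified stand-ins with a hypothetical flag, not the kernel's
  actual trace_event_file or helpers): one hands out the stored
  pointer unconditionally, the other returns NULL once the freed flag
  is set, so no new references can be taken on a dying object.

```c
#include <assert.h>
#include <stddef.h>

#define FL_FREED	(1 << 0)

struct event_file {
	unsigned int flags;
};

/* Like event_file_data(): blindly returns the pointer */
static struct event_file *file_data(struct event_file *ef)
{
	return ef;
}

/* Like event_file_file(): refuses files already marked freed */
static struct event_file *file_file(struct event_file *ef)
{
	if (!ef || (ef->flags & FL_FREED))
		return NULL;
	return ef;
}
```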

- Wake up hist poll readers when an event is being freed

  When polling on a hist file, the task is only awoken when a hist
  trigger is triggered. This means that if an event is being freed
  while there's a task waiting on its hist file, it will need to wait
  until the hist trigger occurs to wake it up and allow the freeing
  to happen. Note, the event will not be completely freed until all
  references are removed, and a hist poller keeps a reference. But
  it should still be woken when the event is being freed. 

  git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace/fixes

Head SHA1: 9678e53179aa7e907360f5b5b275769008a69b80


Daniil Dulov (1):
      ring-buffer: Fix possible dereference of uninitialized pointer

Masami Hiramatsu (Google) (1):
      tracing: ring-buffer: Fix to check event length before using

Petr Pavlu (2):
      tracing: Fix checking of freed trace_event_file for hist files
      tracing: Wake up poll waiters for hist files when removing an event

Steven Rostedt (1):
      fgraph: Do not call handlers direct when not using ftrace_ops

----
 include/linux/ftrace.h           | 13 ++++++++++---
 include/linux/trace_events.h     |  5 +++++
 kernel/trace/fgraph.c            | 12 +++++++++++-
 kernel/trace/ring_buffer.c       |  9 +++++++--
 kernel/trace/trace_events.c      |  3 +++
 kernel/trace/trace_events_hist.c |  4 ++--
 6 files changed, 38 insertions(+), 8 deletions(-)

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [for-linus][PATCH 1/5] ring-buffer: Fix possible dereference of uninitialized pointer
  2026-02-19 20:49 [for-linus][PATCH 0/5] tracing: Fixes for 7.0 Steven Rostedt
@ 2026-02-19 20:49 ` Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 2/5] tracing: ring-buffer: Fix to check event length before using Steven Rostedt
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2026-02-19 20:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	stable, kernel test robot, Dan Carpenter, Daniil Dulov

From: Daniil Dulov <d.dulov@aladdin.ru>

There is a pointer head_page in rb_meta_validate_events() which is not
initialized at the beginning of the function. This pointer can be
dereferenced if there is a failure during reader page validation. In
that case, control is passed to the "invalid" label, where the pointer
is dereferenced in a loop.

To fix the issue, initialize orig_head and head_page before calling
rb_validate_buffer().

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Cc: stable@vger.kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Link: https://patch.msgid.link/20260213100130.2013839-1-d.dulov@aladdin.ru
Closes: https://lore.kernel.org/r/202406130130.JtTGRf7W-lkp@intel.com/
Fixes: 5f3b6e839f3c ("ring-buffer: Validate boot range memory events")
Signed-off-by: Daniil Dulov <d.dulov@aladdin.ru>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/ring_buffer.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index d33103408955..bdc8010d8f48 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1919,6 +1919,8 @@ static void rb_meta_validate_events(struct ring_buffer_per_cpu *cpu_buffer)
 	if (!meta || !meta->head_buffer)
 		return;
 
+	orig_head = head_page = cpu_buffer->head_page;
+
 	/* Do the reader page first */
 	ret = rb_validate_buffer(cpu_buffer->reader_page->page, cpu_buffer->cpu);
 	if (ret < 0) {
@@ -1929,7 +1931,6 @@ static void rb_meta_validate_events(struct ring_buffer_per_cpu *cpu_buffer)
 	entry_bytes += local_read(&cpu_buffer->reader_page->page->commit);
 	local_set(&cpu_buffer->reader_page->entries, ret);
 
-	orig_head = head_page = cpu_buffer->head_page;
 	ts = head_page->page->time_stamp;
 
 	/*
-- 
2.51.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [for-linus][PATCH 2/5] tracing: ring-buffer: Fix to check event length before using
  2026-02-19 20:49 [for-linus][PATCH 0/5] tracing: Fixes for 7.0 Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 1/5] ring-buffer: Fix possible dereference of uninitialized pointer Steven Rostedt
@ 2026-02-19 20:49 ` Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 3/5] fgraph: Do not call handlers direct when not using ftrace_ops Steven Rostedt
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2026-02-19 20:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	stable

From: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>

Check the event length before using it to advance to the next index in
rb_read_data_buffer(). Since this function is used for validating
possibly broken ring buffers, the length of an event could itself be
broken. In that case, the next event (e + len) can point to a wrong
address. To avoid invalid memory access at boot, check whether the
length of each event is in the possible range before using it.

Cc: stable@vger.kernel.org
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Fixes: 5f3b6e839f3c ("ring-buffer: Validate boot range memory events")
Link: https://patch.msgid.link/177123421541.142205.9414352170164678966.stgit@devnote2
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/ring_buffer.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index bdc8010d8f48..1e7a34a31851 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1849,6 +1849,7 @@ static int rb_read_data_buffer(struct buffer_data_page *dpage, int tail, int cpu
 	struct ring_buffer_event *event;
 	u64 ts, delta;
 	int events = 0;
+	int len;
 	int e;
 
 	*delta_ptr = 0;
@@ -1856,9 +1857,12 @@ static int rb_read_data_buffer(struct buffer_data_page *dpage, int tail, int cpu
 
 	ts = dpage->time_stamp;
 
-	for (e = 0; e < tail; e += rb_event_length(event)) {
+	for (e = 0; e < tail; e += len) {
 
 		event = (struct ring_buffer_event *)(dpage->data + e);
+		len = rb_event_length(event);
+		if (len <= 0 || len > tail - e)
+			return -1;
 
 		switch (event->type_len) {
 
-- 
2.51.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [for-linus][PATCH 3/5] fgraph: Do not call handlers direct when not using ftrace_ops
  2026-02-19 20:49 [for-linus][PATCH 0/5] tracing: Fixes for 7.0 Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 1/5] ring-buffer: Fix possible dereference of uninitialized pointer Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 2/5] tracing: ring-buffer: Fix to check event length before using Steven Rostedt
@ 2026-02-19 20:49 ` Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 4/5] tracing: Fix checking of freed trace_event_file for hist files Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 5/5] tracing: Wake up poll waiters for hist files when removing an event Steven Rostedt
  4 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2026-02-19 20:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	stable

From: Steven Rostedt <rostedt@goodmis.org>

The function graph tracer was modified to use the ftrace_ops of the
function tracer. This simplified the code as well as enabled more
features of the function graph tracer.

Not all architectures were converted over, as the conversion requires
HAVE_DYNAMIC_FTRACE_WITH_ARGS to be implemented. For those
architectures, it still works the old way, where the function graph
tracer handler is called by the function tracer trampoline. The
handler then has to check the hash to see if the registered handlers
want to be called for that function or not.

In order to speed up the function graph tracer that used ftrace_ops, if
only one callback was registered with function graph, it would call its
function directly via a static call.

Now, if the architecture does not support the use of ftrace_ops and
still has the ftrace function trampoline calling the function graph
handler, then doing a direct call removes the check against the
handler's hash (the list of functions it wants callbacks for), and it
may call that handler for functions that the handler did not request
calls for.

On 32bit x86, which does not support the ftrace_ops use with function
graph tracer, it shows the issue:

 ~# trace-cmd start -p function -l schedule
 ~# trace-cmd show
 # tracer: function_graph
 #
 # CPU  DURATION                  FUNCTION CALLS
 # |     |   |                     |   |   |   |
  2) * 11898.94 us |  schedule();
  3) # 1783.041 us |  schedule();
  1)               |  schedule() {
  ------------------------------------------
  1)   bash-8369    =>  kworker-7669
  ------------------------------------------
  1)               |        schedule() {
  ------------------------------------------
  1)  kworker-7669  =>   bash-8369
  ------------------------------------------
  1) + 97.004 us   |  }
  1)               |  schedule() {
 [..]

Now start the function tracer in another instance:

 ~# trace-cmd start -B foo -p function

This causes the function graph tracer to trace all functions (because
the function tracer calls the function graph tracer for each one, and
the function graph tracer is doing a direct call):

 ~# trace-cmd show
 # tracer: function_graph
 #
 # CPU  DURATION                  FUNCTION CALLS
 # |     |   |                     |   |   |   |
  1)   1.669 us    |          } /* preempt_count_sub */
  1) + 10.443 us   |        } /* _raw_spin_unlock_irqrestore */
  1)               |        tick_program_event() {
  1)               |          clockevents_program_event() {
  1)   1.044 us    |            ktime_get();
  1)   6.481 us    |            lapic_next_event();
  1) + 10.114 us   |          }
  1) + 11.790 us   |        }
  1) ! 181.223 us  |      } /* hrtimer_interrupt */
  1) ! 184.624 us  |    } /* __sysvec_apic_timer_interrupt */
  1)               |    irq_exit_rcu() {
  1)   0.678 us    |      preempt_count_sub();

When it should still only be tracing the schedule() function.

To fix this, add a macro FGRAPH_NO_DIRECT that is set to 1 when the
architecture does not support function graph use of ftrace_ops, and
set to 0 otherwise. Then use this macro to know whether to allow the
function graph tracer to call the handlers directly or not.

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://patch.msgid.link/20260218104244.5f14dade@gandalf.local.home
Fixes: cc60ee813b503 ("function_graph: Use static_call and branch to optimize entry function")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/ftrace.h | 13 ++++++++++---
 kernel/trace/fgraph.c  | 12 +++++++++++-
 2 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 1a4d36fc9085..c242fe49af4c 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -1092,10 +1092,17 @@ static inline bool is_ftrace_trampoline(unsigned long addr)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 #ifndef ftrace_graph_func
-#define ftrace_graph_func ftrace_stub
-#define FTRACE_OPS_GRAPH_STUB FTRACE_OPS_FL_STUB
+# define ftrace_graph_func ftrace_stub
+# define FTRACE_OPS_GRAPH_STUB FTRACE_OPS_FL_STUB
+/*
+ * The function graph is called every time the function tracer is called.
+ * It must always test the ops hash and cannot just directly call
+ * the handler.
+ */
+# define FGRAPH_NO_DIRECT	1
 #else
-#define FTRACE_OPS_GRAPH_STUB 0
+# define FTRACE_OPS_GRAPH_STUB	0
+# define FGRAPH_NO_DIRECT	0
 #endif
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 4df766c690f9..40d373d65f9b 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -539,7 +539,11 @@ static struct fgraph_ops fgraph_stub = {
 static struct fgraph_ops *fgraph_direct_gops = &fgraph_stub;
 DEFINE_STATIC_CALL(fgraph_func, ftrace_graph_entry_stub);
 DEFINE_STATIC_CALL(fgraph_retfunc, ftrace_graph_ret_stub);
+#if FGRAPH_NO_DIRECT
+static DEFINE_STATIC_KEY_FALSE(fgraph_do_direct);
+#else
 static DEFINE_STATIC_KEY_TRUE(fgraph_do_direct);
+#endif
 
 /**
  * ftrace_graph_stop - set to permanently disable function graph tracing
@@ -843,7 +847,7 @@ __ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointe
 	bitmap = get_bitmap_bits(current, offset);
 
 #ifdef CONFIG_HAVE_STATIC_CALL
-	if (static_branch_likely(&fgraph_do_direct)) {
+	if (!FGRAPH_NO_DIRECT && static_branch_likely(&fgraph_do_direct)) {
 		if (test_bit(fgraph_direct_gops->idx, &bitmap))
 			static_call(fgraph_retfunc)(&trace, fgraph_direct_gops, fregs);
 	} else
@@ -1285,6 +1289,9 @@ static void ftrace_graph_enable_direct(bool enable_branch, struct fgraph_ops *go
 	trace_func_graph_ret_t retfunc = NULL;
 	int i;
 
+	if (FGRAPH_NO_DIRECT)
+		return;
+
 	if (gops) {
 		func = gops->entryfunc;
 		retfunc = gops->retfunc;
@@ -1308,6 +1315,9 @@ static void ftrace_graph_enable_direct(bool enable_branch, struct fgraph_ops *go
 
 static void ftrace_graph_disable_direct(bool disable_branch)
 {
+	if (FGRAPH_NO_DIRECT)
+		return;
+
 	if (disable_branch)
 		static_branch_disable(&fgraph_do_direct);
 	static_call_update(fgraph_func, ftrace_graph_entry_stub);
-- 
2.51.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [for-linus][PATCH 4/5] tracing: Fix checking of freed trace_event_file for hist files
  2026-02-19 20:49 [for-linus][PATCH 0/5] tracing: Fixes for 7.0 Steven Rostedt
                   ` (2 preceding siblings ...)
  2026-02-19 20:49 ` [for-linus][PATCH 3/5] fgraph: Do not call handlers direct when not using ftrace_ops Steven Rostedt
@ 2026-02-19 20:49 ` Steven Rostedt
  2026-02-19 20:49 ` [for-linus][PATCH 5/5] tracing: Wake up poll waiters for hist files when removing an event Steven Rostedt
  4 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2026-02-19 20:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	stable, Tom Zanussi, Petr Pavlu

From: Petr Pavlu <petr.pavlu@suse.com>

The event_hist_open() and event_hist_poll() functions currently retrieve
a trace_event_file pointer from a file struct by invoking
event_file_data(), which simply returns file->f_inode->i_private. The
functions then check if the pointer is NULL to determine whether the event
is still valid. This approach is flawed because i_private is assigned when
an eventfs inode is allocated and remains set throughout its lifetime.
Instead, the code should call event_file_file(), which checks for
EVENT_FILE_FL_FREED. Using the incorrect access function may result in the
code potentially opening a hist file for an event that is being removed or
becoming stuck while polling on this file.

Correct the access method to event_file_file() in both functions.

Cc: stable@vger.kernel.org
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Tom Zanussi <zanussi@kernel.org>
Link: https://patch.msgid.link/20260219162737.314231-2-petr.pavlu@suse.com
Fixes: 1bd13edbbed6 ("tracing/hist: Add poll(POLLIN) support on hist file")
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_events_hist.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index e6f449f53afc..768df987419e 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5784,7 +5784,7 @@ static __poll_t event_hist_poll(struct file *file, struct poll_table_struct *wai
 
 	guard(mutex)(&event_mutex);
 
-	event_file = event_file_data(file);
+	event_file = event_file_file(file);
 	if (!event_file)
 		return EPOLLERR;
 
@@ -5822,7 +5822,7 @@ static int event_hist_open(struct inode *inode, struct file *file)
 
 	guard(mutex)(&event_mutex);
 
-	event_file = event_file_data(file);
+	event_file = event_file_file(file);
 	if (!event_file) {
 		ret = -ENODEV;
 		goto err;
-- 
2.51.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [for-linus][PATCH 5/5] tracing: Wake up poll waiters for hist files when removing an event
  2026-02-19 20:49 [for-linus][PATCH 0/5] tracing: Fixes for 7.0 Steven Rostedt
                   ` (3 preceding siblings ...)
  2026-02-19 20:49 ` [for-linus][PATCH 4/5] tracing: Fix checking of freed trace_event_file for hist files Steven Rostedt
@ 2026-02-19 20:49 ` Steven Rostedt
  4 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2026-02-19 20:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	stable, Tom Zanussi, Petr Pavlu

From: Petr Pavlu <petr.pavlu@suse.com>

The event_hist_poll() function attempts to verify whether an event file is
being removed, but this check may not occur or could be unnecessarily
delayed. This happens because hist_poll_wakeup() is currently invoked only
from event_hist_trigger() when a hist command is triggered. If the event
file is being removed, no associated hist command will be triggered and a
waiter will be woken up only after an unrelated hist command is triggered.

Fix the issue by adding a call to hist_poll_wakeup() in
remove_event_file_dir() after setting the EVENT_FILE_FL_FREED flag. This
ensures that a task polling on a hist file is woken up and receives
EPOLLERR.

Cc: stable@vger.kernel.org
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Tom Zanussi <zanussi@kernel.org>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Link: https://patch.msgid.link/20260219162737.314231-3-petr.pavlu@suse.com
Fixes: 1bd13edbbed6 ("tracing/hist: Add poll(POLLIN) support on hist file")
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/trace_events.h | 5 +++++
 kernel/trace/trace_events.c  | 3 +++
 2 files changed, 8 insertions(+)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 0a2b8229b999..37eb2f0f3dd8 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -683,6 +683,11 @@ static inline void hist_poll_wakeup(void)
 
 #define hist_poll_wait(file, wait)	\
 	poll_wait(file, &hist_poll_wq, wait)
+
+#else
+static inline void hist_poll_wakeup(void)
+{
+}
 #endif
 
 #define __TRACE_EVENT_FLAGS(name, value)				\
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 61fe01dce7a6..b659653dc03a 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1311,6 +1311,9 @@ static void remove_event_file_dir(struct trace_event_file *file)
 	free_event_filter(file->filter);
 	file->flags |= EVENT_FILE_FL_FREED;
 	event_file_put(file);
+
+	/* Wake up hist poll waiters to notice the EVENT_FILE_FL_FREED flag. */
+	hist_poll_wakeup();
 }
 
 /*
-- 
2.51.0



^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [for-linus][PATCH 0/5] tracing: Fixes for 7.0
@ 2026-03-22  3:09 Steven Rostedt
  0 siblings, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2026-03-22  3:09 UTC (permalink / raw)
  To: linux-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton


Tracing fixes for 7.0:

- Revert "tracing: Remove pid in task_rename tracing output"

  A change was made to remove the pid field from the task_rename event
  because it was thought that the rename was always done for the
  current task, making recording of the pid redundant. This turned out
  to be incorrect: there are a few corner cases where this is not
  true, and the change caused some regressions in tooling.

- Fix the reading from user space for migration

  The reading of user space uses seqlock-like logic: it uses a per-CPU
  temporary buffer, disables migration, then enables preemption, does
  the copy from user space, disables preemption, enables migration,
  and checks if there were any schedule switches while preemption was
  enabled. If there was a context switch, the per-CPU buffer could be
  corrupted and it tries again. As a protection against a live lock,
  if it takes a hundred tries, it issues a warning and exits out.

  This was triggered because the task was selected by the load
  balancer to be migrated to another CPU. Every time preemption was
  enabled, the migration task would schedule in to try to migrate the
  task, but couldn't because migration was disabled, and would let it
  run again. This caused the scheduler to schedule out the task every
  time it enabled preemption and made the loop never exit (until the
  100 iteration test triggered).

  Fix this by enabling and disabling preemption and keeping migration
  enabled on a failure. This lets the migration thread migrate the
  task, and the copy from user space will likely pass on the next
  iteration.
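
  The retry loop with its live-lock protection can be sketched as
  follows. try-counting is illustrative: copy_with_retry() and its
  failure model are hypothetical stand-ins, not the kernel's code.

```c
#include <assert.h>

#define MAX_TRIES	100

/*
 * Attempt the copy, detect whether a context switch invalidated the
 * per-CPU buffer, and give up after a fixed number of tries rather
 * than spinning forever. "failures_before_success" models how many
 * attempts are invalidated before one succeeds; the return value is
 * the number of attempts used, or -1 if the protection tripped.
 */
static int copy_with_retry(int failures_before_success)
{
	for (int tries = 0; tries < MAX_TRIES; tries++) {
		/* stand-in for: enable preemption; copy; disable preemption */
		if (tries >= failures_before_success)
			return tries + 1;
	}
	return -1;	/* live-lock protection tripped */
}
```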

- Fix trace_marker copy option freeing

  The "copy_trace_marker" option allows a tracing instance to get a
  copy of a write to the trace_marker file of the top level instance.
  This is managed by a linked list protected by RCU. When an instance
  is removed, a check is made if the option is set, and if so,
  synchronize_rcu() is called. The problem is that an iteration made
  to reset all the flags to what they were when the instance was
  created (to perform clean ups) was done before the check of the
  copy_trace_marker option, and that option was cleared, so the
  synchronize_rcu() was never called.

  Move the clearing of all the flags to after the check of
  copy_trace_marker, so that the option is still set if it was set
  before, and the synchronize_rcu() is performed.
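
  The ordering bug reduces to check-after-clear versus
  check-before-clear. A minimal sketch with a hypothetical flag word
  (not the kernel's trace option flags):

```c
#include <assert.h>
#include <stdbool.h>

#define OPT_COPY_MARKER	(1 << 0)

/*
 * If the flags are cleared before the copy option is checked, the
 * check never sees it and the RCU synchronization is skipped.
 * Checking first (as the fix does) preserves the decision.
 */
static bool teardown(unsigned int *flags, bool check_first)
{
	bool need_sync = false;

	if (check_first)
		need_sync = *flags & OPT_COPY_MARKER;

	*flags = 0;	/* reset all options to instance defaults */

	if (!check_first)
		need_sync = *flags & OPT_COPY_MARKER;	/* always false now */

	return need_sync;	/* whether synchronize_rcu() would run */
}
```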

- Fix entries setting when validating the persistent ring buffer

  When validating the persistent ring buffer on boot up, the number of
  events per sub-buffer is added to the sub-buffer meta page. The validator
  was updating cpu_buffer->head_page (the first sub-buffer of the per-cpu
  buffer) and not the "head_page" variable that was iterating the
  sub-buffers. This was causing the first sub-buffer to be assigned the
  entries for each sub-buffer and not the sub-buffer that was supposed to be
  updated.
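
  The bug is writing through the fixed first element instead of the
  loop iterator. A compressed sketch with plain arrays standing in
  for the sub-buffer list (illustrative, not the ring buffer code):

```c
#include <assert.h>

/* Wrong target: always updates the first entry, like using
 * cpu_buffer->head_page instead of the iterating head_page. */
static void update_buggy(int *buf, int n, const int *entries)
{
	for (int i = 0; i < n; i++)
		buf[0] = entries[i];
}

/* Fixed: update the sub-buffer actually being iterated */
static void update_fixed(int *buf, int n, const int *entries)
{
	for (int i = 0; i < n; i++)
		buf[i] = entries[i];
}
```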

- Use "hash" value to update the direct callers

  When updating the ftrace direct direct callers it assigned a temporary
  callback to all the callback functions of the ftrace ops and not just the
  functions represented by the passed in hash. This causes an unnecessary
  slow down of the functions of the ftrace_ops that is not being modified.
  Only update the functions that are going to be modified to call the ftrace
  loop function so that the update can be made on those functions.

  git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
trace/fixes

Head SHA1: 50b35c9e50a865600344ab1d8f9a8b3384d7e63d


Jiri Olsa (1):
      ftrace: Use hash argument for tmp_ops in update_ftrace_direct_mod

Masami Hiramatsu (Google) (1):
      ring-buffer: Fix to update per-subbuf entries of persistent ring buffer

Steven Rostedt (2):
      tracing: Fix failure to read user space from system call trace events
      tracing: Fix trace_marker copy link list updates

Xuewen Yan (1):
      tracing: Revert "tracing: Remove pid in task_rename tracing output"

----
 include/trace/events/task.h |  7 +++++--
 kernel/trace/ftrace.c       |  4 ++--
 kernel/trace/ring_buffer.c  |  2 +-
 kernel/trace/trace.c        | 36 +++++++++++++++++++++++++++---------
 4 files changed, 35 insertions(+), 14 deletions(-)

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2026-03-22  3:09 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-19 20:49 [for-linus][PATCH 0/5] tracing: Fixes for 7.0 Steven Rostedt
2026-02-19 20:49 ` [for-linus][PATCH 1/5] ring-buffer: Fix possible dereference of uninitialized pointer Steven Rostedt
2026-02-19 20:49 ` [for-linus][PATCH 2/5] tracing: ring-buffer: Fix to check event length before using Steven Rostedt
2026-02-19 20:49 ` [for-linus][PATCH 3/5] fgraph: Do not call handlers direct when not using ftrace_ops Steven Rostedt
2026-02-19 20:49 ` [for-linus][PATCH 4/5] tracing: Fix checking of freed trace_event_file for hist files Steven Rostedt
2026-02-19 20:49 ` [for-linus][PATCH 5/5] tracing: Wake up poll waiters for hist files when removing an event Steven Rostedt
  -- strict thread matches above, loose matches on Subject: below --
2026-03-22  3:09 [for-linus][PATCH 0/5] tracing: Fixes for 7.0 Steven Rostedt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox