public inbox for linux-trace-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v9 0/3] tracing: Remove backup instance after read all
@ 2026-03-31 16:32 Masami Hiramatsu (Google)
  2026-03-31 16:32 ` [PATCH v9 1/3] tracing: Make the backup instance non-reusable Masami Hiramatsu (Google)
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2026-03-31 16:32 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Masami Hiramatsu, Mathieu Desnoyers, linux-kernel,
	linux-trace-kernel

Hi,

Here is the v9 of the series to improve backup instances of
the persistent ring buffer. The previous version is here:

https://lore.kernel.org/all/177071300558.2293046.12057922262682243630.stgit@mhiramat.tok.corp.google.com/

In this version, I removed the bug fixes (those are actual fixes/minor
updates) and the forced permission check in tracefs, because the
superuser can modify the permissions by itself. Instead, [1/3] simply
adds a read-only and FMODE_WRITE check to the related open() file
operations. Also, I fixed 2 bugs in the autoremove patch so that the
dedicated workqueue is initialized correctly [2/3].

Series Description
------------------
Since backup instances are a kind of snapshot of the persistent
ring buffer, they should be read-only. And if an instance is read-only,
there is no reason to keep it after all of its data has been consumed
via trace_pipe. But the user should still be able to remove the
read-only instance by rmdir or by truncating the `trace` file.

Thus, [1/3] makes backup instances read-only (unable to write any
events, clear the trace, or change the buffer size), and [2/3] removes
the backup instance after all data has been consumed via trace_pipe.
With these improvements, even if we make a backup instance (which uses
the same amount of memory as the persistent ring buffer), it will be
removed automatically after its data is read.
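As a rough illustration of the [1/3] behavior, the open()-time gate can be
modeled in a small userspace sketch. FMODE_WRITE is redefined locally and the
function name is made up for illustration; this is not the kernel's code:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Userspace sketch of the check added to the tracing open() callbacks:
 * a write open on a read-only (backup) instance fails with -EACCES,
 * while read opens are still allowed. FMODE_WRITE is a local stand-in
 * for the kernel's fmode_t flag.
 */
#define FMODE_WRITE 0x2u

static int tracing_open_check(unsigned int f_mode, bool readonly)
{
	if ((f_mode & FMODE_WRITE) && readonly)
		return -EACCES;	/* reject write opens on backup instances */
	return 0;		/* read-only opens always succeed */
}
```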

Thanks,

---

Masami Hiramatsu (Google) (3):
      tracing: Make the backup instance non-reusable
      tracing: Remove the backup instance automatically after read
      tracing/Documentation: Add a section about backup instance


 Documentation/trace/debugging.rst |   19 ++++
 kernel/trace/trace.c              |  166 +++++++++++++++++++++++++++++--------
 kernel/trace/trace.h              |   13 +++
 kernel/trace/trace_boot.c         |    5 +
 kernel/trace/trace_events.c       |   76 ++++++++++-------
 5 files changed, 208 insertions(+), 71 deletions(-)

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v9 1/3] tracing: Make the backup instance non-reusable
  2026-03-31 16:32 [PATCH v9 0/3] tracing: Remove backup instance after read all Masami Hiramatsu (Google)
@ 2026-03-31 16:32 ` Masami Hiramatsu (Google)
  2026-03-31 16:32 ` [PATCH v9 2/3] tracing: Remove the backup instance automatically after read Masami Hiramatsu (Google)
  2026-03-31 16:32 ` [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance Masami Hiramatsu (Google)
  2 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2026-03-31 16:32 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Masami Hiramatsu, Mathieu Desnoyers, linux-kernel,
	linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Since there is no reason to reuse the backup instance, make it
read-only (but erasable).
Note that only backup instances are made read-only: any other trace
instance would stay empty unless it is writable, while only backup
instances have entries copied from the original buffer.

With this change, most of the trace control files are removed
from the backup instance, including the eventfs enable/filter files etc.

 # find /sys/kernel/tracing/instances/backup/events/ | wc -l
 4093
 # find /sys/kernel/tracing/instances/boot_map/events/ | wc -l
 9573
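The reduction relies on ordering the eventfs entry tables so that the
read-only files come first, letting a read-only instance register only the
head of the array. A userspace sketch of that counting trick (names and the
HAS_PERF macro are illustrative stand-ins, not the kernel's definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for IS_ENABLED(CONFIG_PERF_EVENTS). */
#define HAS_PERF 1

struct entry { const char *name; };

/* Read-only entries first; NR_RO_EVENT_ENTRIES counts exactly those. */
static const struct entry event_entries[] = {
	{ "format" },
#if HAS_PERF
	{ "id" },
#endif
#define NR_RO_EVENT_ENTRIES (1 + HAS_PERF)
	/* writable files below this line */
	{ "enable" },
	{ "filter" },
	{ "trigger" },
};

static size_t nr_entries(bool readonly)
{
	return readonly ? NR_RO_EVENT_ENTRIES
			: sizeof(event_entries) / sizeof(event_entries[0]);
}
```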

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 Changes in v9:
  - Add a forced read-only check in the open() operations.
 Changes in v8:
   - Remove read-only checks in read() operation.
 Changes in v7:
   - Return -EACCES instead of -EPERM.
 Changes in v6:
   - Remove tracing_on file from readonly instances.
   - Remove unused writable_mode from tracing_init_tracefs_percpu().
   - Cleanup init_tracer_tracefs() and create_event_toplevel_files().
   - Remove TRACE_MODE_WRITE_MASK.
   - Add TRACE_ARRAY_FL_RDONLY.
 Changes in v5:
  - Rebased on the latest for-next (and hide show_event_filters/triggers
    if the instance is readonly).
 Changes in v4:
  - Make trace data erasable. (not reusable)
 Changes in v3:
 - Reuse the beginning part of event_entries for readonly files.
  - Remove readonly file_operations and checking readonly flag in
    each write operation.
 Changes in v2:
  - Use readonly file_operations to prohibit writing instead of
    checking flags in write() callbacks.
  - Remove writable files from eventfs.
---
 kernel/trace/trace.c        |   81 +++++++++++++++++++++++++++----------------
 kernel/trace/trace.h        |    7 ++++
 kernel/trace/trace_boot.c   |    5 ++-
 kernel/trace/trace_events.c |   76 +++++++++++++++++++++++-----------------
 4 files changed, 104 insertions(+), 65 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 7b9dd6378849..8cec7bd70438 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3414,6 +3414,11 @@ int tracing_open_generic_tr(struct inode *inode, struct file *filp)
 	if (ret)
 		return ret;
 
+	if ((filp->f_mode & FMODE_WRITE) && trace_array_is_readonly(tr)) {
+		trace_array_put(tr);
+		return -EACCES;
+	}
+
 	filp->private_data = inode->i_private;
 
 	return 0;
@@ -6435,6 +6440,11 @@ static int tracing_clock_open(struct inode *inode, struct file *file)
 	if (ret)
 		return ret;
 
+	if ((file->f_mode & FMODE_WRITE) && trace_array_is_readonly(tr)) {
+		trace_array_put(tr);
+		return -EACCES;
+	}
+
 	ret = single_open(file, tracing_clock_show, inode->i_private);
 	if (ret < 0)
 		trace_array_put(tr);
@@ -8771,17 +8781,22 @@ static __init void create_trace_instances(struct dentry *d_tracer)
 static void
 init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
 {
+	umode_t writable_mode = TRACE_MODE_WRITE;
 	int cpu;
 
+	if (trace_array_is_readonly(tr))
+		writable_mode = TRACE_MODE_READ;
+
 	trace_create_file("available_tracers", TRACE_MODE_READ, d_tracer,
-			tr, &show_traces_fops);
+			  tr, &show_traces_fops);
 
-	trace_create_file("current_tracer", TRACE_MODE_WRITE, d_tracer,
-			tr, &set_tracer_fops);
+	trace_create_file("current_tracer", writable_mode, d_tracer,
+			  tr, &set_tracer_fops);
 
-	trace_create_file("tracing_cpumask", TRACE_MODE_WRITE, d_tracer,
+	trace_create_file("tracing_cpumask", writable_mode, d_tracer,
 			  tr, &tracing_cpumask_fops);
 
+	/* Options are used for changing the print format even for a readonly instance. */
 	trace_create_file("trace_options", TRACE_MODE_WRITE, d_tracer,
 			  tr, &tracing_iter_fops);
 
@@ -8791,12 +8806,36 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
 	trace_create_file("trace_pipe", TRACE_MODE_READ, d_tracer,
 			  tr, &tracing_pipe_fops);
 
-	trace_create_file("buffer_size_kb", TRACE_MODE_WRITE, d_tracer,
+	trace_create_file("buffer_size_kb", writable_mode, d_tracer,
 			  tr, &tracing_entries_fops);
 
 	trace_create_file("buffer_total_size_kb", TRACE_MODE_READ, d_tracer,
 			  tr, &tracing_total_entries_fops);
 
+	trace_create_file("trace_clock", writable_mode, d_tracer, tr,
+			  &trace_clock_fops);
+
+	trace_create_file("timestamp_mode", TRACE_MODE_READ, d_tracer, tr,
+			  &trace_time_stamp_mode_fops);
+
+	tr->buffer_percent = 50;
+
+	trace_create_file("buffer_subbuf_size_kb", writable_mode, d_tracer,
+			  tr, &buffer_subbuf_size_fops);
+
+	create_trace_options_dir(tr);
+
+	if (tr->range_addr_start)
+		trace_create_file("last_boot_info", TRACE_MODE_READ, d_tracer,
+				  tr, &last_boot_fops);
+
+	for_each_tracing_cpu(cpu)
+		tracing_init_tracefs_percpu(tr, cpu);
+
+	/* Read-only instance has above files only. */
+	if (trace_array_is_readonly(tr))
+		return;
+
 	trace_create_file("free_buffer", 0200, d_tracer,
 			  tr, &tracing_free_buffer_fops);
 
@@ -8808,49 +8847,29 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
 	trace_create_file("trace_marker_raw", 0220, d_tracer,
 			  tr, &tracing_mark_raw_fops);
 
-	trace_create_file("trace_clock", TRACE_MODE_WRITE, d_tracer, tr,
-			  &trace_clock_fops);
-
-	trace_create_file("tracing_on", TRACE_MODE_WRITE, d_tracer,
-			  tr, &rb_simple_fops);
-
-	trace_create_file("timestamp_mode", TRACE_MODE_READ, d_tracer, tr,
-			  &trace_time_stamp_mode_fops);
-
-	tr->buffer_percent = 50;
-
 	trace_create_file("buffer_percent", TRACE_MODE_WRITE, d_tracer,
-			tr, &buffer_percent_fops);
-
-	trace_create_file("buffer_subbuf_size_kb", TRACE_MODE_WRITE, d_tracer,
-			  tr, &buffer_subbuf_size_fops);
+			  tr, &buffer_percent_fops);
 
 	trace_create_file("syscall_user_buf_size", TRACE_MODE_WRITE, d_tracer,
-			 tr, &tracing_syscall_buf_fops);
+			  tr, &tracing_syscall_buf_fops);
 
-	create_trace_options_dir(tr);
+	trace_create_file("tracing_on", TRACE_MODE_WRITE, d_tracer,
+			  tr, &rb_simple_fops);
 
 	trace_create_maxlat_file(tr, d_tracer);
 
 	if (ftrace_create_function_files(tr, d_tracer))
 		MEM_FAIL(1, "Could not allocate function filter files");
 
-	if (tr->range_addr_start) {
-		trace_create_file("last_boot_info", TRACE_MODE_READ, d_tracer,
-				  tr, &last_boot_fops);
 #ifdef CONFIG_TRACER_SNAPSHOT
-	} else {
+	if (!tr->range_addr_start)
 		trace_create_file("snapshot", TRACE_MODE_WRITE, d_tracer,
 				  tr, &snapshot_fops);
 #endif
-	}
 
 	trace_create_file("error_log", TRACE_MODE_WRITE, d_tracer,
 			  tr, &tracing_err_log_fops);
 
-	for_each_tracing_cpu(cpu)
-		tracing_init_tracefs_percpu(tr, cpu);
-
 	ftrace_init_tracefs(tr, d_tracer);
 }
 
@@ -9635,7 +9654,7 @@ __init static void enable_instances(void)
 		 * Backup buffers can be freed but need vfree().
 		 */
 		if (backup)
-			tr->flags |= TRACE_ARRAY_FL_VMALLOC;
+			tr->flags |= TRACE_ARRAY_FL_VMALLOC | TRACE_ARRAY_FL_RDONLY;
 
 		if (start || backup) {
 			tr->flags |= TRACE_ARRAY_FL_BOOT | TRACE_ARRAY_FL_LAST_BOOT;
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index a3ea735a9ef6..2d9d26d423f1 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -464,6 +464,7 @@ enum {
 	TRACE_ARRAY_FL_MOD_INIT		= BIT(3),
 	TRACE_ARRAY_FL_MEMMAP		= BIT(4),
 	TRACE_ARRAY_FL_VMALLOC		= BIT(5),
+	TRACE_ARRAY_FL_RDONLY		= BIT(6),
 };
 
 #ifdef CONFIG_MODULES
@@ -493,6 +494,12 @@ extern unsigned long trace_adjust_address(struct trace_array *tr, unsigned long
 
 extern struct trace_array *printk_trace;
 
+static inline bool trace_array_is_readonly(struct trace_array *tr)
+{
+	/* backup instance is read only. */
+	return tr->flags & TRACE_ARRAY_FL_RDONLY;
+}
+
 /*
  * The global tracer (top) should be the first trace array added,
  * but we check the flag anyway.
diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
index dbe29b4c6a7a..2ca2541c8a58 100644
--- a/kernel/trace/trace_boot.c
+++ b/kernel/trace/trace_boot.c
@@ -61,7 +61,8 @@ trace_boot_set_instance_options(struct trace_array *tr, struct xbc_node *node)
 		v = memparse(p, NULL);
 		if (v < PAGE_SIZE)
 			pr_err("Buffer size is too small: %s\n", p);
-		if (tracing_resize_ring_buffer(tr, v, RING_BUFFER_ALL_CPUS) < 0)
+		if (trace_array_is_readonly(tr) ||
+		    tracing_resize_ring_buffer(tr, v, RING_BUFFER_ALL_CPUS) < 0)
 			pr_err("Failed to resize trace buffer to %s\n", p);
 	}
 
@@ -597,7 +598,7 @@ trace_boot_enable_tracer(struct trace_array *tr, struct xbc_node *node)
 
 	p = xbc_node_find_value(node, "tracer", NULL);
 	if (p && *p != '\0') {
-		if (tracing_set_tracer(tr, p) < 0)
+		if (trace_array_is_readonly(tr) || tracing_set_tracer(tr, p) < 0)
 			pr_err("Failed to set given tracer: %s\n", p);
 	}
 
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index de807a9e2371..7ddcee312471 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1401,6 +1401,9 @@ static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
 {
 	int ret;
 
+	if (trace_array_is_readonly(tr))
+		return -EACCES;
+
 	mutex_lock(&event_mutex);
 	ret = __ftrace_set_clr_event_nolock(tr, match, sub, event, set, mod);
 	mutex_unlock(&event_mutex);
@@ -2969,8 +2972,8 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
 	} else
 		__get_system(system);
 
-	/* ftrace only has directories no files */
-	if (strcmp(name, "ftrace") == 0)
+	/* ftrace only has directories, no files; the same applies to readonly instances. */
+	if (strcmp(name, "ftrace") == 0 || trace_array_is_readonly(tr))
 		nr_entries = 0;
 	else
 		nr_entries = ARRAY_SIZE(system_entries);
@@ -3135,28 +3138,30 @@ event_create_dir(struct eventfs_inode *parent, struct trace_event_file *file)
 	int ret;
 	static struct eventfs_entry event_entries[] = {
 		{
-			.name		= "enable",
+			.name		= "format",
 			.callback	= event_callback,
-			.release	= event_release,
 		},
+#ifdef CONFIG_PERF_EVENTS
 		{
-			.name		= "filter",
+			.name		= "id",
 			.callback	= event_callback,
 		},
+#endif
+#define NR_RO_EVENT_ENTRIES	(1 + IS_ENABLED(CONFIG_PERF_EVENTS))
+/* Readonly files must be above this line and counted by NR_RO_EVENT_ENTRIES. */
 		{
-			.name		= "trigger",
+			.name		= "enable",
 			.callback	= event_callback,
+			.release	= event_release,
 		},
 		{
-			.name		= "format",
+			.name		= "filter",
 			.callback	= event_callback,
 		},
-#ifdef CONFIG_PERF_EVENTS
 		{
-			.name		= "id",
+			.name		= "trigger",
 			.callback	= event_callback,
 		},
-#endif
 #ifdef CONFIG_HIST_TRIGGERS
 		{
 			.name		= "hist",
@@ -3189,7 +3194,10 @@ event_create_dir(struct eventfs_inode *parent, struct trace_event_file *file)
 	if (!e_events)
 		return -ENOMEM;
 
-	nr_entries = ARRAY_SIZE(event_entries);
+	if (trace_array_is_readonly(tr))
+		nr_entries = NR_RO_EVENT_ENTRIES;
+	else
+		nr_entries = ARRAY_SIZE(event_entries);
 
 	name = trace_event_name(call);
 	ei = eventfs_create_dir(name, e_events, event_entries, nr_entries, file);
@@ -4532,31 +4540,44 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
 	int nr_entries;
 	static struct eventfs_entry events_entries[] = {
 		{
-			.name		= "enable",
+			.name		= "header_page",
 			.callback	= events_callback,
 		},
 		{
-			.name		= "header_page",
+			.name		= "header_event",
 			.callback	= events_callback,
 		},
+#define NR_RO_TOP_ENTRIES	2
+/* Readonly files must be above this line and counted by NR_RO_TOP_ENTRIES. */
 		{
-			.name		= "header_event",
+			.name		= "enable",
 			.callback	= events_callback,
 		},
 	};
 
-	entry = trace_create_file("set_event", TRACE_MODE_WRITE, parent,
-				  tr, &ftrace_set_event_fops);
-	if (!entry)
-		return -ENOMEM;
+	if (!trace_array_is_readonly(tr)) {
+		entry = trace_create_file("set_event", TRACE_MODE_WRITE, parent,
+					tr, &ftrace_set_event_fops);
+		if (!entry)
+			return -ENOMEM;
 
-	trace_create_file("show_event_filters", TRACE_MODE_READ, parent, tr,
-			  &ftrace_show_event_filters_fops);
+		/* These are not as crucial; just warn if they are not created */
+		trace_create_file("show_event_filters", TRACE_MODE_READ, parent, tr,
+				&ftrace_show_event_filters_fops);
 
-	trace_create_file("show_event_triggers", TRACE_MODE_READ, parent, tr,
-			  &ftrace_show_event_triggers_fops);
+		trace_create_file("show_event_triggers", TRACE_MODE_READ, parent, tr,
+				&ftrace_show_event_triggers_fops);
 
-	nr_entries = ARRAY_SIZE(events_entries);
+		trace_create_file("set_event_pid", TRACE_MODE_WRITE, parent,
+				tr, &ftrace_set_event_pid_fops);
+
+		trace_create_file("set_event_notrace_pid",
+				TRACE_MODE_WRITE, parent, tr,
+				&ftrace_set_event_notrace_pid_fops);
+		nr_entries = ARRAY_SIZE(events_entries);
+	} else {
+		nr_entries = NR_RO_TOP_ENTRIES;
+	}
 
 	e_events = eventfs_create_events_dir("events", parent, events_entries,
 					     nr_entries, tr);
@@ -4565,15 +4586,6 @@ create_event_toplevel_files(struct dentry *parent, struct trace_array *tr)
 		return -ENOMEM;
 	}
 
-	/* There are not as crucial, just warn if they are not created */
-
-	trace_create_file("set_event_pid", TRACE_MODE_WRITE, parent,
-			  tr, &ftrace_set_event_pid_fops);
-
-	trace_create_file("set_event_notrace_pid",
-			  TRACE_MODE_WRITE, parent, tr,
-			  &ftrace_set_event_notrace_pid_fops);
-
 	tr->event_dir = e_events;
 
 	return 0;


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v9 2/3] tracing: Remove the backup instance automatically after read
  2026-03-31 16:32 [PATCH v9 0/3] tracing: Remove backup instance after read all Masami Hiramatsu (Google)
  2026-03-31 16:32 ` [PATCH v9 1/3] tracing: Make the backup instance non-reusable Masami Hiramatsu (Google)
@ 2026-03-31 16:32 ` Masami Hiramatsu (Google)
  2026-03-31 21:19   ` Steven Rostedt
  2026-03-31 16:32 ` [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance Masami Hiramatsu (Google)
  2 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2026-03-31 16:32 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Masami Hiramatsu, Mathieu Desnoyers, linux-kernel,
	linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Since the backup instance is read-only, no data is left on the
instance after all of it has been read via pipe. Thus the instance
can be removed safely once all files are closed.
This also removes it if the user resets the ring buffer manually
via the 'trace' file.
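The reference-count lifecycle this patch adds can be sketched as a userspace
model. This is illustrative only and simplified (the kernel additionally
checks that the autoremove workqueue exists before refusing a get); the
struct and function names are made up:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Model of the autoremove lifecycle: every open file holds a reference
 * on the trace array. Once free_on_close is set, new references are
 * refused with -ENODEV, and dropping to the last reference (ref == 1,
 * held by the tracefs directory itself) kicks the removal work.
 */
struct tr_model {
	int ref;
	bool free_on_close;
	bool removal_kicked;
};

static int tr_get(struct tr_model *tr)
{
	if (tr->free_on_close)
		return -ENODEV;	/* instance is already going away */
	tr->ref++;
	return 0;
}

static void tr_put(struct tr_model *tr)
{
	tr->ref--;
	if (tr->ref == 1 && tr->free_on_close)
		tr->removal_kicked = true;	/* queue_work() in the kernel */
}
```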

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 Changes in v9:
    - Fix to initialize the autoremove workqueue only for backup instances.
    - Fix to return -ENODEV if trace_array_get() refers to an instance being freed.
 Changes in v6:
   - Fix typo in comment.
   - Only when there is a readonly trace array, initialize autoremove_wq.
   - Fix to exit loop in trace_array_get() if tr is found in the list.
 Changes in v4:
   - Update description.
---
 kernel/trace/trace.c |   85 ++++++++++++++++++++++++++++++++++++++++++++++----
 kernel/trace/trace.h |    6 ++++
 2 files changed, 85 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8cec7bd70438..1d73400a01c7 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -539,8 +539,65 @@ void trace_set_ring_buffer_expanded(struct trace_array *tr)
 	tr->ring_buffer_expanded = true;
 }
 
+static int __remove_instance(struct trace_array *tr);
+
+static void trace_array_autoremove(struct work_struct *work)
+{
+	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);
+
+	guard(mutex)(&event_mutex);
+	guard(mutex)(&trace_types_lock);
+
+	/*
+	 * This can be fail if someone gets @tr before starting this
+	 * function, but in that case, this will be kicked again when
+	 * putting it. So we don't care about the result.
+	 */
+	__remove_instance(tr);
+}
+
+static struct workqueue_struct *autoremove_wq;
+
+static void trace_array_kick_autoremove(struct trace_array *tr)
+{
+	if (autoremove_wq && !work_pending(&tr->autoremove_work))
+		queue_work(autoremove_wq, &tr->autoremove_work);
+}
+
+static void trace_array_cancel_autoremove(struct trace_array *tr)
+{
+	if (autoremove_wq && work_pending(&tr->autoremove_work))
+		cancel_work(&tr->autoremove_work);
+}
+
+static void trace_array_init_autoremove(struct trace_array *tr)
+{
+	INIT_WORK(&tr->autoremove_work, trace_array_autoremove);
+}
+
+static void trace_array_start_autoremove(void)
+{
+	if (autoremove_wq)
+		return;
+
+	autoremove_wq = alloc_workqueue("tr_autoremove_wq",
+					WQ_UNBOUND | WQ_HIGHPRI, 0);
+	if (!autoremove_wq)
+		pr_warn("Unable to allocate tr_autoremove_wq. autoremove disabled.\n");
+}
+
 LIST_HEAD(ftrace_trace_arrays);
 
+static int __trace_array_get(struct trace_array *this_tr)
+{
+	/* When free_on_close is set, this is not available anymore. */
+	if (autoremove_wq && this_tr->free_on_close)
+		return -ENODEV;
+
+	this_tr->ref++;
+	return 0;
+}
+
 int trace_array_get(struct trace_array *this_tr)
 {
 	struct trace_array *tr;
@@ -548,8 +605,7 @@ int trace_array_get(struct trace_array *this_tr)
 	guard(mutex)(&trace_types_lock);
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (tr == this_tr) {
-			tr->ref++;
-			return 0;
+			return __trace_array_get(tr);
 		}
 	}
 
@@ -560,6 +616,12 @@ static void __trace_array_put(struct trace_array *this_tr)
 {
 	WARN_ON(!this_tr->ref);
 	this_tr->ref--;
+	/*
+	 * When free_on_close is set, prepare to remove the array
+	 * when the last reference is released.
+	 */
+	if (this_tr->ref == 1 && this_tr->free_on_close)
+		trace_array_kick_autoremove(this_tr);
 }
 
 /**
@@ -4829,6 +4891,10 @@ static void update_last_data(struct trace_array *tr)
 	/* Only if the buffer has previous boot data clear and update it. */
 	tr->flags &= ~TRACE_ARRAY_FL_LAST_BOOT;
 
+	/* If this is a backup instance, mark it for autoremove. */
+	if (tr->flags & TRACE_ARRAY_FL_VMALLOC)
+		tr->free_on_close = true;
+
 	/* Reset the module list and reload them */
 	if (tr->scratch) {
 		struct trace_scratch *tscratch = tr->scratch;
@@ -8442,8 +8508,8 @@ struct trace_array *trace_array_find_get(const char *instance)
 
 	guard(mutex)(&trace_types_lock);
 	tr = trace_array_find(instance);
-	if (tr)
-		tr->ref++;
+	if (tr && __trace_array_get(tr) < 0)
+		tr = NULL;
 
 	return tr;
 }
@@ -8540,6 +8606,8 @@ trace_array_create_systems(const char *name, const char *systems,
 	if (ftrace_allocate_ftrace_ops(tr) < 0)
 		goto out_free_tr;
 
+	trace_array_init_autoremove(tr);
+
 	ftrace_init_trace_array(tr);
 
 	init_trace_flags_index(tr);
@@ -8650,7 +8718,9 @@ struct trace_array *trace_array_get_by_name(const char *name, const char *system
 
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (tr->name && strcmp(tr->name, name) == 0) {
-			tr->ref++;
+			/* if this fails, @tr is going to be removed. */
+			if (__trace_array_get(tr) < 0)
+				tr = NULL;
 			return tr;
 		}
 	}
@@ -8689,6 +8759,7 @@ static int __remove_instance(struct trace_array *tr)
 			set_tracer_flag(tr, 1ULL << i, 0);
 	}
 
+	trace_array_cancel_autoremove(tr);
 	tracing_set_nop(tr);
 	clear_ftrace_function_probes(tr);
 	event_trace_del_tracer(tr);
@@ -9653,8 +9724,10 @@ __init static void enable_instances(void)
 		/*
 		 * Backup buffers can be freed but need vfree().
 		 */
-		if (backup)
+		if (backup) {
 			tr->flags |= TRACE_ARRAY_FL_VMALLOC | TRACE_ARRAY_FL_RDONLY;
+			trace_array_start_autoremove();
+		}
 
 		if (start || backup) {
 			tr->flags |= TRACE_ARRAY_FL_BOOT | TRACE_ARRAY_FL_LAST_BOOT;
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 2d9d26d423f1..60e079177492 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -455,6 +455,12 @@ struct trace_array {
 	 * we do not waste memory on systems that are not using tracing.
 	 */
 	bool ring_buffer_expanded;
+	/*
+	 * If the ring buffer is a read-only backup instance, it will be
+	 * removed after all data is dumped via pipe, as no data is left.
+	 */
+	bool free_on_close;
+	struct work_struct	autoremove_work;
 };
 
 enum {


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance
  2026-03-31 16:32 [PATCH v9 0/3] tracing: Remove backup instance after read all Masami Hiramatsu (Google)
  2026-03-31 16:32 ` [PATCH v9 1/3] tracing: Make the backup instance non-reusable Masami Hiramatsu (Google)
  2026-03-31 16:32 ` [PATCH v9 2/3] tracing: Remove the backup instance automatically after read Masami Hiramatsu (Google)
@ 2026-03-31 16:32 ` Masami Hiramatsu (Google)
  2026-03-31 21:21   ` Steven Rostedt
  2 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu (Google) @ 2026-03-31 16:32 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Masami Hiramatsu, Mathieu Desnoyers, linux-kernel,
	linux-trace-kernel

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Add a section about the backup instance to debugging.rst.

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
  Changes in v6:
   - Fix typos.
---
 Documentation/trace/debugging.rst |   19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/Documentation/trace/debugging.rst b/Documentation/trace/debugging.rst
index 4d88c346fc38..15857951b506 100644
--- a/Documentation/trace/debugging.rst
+++ b/Documentation/trace/debugging.rst
@@ -159,3 +159,22 @@ If setting it from the kernel command line, it is recommended to also
 disable tracing with the "traceoff" flag, and enable tracing after boot up.
 Otherwise the trace from the most recent boot will be mixed with the trace
 from the previous boot, and may make it confusing to read.
+
+Using a backup instance for keeping previous boot data
+------------------------------------------------------
+
+It is also possible to record trace data at system boot time by specifying
+events with the persistent ring buffer, but in this case the data before the
+reboot will be lost before it can be read. This problem can be solved by a
+backup instance. From the kernel command line::
+
+  reserve_mem=12M:4096:trace trace_instance=boot_map@trace,sched,irq trace_instance=backup=boot_map
+
+On boot up, the previous data in the "boot_map" is copied to the "backup"
+instance, and the "sched:*" and "irq:*" events for the current boot are traced
+in the "boot_map". Thus the user can read the previous boot data from the "backup"
+instance without stopping the trace.
+
+Note that this "backup" instance is readonly, and will be removed automatically
+if you clear the trace data or read out all trace data from the "trace_pipe"
+or the "trace_pipe_raw" files.
\ No newline at end of file


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 2/3] tracing: Remove the backup instance automatically after read
  2026-03-31 16:32 ` [PATCH v9 2/3] tracing: Remove the backup instance automatically after read Masami Hiramatsu (Google)
@ 2026-03-31 21:19   ` Steven Rostedt
  2026-04-01  3:19     ` Masami Hiramatsu
  0 siblings, 1 reply; 12+ messages in thread
From: Steven Rostedt @ 2026-03-31 21:19 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Wed,  1 Apr 2026 01:32:33 +0900
"Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:

> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 8cec7bd70438..1d73400a01c7 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -539,8 +539,65 @@ void trace_set_ring_buffer_expanded(struct trace_array *tr)
>  	tr->ring_buffer_expanded = true;
>  }
>  
> +static int __remove_instance(struct trace_array *tr);
> +
> +static void trace_array_autoremove(struct work_struct *work)
> +{
> +	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);
> +
> +	guard(mutex)(&event_mutex);
> +	guard(mutex)(&trace_types_lock);

Hmm, should we check whether the tr still exists? Couldn't the user delete
it via rmdir after the last file closed and this work was kicked?

  CPU 0							CPU 1
  -----							-----
  open(trace_pipe);
  read(..);
  close(trace_pipe);
     kick the work queue to delete it....
						rmdir();
							[instance deleted]

  __remove_instance();

   [ now the tr is freed, and the remove will crash!]


What would prevent this is to use trace_array_destroy(), which checks
for this and also takes the proper locking:

static void trace_array_autoremove(struct work_struct *work)
{
	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);

	trace_array_destroy(tr);
}


> +
> +	/*
> +	 * This can be fail if someone gets @tr before starting this
> +	 * function, but in that case, this will be kicked again when
> +	 * putting it. So we don't care about the result.
> +	 */
> +	__remove_instance(tr);
> +}
> +
> +static struct workqueue_struct *autoremove_wq;
> +
> +static void trace_array_kick_autoremove(struct trace_array *tr)
> +{
> +	if (autoremove_wq && !work_pending(&tr->autoremove_work))
> +		queue_work(autoremove_wq, &tr->autoremove_work);

Doesn't queue_work() check if it's pending? Do we really need to check it
twice?

> +}
> +
> +static void trace_array_cancel_autoremove(struct trace_array *tr)
> +{
> +	if (autoremove_wq && work_pending(&tr->autoremove_work))
> +		cancel_work(&tr->autoremove_work);

Same here; can't this be racy?

> +}
> +
> +static void trace_array_init_autoremove(struct trace_array *tr)
> +{
> +	INIT_WORK(&tr->autoremove_work, trace_array_autoremove);
> +}
> +
> +static void trace_array_start_autoremove(void)
> +{
> +	if (autoremove_wq)
> +		return;
> +
> +	autoremove_wq = alloc_workqueue("tr_autoremove_wq",
> +					WQ_UNBOUND | WQ_HIGHPRI, 0);
> +	if (!autoremove_wq)
> +		pr_warn("Unable to allocate tr_autoremove_wq. autoremove disabled.\n");
> +}
> +
>  LIST_HEAD(ftrace_trace_arrays);

-- Steve

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance
  2026-03-31 16:32 ` [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance Masami Hiramatsu (Google)
@ 2026-03-31 21:21   ` Steven Rostedt
  2026-04-01  3:15     ` Masami Hiramatsu
  0 siblings, 1 reply; 12+ messages in thread
From: Steven Rostedt @ 2026-03-31 21:21 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Wed,  1 Apr 2026 01:32:41 +0900
"Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:

> From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> 
> Add a section about backup instance to the debugging.rst.
> 
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> ---
>   Changes in v6:
>    - Fix typos.
> ---
>  Documentation/trace/debugging.rst |   19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/Documentation/trace/debugging.rst b/Documentation/trace/debugging.rst
> index 4d88c346fc38..15857951b506 100644
> --- a/Documentation/trace/debugging.rst
> +++ b/Documentation/trace/debugging.rst
> @@ -159,3 +159,22 @@ If setting it from the kernel command line, it is recommended to also
>  disable tracing with the "traceoff" flag, and enable tracing after boot up.
>  Otherwise the trace from the most recent boot will be mixed with the trace
>  from the previous boot, and may make it confusing to read.
> +
> +Using a backup instance for keeping previous boot data
> +------------------------------------------------------
> +
> +It is also possible to record trace data at system boot time by specifying
> +events with the persistent ring buffer, but in this case the data before the
> +reboot will be lost before it can be read. This problem can be solved by a
> +backup instance. From the kernel command line::
> +
> +  reserve_mem=12M:4096:trace trace_instance=boot_map@trace,sched,irq trace_instance=backup=boot_map

The above didn't actually work well without my patch to enable events on
the persistent ring buffer with the backup. But keep it, as it will work
with my patch ;-)

-- Steve


> +
> +On boot up, the previous data in the "boot_map" is copied to the "backup"
> +instance, and the "sched:*" and "irq:*" events for the current boot are traced
> +in the "boot_map". Thus the user can read the previous boot data from the "backup"
> +instance without stopping the trace.
> +
> +Note that this "backup" instance is readonly, and will be removed automatically
> +if you clear the trace data or read out all trace data from the "trace_pipe"
> +or the "trace_pipe_raw" files.
> \ No newline at end of file


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance
  2026-03-31 21:21   ` Steven Rostedt
@ 2026-04-01  3:15     ` Masami Hiramatsu
  2026-04-01 14:41       ` Steven Rostedt
  0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2026-04-01  3:15 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Tue, 31 Mar 2026 17:21:05 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Wed,  1 Apr 2026 01:32:41 +0900
> "Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:
> 
> > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > 
> > Add a section about backup instance to the debugging.rst.
> > 
> > Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > ---
> >   Changes in v6:
> >    - Fix typos.
> > ---
> >  Documentation/trace/debugging.rst |   19 +++++++++++++++++++
> >  1 file changed, 19 insertions(+)
> > 
> > diff --git a/Documentation/trace/debugging.rst b/Documentation/trace/debugging.rst
> > index 4d88c346fc38..15857951b506 100644
> > --- a/Documentation/trace/debugging.rst
> > +++ b/Documentation/trace/debugging.rst
> > @@ -159,3 +159,22 @@ If setting it from the kernel command line, it is recommended to also
> >  disable tracing with the "traceoff" flag, and enable tracing after boot up.
> >  Otherwise the trace from the most recent boot will be mixed with the trace
> >  from the previous boot, and may make it confusing to read.
> > +
> > +Using a backup instance for keeping previous boot data
> > +------------------------------------------------------
> > +
> > +It is also possible to record trace data at system boot time by specifying
> > +events with the persistent ring buffer, but in this case the data before the
> > +reboot will be lost before it can be read. This problem can be solved by a
> > +backup instance. From the kernel command line::
> > +
> > +  reserve_mem=12M:4096:trace trace_instance=boot_map@trace,sched,irq trace_instance=backup=boot_map
> 
> The above didn't actually work well without my patch to enable events on
> the persistent ring buffer with the backup. But keep it, as it will work
> with my patch ;-)

Ah, thanks!
Let me rebase on it.


> 
> -- Steve
> 
> 
> > +
> > +On boot up, the previous data in the "boot_map" is copied to the "backup"
> > +instance, and the "sched:*" and "irq:*" events for the current boot are traced
> > +in the "boot_map". Thus the user can read the previous boot data from the "backup"
> > +instance without stopping the trace.
> > +
> > +Note that this "backup" instance is readonly, and will be removed automatically
> > +if you clear the trace data or read out all trace data from the "trace_pipe"
> > +or the "trace_pipe_raw" files.
> > \ No newline at end of file
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 2/3] tracing: Remove the backup instance automatically after read
  2026-03-31 21:19   ` Steven Rostedt
@ 2026-04-01  3:19     ` Masami Hiramatsu
  2026-04-01 14:40       ` Steven Rostedt
  0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2026-04-01  3:19 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Tue, 31 Mar 2026 17:19:36 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Wed,  1 Apr 2026 01:32:33 +0900
> "Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:
> 
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 8cec7bd70438..1d73400a01c7 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -539,8 +539,65 @@ void trace_set_ring_buffer_expanded(struct trace_array *tr)
> >  	tr->ring_buffer_expanded = true;
> >  }
> >  
> > +static int __remove_instance(struct trace_array *tr);
> > +
> > +static void trace_array_autoremove(struct work_struct *work)
> > +{
> > +	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);
> > +
> > +	guard(mutex)(&event_mutex);
> > +	guard(mutex)(&trace_types_lock);
> 
> Hmm, should we do a check if the tr still exists? Couldn't the user delete
> this via a rmdir after the last file closed and this was kicked?
> 
>   CPU 0							CPU 1
>   -----							-----
>   open(trace_pipe);
>   read(..);
>   close(trace_pipe);
>      kick the work queue to delete it....
> 						rmdir();
> 							[instance deleted]

I thought this requires trace_types_lock, and after the queue is kicked,
can rmdir() still get the tr? (__trace_array_get() returns an error if
tr->free_on_close is set)

> 
>   __remove_instance();
> 
>    [ now the tr is freed, and the remove will crash!]
> 
> 
> What would prevent this is to use trace_array_destroy(), which checks
> this and also adds the proper locking:
> 
> static void trace_array_autoremove(struct work_struct *work)
> {
> 	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);
> 
> 	trace_array_destroy(tr);
> }

OK, let's use it.

> 
> 
> > +
> > +	/*
> > +	 * This can fail if someone gets @tr before this function
> > +	 * starts, but in that case, this will be kicked again when
> > +	 * putting it. So we don't care about the result.
> > +	 */
> > +	__remove_instance(tr);
> > +}
> > +
> > +static struct workqueue_struct *autoremove_wq;
> > +
> > +static void trace_array_kick_autoremove(struct trace_array *tr)
> > +{
> > +	if (autoremove_wq && !work_pending(&tr->autoremove_work))
> > +		queue_work(autoremove_wq, &tr->autoremove_work);
> 
> Doesn't queue_work() check if it's pending? Do we really need to check it
> twice?

Indeed, it checked the flag.

> 
> > +}
> > +
> > +static void trace_array_cancel_autoremove(struct trace_array *tr)
> > +{
> > +	if (autoremove_wq && work_pending(&tr->autoremove_work))
> > +		cancel_work(&tr->autoremove_work);
> 
> Same here; can't this be racy?

Yeah, and this should use cancel_work_sync().

> 
> > +}
> > +
> > +static void trace_array_init_autoremove(struct trace_array *tr)
> > +{
> > +	INIT_WORK(&tr->autoremove_work, trace_array_autoremove);
> > +}
> > +
> > +static void trace_array_start_autoremove(void)
> > +{
> > +	if (autoremove_wq)
> > +		return;
> > +
> > +	autoremove_wq = alloc_workqueue("tr_autoremove_wq",
> > +					WQ_UNBOUND | WQ_HIGHPRI, 0);
> > +	if (!autoremove_wq)
> > +		pr_warn("Unable to allocate tr_autoremove_wq. autoremove disabled.\n");
> > +}
> > +
> >  LIST_HEAD(ftrace_trace_arrays);
> 
> -- Steve
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 2/3] tracing: Remove the backup instance automatically after read
  2026-04-01  3:19     ` Masami Hiramatsu
@ 2026-04-01 14:40       ` Steven Rostedt
  2026-04-02 13:19         ` Masami Hiramatsu
  0 siblings, 1 reply; 12+ messages in thread
From: Steven Rostedt @ 2026-04-01 14:40 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Wed, 1 Apr 2026 12:19:57 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> > 
> >   CPU 0							CPU 1
> >   -----							-----
> >   open(trace_pipe);
> >   read(..);
> >   close(trace_pipe);
> >      kick the work queue to delete it....
> > 						rmdir();
> > 							[instance deleted]  
> 
> I thought this requires trace_types_lock, and after the queue is kicked,
> can rmdir() still get the tr? (__trace_array_get() returns an error if
> tr->free_on_close is set)

rmdir() doesn't use __trace_array_get(), it uses trace_array_find() which
we shouldn't need to modify.

static int instance_rmdir(const char *name)
{
	struct trace_array *tr;

	guard(mutex)(&event_mutex);
	guard(mutex)(&trace_types_lock);

	tr = trace_array_find(name);
	if (!tr)
		return -ENODEV;

	return __remove_instance(tr);
}

> 
> > 
> >   __remove_instance();
> > 
> >    [ now the tr is freed, and the remove will crash!]
> > 
> > 
> > What would prevent this is to use trace_array_destroy(), which checks
> > this and also adds the proper locking:
> > 
> > static void trace_array_autoremove(struct work_struct *work)
> > {
> > 	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);
> > 
> > 	trace_array_destroy(tr);
> > }  
> 
> OK, let's use it.

Yes, using trace_array_destroy() will fix this.

Thanks,

-- Steve

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance
  2026-04-01  3:15     ` Masami Hiramatsu
@ 2026-04-01 14:41       ` Steven Rostedt
  0 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2026-04-01 14:41 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Wed, 1 Apr 2026 12:15:41 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:

> > > +It is also possible to record trace data at system boot time by specifying
> > > +events with the persistent ring buffer, but in this case the data before the
> > > +reboot will be lost before it can be read. This problem can be solved by a
> > > +backup instance. From the kernel command line::
> > > +
> > > +  reserve_mem=12M:4096:trace trace_instance=boot_map@trace,sched,irq trace_instance=backup=boot_map  
> > 
> > The above didn't actually work well without my patch to enable events on
> > the persistent ring buffer with the backup. But keep it, as it will work
> > with my patch ;-)  
> 
> Ah, thanks!
> Let me rebase on it.

Well, my patch hasn't been added yet ;-)  But don't change anything. When
this goes upstream both patches should have been applied.

-- Steve

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 2/3] tracing: Remove the backup instance automatically after read
  2026-04-01 14:40       ` Steven Rostedt
@ 2026-04-02 13:19         ` Masami Hiramatsu
  2026-04-02 14:52           ` Steven Rostedt
  0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2026-04-02 13:19 UTC (permalink / raw)
  To: Steven Rostedt; +Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Wed, 1 Apr 2026 10:40:01 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> On Wed, 1 Apr 2026 12:19:57 +0900
> Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:
> 
> > > 
> > >   CPU 0							CPU 1
> > >   -----							-----
> > >   open(trace_pipe);
> > >   read(..);
> > >   close(trace_pipe);
> > >      kick the work queue to delete it....
> > > 						rmdir();
> > > 							[instance deleted]  
> > 
> > I thought this requires trace_types_lock, and after the queue is kicked,
> > can rmdir() still get the tr? (__trace_array_get() returns an error if
> > tr->free_on_close is set)
> 
> rmdir() doesn't use __trace_array_get(), it uses trace_array_find() which
> we shouldn't need to modify.
> 
> static int instance_rmdir(const char *name)
> {
> 	struct trace_array *tr;
> 
> 	guard(mutex)(&event_mutex);
> 	guard(mutex)(&trace_types_lock);
> 
> 	tr = trace_array_find(name);
> 	if (!tr)
> 		return -ENODEV;
> 
> 	return __remove_instance(tr);
> }

Oops, OK it must be updated too.

Thanks,

> 
> > 
> > > 
> > >   __remove_instance();
> > > 
> > >    [ now the tr is freed, and the remove will crash!]
> > > 
> > > 
> > > What would prevent this is to use trace_array_destroy(), which checks
> > > this and also adds the proper locking:
> > > 
> > > static void trace_array_autoremove(struct work_struct *work)
> > > {
> > > 	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);
> > > 
> > > 	trace_array_destroy(tr);
> > > }  
> > 
> > OK, let's use it.
> 
> Yes, by using trace_array_destroy(), it will fix this.
> 
> Thanks,
> 
> -- Steve


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v9 2/3] tracing: Remove the backup instance automatically after read
  2026-04-02 13:19         ` Masami Hiramatsu
@ 2026-04-02 14:52           ` Steven Rostedt
  0 siblings, 0 replies; 12+ messages in thread
From: Steven Rostedt @ 2026-04-02 14:52 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Mathieu Desnoyers, linux-kernel, linux-trace-kernel

On Thu, 2 Apr 2026 22:19:43 +0900
Masami Hiramatsu (Google) <mhiramat@kernel.org> wrote:
> > 
> > rmdir() doesn't use __trace_array_get(), it uses trace_array_find() which
> > we shouldn't need to modify.
> > 


> Oops, OK it must be updated too.

No it doesn't. Use trace_array_destroy() (as mentioned below) and all will
be fine.

-- Steve

> > > > 
> > > > What would prevent this is to use trace_array_destroy(), which checks
> > > > this and also adds the proper locking:
> > > > 
> > > > static void trace_array_autoremove(struct work_struct *work)
> > > > {
> > > > 	struct trace_array *tr = container_of(work, struct trace_array, autoremove_work);
> > > > 
> > > > 	trace_array_destroy(tr);
> > > > }    
> > > 
> > > OK, let's use it.  
> > 
> > Yes, by using trace_array_destroy(), it will fix this.
> > 

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2026-04-02 14:51 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-31 16:32 [PATCH v9 0/3] tracing: Remove backup instance after read all Masami Hiramatsu (Google)
2026-03-31 16:32 ` [PATCH v9 1/3] tracing: Make the backup instance non-reusable Masami Hiramatsu (Google)
2026-03-31 16:32 ` [PATCH v9 2/3] tracing: Remove the backup instance automatically after read Masami Hiramatsu (Google)
2026-03-31 21:19   ` Steven Rostedt
2026-04-01  3:19     ` Masami Hiramatsu
2026-04-01 14:40       ` Steven Rostedt
2026-04-02 13:19         ` Masami Hiramatsu
2026-04-02 14:52           ` Steven Rostedt
2026-03-31 16:32 ` [PATCH v9 3/3] tracing/Documentation: Add a section about backup instance Masami Hiramatsu (Google)
2026-03-31 21:21   ` Steven Rostedt
2026-04-01  3:15     ` Masami Hiramatsu
2026-04-01 14:41       ` Steven Rostedt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox