linux-trace-kernel.vger.kernel.org archive mirror
* [PATCH 00/12] tracing: Remove most uses of "disabled" field
@ 2025-05-02 20:51 Steven Rostedt
  2025-05-02 20:51 ` [PATCH 01/12] tracing/mmiotrace: Remove reference to unused per CPU data pointer Steven Rostedt
                   ` (11 more replies)
  0 siblings, 12 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton


Looking into allowing syscall events to fault and read user space, I found
that the use of the per CPU data "disabled" field was mostly obsolete.
This goes back to 2008 when the tracing subsystem was first created.
The "disabled" field was the only way to know if tracing was disabled or
not. But things have changed in the last 17 years! The ring buffer itself
can disable tracing, and for the most part, that is what determines if
tracing is enabled or not.
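
For reference, the ring buffer already exposes this switch through its own
API. The declarations below are from include/linux/ring_buffer.h; the
caller at the end is only a sketch of mine, not code from this series:

    void ring_buffer_record_off(struct trace_buffer *buffer);
    void ring_buffer_record_on(struct trace_buffer *buffer);
    bool ring_buffer_record_is_on(struct trace_buffer *buffer);

    /* A writer can simply ask the ring buffer itself: */
    if (!ring_buffer_record_is_on(tr->array_buffer.buffer))
            return;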

Now the stack tracer and latency tracers still use the disabled field to
prevent corruption while they do their per CPU accounting.

This series removes most uses of the disabled field. It also does various
cleanups, like converting the disabled field from an atomic_t type to a
local_t type, as it is only used to synchronize with interrupts and such.
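
As a rough illustration of that conversion (a sketch of the pattern only,
not an actual hunk from this series; do_accounting() is a hypothetical
stand-in for the tracer's per CPU work):

    #include <linux/atomic.h>
    #include <asm/local.h>

    static atomic_t dis_atomic = ATOMIC_INIT(0);    /* before */
    static local_t  dis_local  = LOCAL_INIT(0);     /* after  */

    static void account_before(void)
    {
            /* atomic_t orders against all CPUs: more than is needed */
            if (atomic_inc_return(&dis_atomic) == 1)
                    do_accounting();
            atomic_dec(&dis_atomic);
    }

    static void account_after(void)
    {
            /* local_t only synchronizes against this CPU's irqs/NMIs */
            if (local_inc_return(&dis_local) == 1)
                    do_accounting();
            local_dec(&dis_local);
    }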

Also, while inspecting the per CPU data, I realized that there's a
"buffer_page" field that was supposed to hold the reader page so that it
could be reused. But the ring buffer infrastructure already does that
itself, so this field is unneeded and is removed.

Note, with this change, the trace events shouldn't need to be called with
preemption disabled anymore. This should allow the syscall trace event to be
updated to read user memory. The tracing code still has some paths that
require preemption to be disabled, but it handles that internally and
doesn't expect its functions to be called with preemption disabled.

Steven Rostedt (12):
      tracing/mmiotrace: Remove reference to unused per CPU data pointer
      tracing: Do not bother setting "disabled" field for ftrace_dump_one()
      ftrace: Do not bother checking per CPU "disabled" flag
      tracing: Just use this_cpu_read() to access ignore_pid
      tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled
      ftrace: Do not disable function graph based on "disabled" field
      tracing: Do not use per CPU array_buffer.data->disabled for cpumask
      ring-buffer: Add ring_buffer_record_is_on_cpu()
      tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field
      tracing: Convert the per CPU "disabled" counter to local from atomic
      tracing: Use local_inc_return() for updating "disabled" counter in irqsoff tracer
      tracing: Remove unused buffer_page field from trace_array_cpu structure

----
 include/linux/ring_buffer.h          |  1 +
 kernel/trace/ring_buffer.c           | 18 ++++++++++++++
 kernel/trace/trace.c                 | 11 +--------
 kernel/trace/trace.h                 | 18 ++++++++++++--
 kernel/trace/trace_branch.c          |  4 +--
 kernel/trace/trace_events.c          |  9 ++++---
 kernel/trace/trace_functions.c       | 24 ++++++------------
 kernel/trace/trace_functions_graph.c | 38 +++++++----------------------
 kernel/trace/trace_irqsoff.c         | 47 +++++++++++++++++++++---------------
 kernel/trace/trace_kdb.c             |  8 ++----
 kernel/trace/trace_mmiotrace.c       | 12 ++-------
 kernel/trace/trace_sched_wakeup.c    | 18 +++++++-------
 12 files changed, 98 insertions(+), 110 deletions(-)

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH 01/12] tracing/mmiotrace: Remove reference to unused per CPU data pointer
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 02/12] tracing: Do not bother setting "disabled" field for ftrace_dump_one() Steven Rostedt
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The mmiotracer referenced the per CPU array_buffer->data descriptor but
never actually used it. Remove the references to it.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_mmiotrace.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/kernel/trace/trace_mmiotrace.c b/kernel/trace/trace_mmiotrace.c
index ba5858866b2f..c706544be60c 100644
--- a/kernel/trace/trace_mmiotrace.c
+++ b/kernel/trace/trace_mmiotrace.c
@@ -291,7 +291,6 @@ __init static int init_mmio_trace(void)
 device_initcall(init_mmio_trace);
 
 static void __trace_mmiotrace_rw(struct trace_array *tr,
-				struct trace_array_cpu *data,
 				struct mmiotrace_rw *rw)
 {
 	struct trace_buffer *buffer = tr->array_buffer.buffer;
@@ -315,12 +314,10 @@ static void __trace_mmiotrace_rw(struct trace_array *tr,
 void mmio_trace_rw(struct mmiotrace_rw *rw)
 {
 	struct trace_array *tr = mmio_trace_array;
-	struct trace_array_cpu *data = per_cpu_ptr(tr->array_buffer.data, smp_processor_id());
-	__trace_mmiotrace_rw(tr, data, rw);
+	__trace_mmiotrace_rw(tr, rw);
 }
 
 static void __trace_mmiotrace_map(struct trace_array *tr,
-				struct trace_array_cpu *data,
 				struct mmiotrace_map *map)
 {
 	struct trace_buffer *buffer = tr->array_buffer.buffer;
@@ -344,12 +341,7 @@ static void __trace_mmiotrace_map(struct trace_array *tr,
 void mmio_trace_mapping(struct mmiotrace_map *map)
 {
 	struct trace_array *tr = mmio_trace_array;
-	struct trace_array_cpu *data;
-
-	preempt_disable();
-	data = per_cpu_ptr(tr->array_buffer.data, smp_processor_id());
-	__trace_mmiotrace_map(tr, data, map);
-	preempt_enable();
+	__trace_mmiotrace_map(tr, map);
 }
 
 int mmio_trace_printk(const char *fmt, va_list args)
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 02/12] tracing: Do not bother setting "disabled" field for ftrace_dump_one()
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
  2025-05-02 20:51 ` [PATCH 01/12] tracing/mmiotrace: Remove reference to unused per CPU data pointer Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 03/12] ftrace: Do not bother checking per CPU "disabled" flag Steven Rostedt
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The per CPU "disabled" value was the original way to disable tracing when
the tracing subsystem was first created. Today, the ring buffer
infrastructure has its own way to disable tracing. In fact, things have
changed so much since 2008 that many things ignore the disable flag.

The ftrace_dump_one() function iterates over all the current tracing CPUs and
increments the "disabled" counter before doing the dump, and decrements it
afterward.

As the disabled flag can be ignored, doing this today is not reliable.
The code already calls tracer_tracing_off(), which disables the ring
buffer, so there's no reason to also use the "disabled" flag.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 8ddf6b17215c..bae32778b292 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -10445,7 +10445,7 @@ static void ftrace_dump_one(struct trace_array *tr, enum ftrace_dump_mode dump_m
 	static struct trace_iterator iter;
 	unsigned int old_userobj;
 	unsigned long flags;
-	int cnt = 0, cpu;
+	int cnt = 0;
 
 	/*
 	 * Always turn off tracing when we dump.
@@ -10462,10 +10462,6 @@ static void ftrace_dump_one(struct trace_array *tr, enum ftrace_dump_mode dump_m
 	/* Simulate the iterator */
 	trace_init_iter(&iter, tr);
 
-	for_each_tracing_cpu(cpu) {
-		atomic_inc(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
-	}
-
 	old_userobj = tr->trace_flags & TRACE_ITER_SYM_USEROBJ;
 
 	/* don't look at user memory in panic mode */
@@ -10523,9 +10519,6 @@ static void ftrace_dump_one(struct trace_array *tr, enum ftrace_dump_mode dump_m
 
 	tr->trace_flags |= old_userobj;
 
-	for_each_tracing_cpu(cpu) {
-		atomic_dec(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
-	}
 	local_irq_restore(flags);
 }
 
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 03/12] ftrace: Do not bother checking per CPU "disabled" flag
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
  2025-05-02 20:51 ` [PATCH 01/12] tracing/mmiotrace: Remove reference to unused per CPU data pointer Steven Rostedt
  2025-05-02 20:51 ` [PATCH 02/12] tracing: Do not bother setting "disabled" field for ftrace_dump_one() Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 04/12] tracing: Just use this_cpu_read() to access ignore_pid Steven Rostedt
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The per CPU "disabled" value was the original way to disable tracing when
the tracing subsystem was first created. Today, the ring buffer
infrastructure has its own way to disable tracing. In fact, things have
changed so much since 2008 that many things ignore the disable flag.

There's no reason for the function tracer to check it; if tracing is
disabled, the ring buffer will not record the event anyway.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_functions.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index 98ccf3f00c51..bd153219a712 100644
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -209,7 +209,6 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
 		    struct ftrace_ops *op, struct ftrace_regs *fregs)
 {
 	struct trace_array *tr = op->private;
-	struct trace_array_cpu *data;
 	unsigned int trace_ctx;
 	int bit;
 
@@ -224,9 +223,7 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
 
 	trace_ctx = tracing_gen_ctx_dec();
 
-	data = this_cpu_ptr(tr->array_buffer.data);
-	if (!atomic_read(&data->disabled))
-		trace_function(tr, ip, parent_ip, trace_ctx, NULL);
+	trace_function(tr, ip, parent_ip, trace_ctx, NULL);
 
 	ftrace_test_recursion_unlock(bit);
 }
@@ -236,10 +233,8 @@ function_args_trace_call(unsigned long ip, unsigned long parent_ip,
 			 struct ftrace_ops *op, struct ftrace_regs *fregs)
 {
 	struct trace_array *tr = op->private;
-	struct trace_array_cpu *data;
 	unsigned int trace_ctx;
 	int bit;
-	int cpu;
 
 	if (unlikely(!tr->function_enabled))
 		return;
@@ -250,10 +245,7 @@ function_args_trace_call(unsigned long ip, unsigned long parent_ip,
 
 	trace_ctx = tracing_gen_ctx();
 
-	cpu = smp_processor_id();
-	data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	if (!atomic_read(&data->disabled))
-		trace_function(tr, ip, parent_ip, trace_ctx, fregs);
+	trace_function(tr, ip, parent_ip, trace_ctx, fregs);
 
 	ftrace_test_recursion_unlock(bit);
 }
@@ -352,7 +344,6 @@ function_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
 {
 	struct trace_func_repeats *last_info;
 	struct trace_array *tr = op->private;
-	struct trace_array_cpu *data;
 	unsigned int trace_ctx;
 	int bit;
 
@@ -364,8 +355,7 @@ function_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
 		return;
 
 	parent_ip = function_get_true_parent_ip(parent_ip, fregs);
-	data = this_cpu_ptr(tr->array_buffer.data);
-	if (atomic_read(&data->disabled))
+	if (!tracer_tracing_is_on(tr))
 		goto out;
 
 	/*
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 04/12] tracing: Just use this_cpu_read() to access ignore_pid
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (2 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 03/12] ftrace: Do not bother checking per CPU "disabled" flag Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled Steven Rostedt
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The ignore_pid boolean on the per CPU data descriptor is updated at
sched_switch when a new task is scheduled in. If the new task is to be
ignored, it is set to true, otherwise it is set to false. The current task
should always have the correct value as it is updated when the task is
scheduled in.

Instead of reading this value through this_cpu_ptr(), which requires
preemption to be disabled around the dereference, just use this_cpu_read(),
which gives a snapshot of the value. Since the value will always be correct
for a given task (because it's updated at sched switch) it doesn't need
preemption disabled.

This will also allow trace events to be called with preemption enabled.
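
To illustrate the difference (a hypothetical caller; only the final line
matches what this patch actually does):

    /*
     * this_cpu_ptr() hands back a pointer, so the task must not migrate
     * between taking the pointer and dereferencing it:
     */
    preempt_disable();
    data = this_cpu_ptr(tr->array_buffer.data);
    ignore = data->ignore_pid;
    preempt_enable();

    /* this_cpu_read() is a single self-contained snapshot instead: */
    ignore = this_cpu_read(tr->array_buffer.data->ignore_pid);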

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_events.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 069e92856bda..fe0ea14d809e 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -622,7 +622,6 @@ EXPORT_SYMBOL_GPL(trace_event_raw_init);
 bool trace_event_ignore_this_pid(struct trace_event_file *trace_file)
 {
 	struct trace_array *tr = trace_file->tr;
-	struct trace_array_cpu *data;
 	struct trace_pid_list *no_pid_list;
 	struct trace_pid_list *pid_list;
 
@@ -632,9 +631,11 @@ bool trace_event_ignore_this_pid(struct trace_event_file *trace_file)
 	if (!pid_list && !no_pid_list)
 		return false;
 
-	data = this_cpu_ptr(tr->array_buffer.data);
-
-	return data->ignore_pid;
+	/*
+	 * This is recorded at every sched_switch for this task.
+	 * Thus, even if the task migrates the ignore value will be the same.
+	 */
+	return this_cpu_read(tr->array_buffer.data->ignore_pid) != 0;
 }
 EXPORT_SYMBOL_GPL(trace_event_ignore_this_pid);
 
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (3 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 04/12] tracing: Just use this_cpu_read() to access ignore_pid Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-05  5:05   ` kernel test robot
  2025-05-05 15:42   ` Doug Anderson
  2025-05-02 20:51 ` [PATCH 06/12] ftrace: Do not disable function graph based on "disabled" field Steven Rostedt
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Jason Wessel, Daniel Thompson, Douglas Anderson

From: Steven Rostedt <rostedt@goodmis.org>

The per CPU "disabled" value was the original way to disable tracing when
the tracing subsystem was first created. Today, the ring buffer
infrastructure has its own way to disable tracing. In fact, things have
changed so much since 2008 that many things ignore the disable flag.

The kdb_ftdump() function iterates over all the current tracing CPUs and
increments the "disabled" counter before doing the dump, and decrements it
afterward.

As the disabled flag can be ignored, doing this today is not reliable.
Instead, simply call tracer_tracing_off() and then tracer_tracing_on() to
disable and then enable the entire ring buffer in one go!

Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Daniel Thompson <danielt@kernel.org>
Cc: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_kdb.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
index 1e72d20b3c2f..b5cf3fdde8cb 100644
--- a/kernel/trace/trace_kdb.c
+++ b/kernel/trace/trace_kdb.c
@@ -120,9 +120,7 @@ static int kdb_ftdump(int argc, const char **argv)
 	trace_init_global_iter(&iter);
 	iter.buffer_iter = buffer_iter;
 
-	for_each_tracing_cpu(cpu) {
-		atomic_inc(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
-	}
+	tracer_tracing_off(iter.tr);
 
 	/* A negative skip_entries means skip all but the last entries */
 	if (skip_entries < 0) {
@@ -135,9 +133,7 @@ static int kdb_ftdump(int argc, const char **argv)
 
 	ftrace_dump_buf(skip_entries, cpu_file);
 
-	for_each_tracing_cpu(cpu) {
-		atomic_dec(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
-	}
+	tracer_tracing_on(iter.tr);
 
 	kdb_trap_printk--;
 
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 06/12] ftrace: Do not disable function graph based on "disabled" field
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (4 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 07/12] tracing: Do not use per CPU array_buffer.data->disabled for cpumask Steven Rostedt
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The per CPU "disabled" value was the original way to disable tracing when
the tracing subsystem was first created. Today, the ring buffer
infrastructure has its own way to disable tracing. In fact, things have
changed so much since 2008 that many things ignore the disable flag.

Do not bother disabling the function graph tracer if the per CPU disabled
field is set. Just record as normal. If tracing is disabled in the ring
buffer it will not be recorded.

Also, when tracing is enabled again, the return call of the function will
not be dropped.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_functions_graph.c | 38 +++++++---------------------
 1 file changed, 9 insertions(+), 29 deletions(-)

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 0c357a89c58e..9234e2c39abf 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -202,12 +202,9 @@ static int graph_entry(struct ftrace_graph_ent *trace,
 {
 	unsigned long *task_var = fgraph_get_task_var(gops);
 	struct trace_array *tr = gops->private;
-	struct trace_array_cpu *data;
 	struct fgraph_times *ftimes;
 	unsigned int trace_ctx;
-	long disabled;
 	int ret = 0;
-	int cpu;
 
 	if (*task_var & TRACE_GRAPH_NOTRACE)
 		return 0;
@@ -257,21 +254,14 @@ static int graph_entry(struct ftrace_graph_ent *trace,
 	if (tracing_thresh)
 		return 1;
 
-	preempt_disable_notrace();
-	cpu = raw_smp_processor_id();
-	data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_read(&data->disabled);
-	if (likely(!disabled)) {
-		trace_ctx = tracing_gen_ctx();
-		if (IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
-		    tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR)) {
-			unsigned long retaddr = ftrace_graph_top_ret_addr(current);
-			ret = __trace_graph_retaddr_entry(tr, trace, trace_ctx, retaddr);
-		} else {
-			ret = __graph_entry(tr, trace, trace_ctx, fregs);
-		}
+	trace_ctx = tracing_gen_ctx();
+	if (IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
+	    tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR)) {
+		unsigned long retaddr = ftrace_graph_top_ret_addr(current);
+		ret = __trace_graph_retaddr_entry(tr, trace, trace_ctx, retaddr);
+	} else {
+		ret = __graph_entry(tr, trace, trace_ctx, fregs);
 	}
-	preempt_enable_notrace();
 
 	return ret;
 }
@@ -351,13 +341,10 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 {
 	unsigned long *task_var = fgraph_get_task_var(gops);
 	struct trace_array *tr = gops->private;
-	struct trace_array_cpu *data;
 	struct fgraph_times *ftimes;
 	unsigned int trace_ctx;
 	u64 calltime, rettime;
-	long disabled;
 	int size;
-	int cpu;
 
 	rettime = trace_clock_local();
 
@@ -376,15 +363,8 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 
 	calltime = ftimes->calltime;
 
-	preempt_disable_notrace();
-	cpu = raw_smp_processor_id();
-	data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_read(&data->disabled);
-	if (likely(!disabled)) {
-		trace_ctx = tracing_gen_ctx();
-		__trace_graph_return(tr, trace, trace_ctx, calltime, rettime);
-	}
-	preempt_enable_notrace();
+	trace_ctx = tracing_gen_ctx();
+	__trace_graph_return(tr, trace, trace_ctx, calltime, rettime);
 }
 
 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 07/12] tracing: Do not use per CPU array_buffer.data->disabled for cpumask
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (5 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 06/12] ftrace: Do not disable function graph based on "disabled" field Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 08/12] ring-buffer: Add ring_buffer_record_is_on_cpu() Steven Rostedt
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The per CPU "disabled" value was the original way to disable tracing when
the tracing subsystem was first created. Today, the ring buffer
infrastructure has its own way to disable tracing. In fact, things have
changed so much since 2008 that many things ignore the disable flag.

Do not bother setting the per CPU disabled flag of the array_buffer data
to determine which CPUs can write to the buffer; rely only on the ring
buffer code itself to disable it.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index bae32778b292..8cee71683fe3 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5048,7 +5048,6 @@ int tracing_set_cpumask(struct trace_array *tr,
 		 */
 		if (cpumask_test_cpu(cpu, tr->tracing_cpumask) &&
 				!cpumask_test_cpu(cpu, tracing_cpumask_new)) {
-			atomic_inc(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled);
 			ring_buffer_record_disable_cpu(tr->array_buffer.buffer, cpu);
 #ifdef CONFIG_TRACER_MAX_TRACE
 			ring_buffer_record_disable_cpu(tr->max_buffer.buffer, cpu);
@@ -5056,7 +5055,6 @@ int tracing_set_cpumask(struct trace_array *tr,
 		}
 		if (!cpumask_test_cpu(cpu, tr->tracing_cpumask) &&
 				cpumask_test_cpu(cpu, tracing_cpumask_new)) {
-			atomic_dec(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled);
 			ring_buffer_record_enable_cpu(tr->array_buffer.buffer, cpu);
 #ifdef CONFIG_TRACER_MAX_TRACE
 			ring_buffer_record_enable_cpu(tr->max_buffer.buffer, cpu);
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 08/12] ring-buffer: Add ring_buffer_record_is_on_cpu()
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (6 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 07/12] tracing: Do not use per CPU array_buffer.data->disabled for cpumask Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field Steven Rostedt
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

Add the function ring_buffer_record_is_on_cpu() that returns true if the
ring buffer for a given CPU is writable and false otherwise.

Also add tracer_tracing_is_on_cpu() to return whether the ring buffer for a
given CPU is writable for a given trace_array.
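
A caller is expected to look roughly like this (my sketch; patch 9 converts
the branch tracer to this pattern):

    if (!tracer_tracing_is_on_cpu(tr, raw_smp_processor_id()))
            return; /* this CPU's buffer is not accepting writes */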

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/ring_buffer.h |  1 +
 kernel/trace/ring_buffer.c  | 18 ++++++++++++++++++
 kernel/trace/trace.h        | 15 +++++++++++++++
 3 files changed, 34 insertions(+)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 56e27263acf8..cd7f0ae26615 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -192,6 +192,7 @@ void ring_buffer_record_off(struct trace_buffer *buffer);
 void ring_buffer_record_on(struct trace_buffer *buffer);
 bool ring_buffer_record_is_on(struct trace_buffer *buffer);
 bool ring_buffer_record_is_set_on(struct trace_buffer *buffer);
+bool ring_buffer_record_is_on_cpu(struct trace_buffer *buffer, int cpu);
 void ring_buffer_record_disable_cpu(struct trace_buffer *buffer, int cpu);
 void ring_buffer_record_enable_cpu(struct trace_buffer *buffer, int cpu);
 
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index c0f877d39a24..1ca482955dae 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4882,6 +4882,24 @@ bool ring_buffer_record_is_set_on(struct trace_buffer *buffer)
 	return !(atomic_read(&buffer->record_disabled) & RB_BUFFER_OFF);
 }
 
+/**
+ * ring_buffer_record_is_on_cpu - return true if the ring buffer can write
+ * @buffer: The ring buffer to see if write is enabled
+ * @cpu: The CPU to test if the ring buffer can write to
+ *
+ * Returns true if the ring buffer is in a state that it accepts writes
+ *   for a particular CPU.
+ */
+bool ring_buffer_record_is_on_cpu(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	cpu_buffer = buffer->buffers[cpu];
+
+	return ring_buffer_record_is_set_on(buffer) &&
+		!atomic_read(&cpu_buffer->record_disabled);
+}
+
 /**
  * ring_buffer_record_disable_cpu - stop all writes into the cpu_buffer
  * @buffer: The ring buffer to stop writes to.
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 79be1995db44..294d92179a02 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -671,6 +671,21 @@ struct dentry *trace_create_file(const char *name,
 				 void *data,
 				 const struct file_operations *fops);
 
+
+/**
+ * tracer_tracing_is_on_cpu - show real state of the ring buffer for a cpu
+ * @tr:  the trace array to check the ring buffer of
+ * @cpu: the cpu buffer to check if enabled
+ *
+ * Shows the real state of the per CPU buffer, whether it is enabled or not.
+ */
+static inline bool tracer_tracing_is_on_cpu(struct trace_array *tr, int cpu)
+{
+	if (tr->array_buffer.buffer)
+		return ring_buffer_record_is_on_cpu(tr->array_buffer.buffer, cpu);
+	return false;
+}
+
 int tracing_init_dentry(void);
 
 struct ring_buffer_event;
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (7 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 08/12] ring-buffer: Add ring_buffer_record_is_on_cpu() Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-03 10:10   ` kernel test robot
  2025-05-05 12:11   ` kernel test robot
  2025-05-02 20:51 ` [PATCH 10/12] tracing: Convert the per CPU "disabled" counter to local from atomic Steven Rostedt
                   ` (2 subsequent siblings)
  11 siblings, 2 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The branch tracer currently checks the per CPU "disabled" field to know if
tracing is enabled or not for the CPU. As the "disabled" value is not used
anymore to turn off tracing generically, use tracer_tracing_is_on_cpu()
instead.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_branch.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/trace/trace_branch.c b/kernel/trace/trace_branch.c
index 6d08a5523ce0..2a1a06eb7939 100644
--- a/kernel/trace/trace_branch.c
+++ b/kernel/trace/trace_branch.c
@@ -32,7 +32,6 @@ probe_likely_condition(struct ftrace_likely_data *f, int val, int expect)
 {
 	struct trace_array *tr = branch_tracer;
 	struct trace_buffer *buffer;
-	struct trace_array_cpu *data;
 	struct ring_buffer_event *event;
 	struct trace_branch *entry;
 	unsigned long flags;
@@ -54,8 +53,7 @@ probe_likely_condition(struct ftrace_likely_data *f, int val, int expect)
 
 	raw_local_irq_save(flags);
 	current->trace_recursion |= TRACE_BRANCH_BIT;
-	data = this_cpu_ptr(tr->array_buffer.data);
-	if (atomic_read(&data->disabled))
+	if (!tracer_tracing_is_on_cpu(tr, raw_smp_process_id()))
 		goto out;
 
 	trace_ctx = tracing_gen_ctx_flags(flags);
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 10/12] tracing: Convert the per CPU "disabled" counter to local from atomic
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (8 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 11/12] tracing: Use local_inc_return() for updating "disabled" counter in irqsoff tracer Steven Rostedt
  2025-05-02 20:51 ` [PATCH 12/12] tracing: Remove unused buffer_page field from trace_array_cpu structure Steven Rostedt
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The per CPU "disabled" counter is used for the latency tracers and stack
tracers to make sure that their accounting isn't messed up by an NMI or
interrupt coming in and affecting the same CPU data. But the counter is an
atomic_t type. As it only needs to synchronize against the current CPU,
switch it over to local_t type.
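
The guard pattern itself is unchanged by the conversion; only the type of
the counter and the operations on it change. Sketched:

    long disabled;

    disabled = local_inc_return(&data->disabled);
    if (likely(disabled == 1)) {
            /* sole owner of this CPU's accounting: do the work */
    }
    local_dec(&data->disabled);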

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace.h              |  2 +-
 kernel/trace/trace_functions.c    |  8 ++++----
 kernel/trace/trace_irqsoff.c      | 22 +++++++++++-----------
 kernel/trace/trace_sched_wakeup.c | 18 +++++++++---------
 4 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 294d92179a02..646fb5c1066e 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -183,7 +183,7 @@ struct trace_array;
  * the trace, etc.)
  */
 struct trace_array_cpu {
-	atomic_t		disabled;
+	local_t			disabled;
 	void			*buffer_page;	/* ring buffer spare */
 
 	unsigned long		entries;
diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
index bd153219a712..99a90f182485 100644
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -291,7 +291,7 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
 	parent_ip = function_get_true_parent_ip(parent_ip, fregs);
 	cpu = raw_smp_processor_id();
 	data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_inc_return(&data->disabled);
+	disabled = local_inc_return(&data->disabled);
 
 	if (likely(disabled == 1)) {
 		trace_ctx = tracing_gen_ctx_flags(flags);
@@ -303,7 +303,7 @@ function_stack_trace_call(unsigned long ip, unsigned long parent_ip,
 		__trace_stack(tr, trace_ctx, skip);
 	}
 
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 	local_irq_restore(flags);
 }
 
@@ -402,7 +402,7 @@ function_stack_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
 	parent_ip = function_get_true_parent_ip(parent_ip, fregs);
 	cpu = raw_smp_processor_id();
 	data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_inc_return(&data->disabled);
+	disabled = local_inc_return(&data->disabled);
 
 	if (likely(disabled == 1)) {
 		last_info = per_cpu_ptr(tr->last_func_repeats, cpu);
@@ -417,7 +417,7 @@ function_stack_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
 	}
 
  out:
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 	local_irq_restore(flags);
 }
 
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 40c39e946940..0b6d932a931e 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -123,12 +123,12 @@ static int func_prolog_dec(struct trace_array *tr,
 		return 0;
 
 	*data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_inc_return(&(*data)->disabled);
+	disabled = local_inc_return(&(*data)->disabled);
 
 	if (likely(disabled == 1))
 		return 1;
 
-	atomic_dec(&(*data)->disabled);
+	local_dec(&(*data)->disabled);
 
 	return 0;
 }
@@ -152,7 +152,7 @@ irqsoff_tracer_call(unsigned long ip, unsigned long parent_ip,
 
 	trace_function(tr, ip, parent_ip, trace_ctx, fregs);
 
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 }
 #endif /* CONFIG_FUNCTION_TRACER */
 
@@ -209,7 +209,7 @@ static int irqsoff_graph_entry(struct ftrace_graph_ent *trace,
 
 	trace_ctx = tracing_gen_ctx_flags(flags);
 	ret = __trace_graph_entry(tr, trace, trace_ctx);
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 
 	return ret;
 }
@@ -238,7 +238,7 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace,
 
 	trace_ctx = tracing_gen_ctx_flags(flags);
 	__trace_graph_return(tr, trace, trace_ctx, *calltime, rettime);
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 }
 
 static struct fgraph_ops fgraph_ops = {
@@ -408,10 +408,10 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
 
 	data = per_cpu_ptr(tr->array_buffer.data, cpu);
 
-	if (unlikely(!data) || atomic_read(&data->disabled))
+	if (unlikely(!data) || local_read(&data->disabled))
 		return;
 
-	atomic_inc(&data->disabled);
+	local_inc(&data->disabled);
 
 	data->critical_sequence = max_sequence;
 	data->preempt_timestamp = ftrace_now(cpu);
@@ -421,7 +421,7 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
 
 	per_cpu(tracing_cpu, cpu) = 1;
 
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 }
 
 static nokprobe_inline void
@@ -445,16 +445,16 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
 	data = per_cpu_ptr(tr->array_buffer.data, cpu);
 
 	if (unlikely(!data) ||
-	    !data->critical_start || atomic_read(&data->disabled))
+	    !data->critical_start || local_read(&data->disabled))
 		return;
 
-	atomic_inc(&data->disabled);
+	local_inc(&data->disabled);
 
 	trace_ctx = tracing_gen_ctx();
 	__trace_function(tr, ip, parent_ip, trace_ctx);
 	check_critical_timing(tr, data, parent_ip ? : ip, cpu);
 	data->critical_start = 0;
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 }
 
 /* start and stop critical timings used to for stoppage (in idle) */
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index a0db3404f7f7..bf1cb80742ae 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -83,14 +83,14 @@ func_prolog_preempt_disable(struct trace_array *tr,
 		goto out_enable;
 
 	*data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_inc_return(&(*data)->disabled);
+	disabled = local_inc_return(&(*data)->disabled);
 	if (unlikely(disabled != 1))
 		goto out;
 
 	return 1;
 
 out:
-	atomic_dec(&(*data)->disabled);
+	local_dec(&(*data)->disabled);
 
 out_enable:
 	preempt_enable_notrace();
@@ -144,7 +144,7 @@ static int wakeup_graph_entry(struct ftrace_graph_ent *trace,
 	*calltime = trace_clock_local();
 
 	ret = __trace_graph_entry(tr, trace, trace_ctx);
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 	preempt_enable_notrace();
 
 	return ret;
@@ -173,7 +173,7 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace,
 		return;
 
 	__trace_graph_return(tr, trace, trace_ctx, *calltime, rettime);
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 
 	preempt_enable_notrace();
 	return;
@@ -243,7 +243,7 @@ wakeup_tracer_call(unsigned long ip, unsigned long parent_ip,
 	trace_function(tr, ip, parent_ip, trace_ctx, fregs);
 	local_irq_restore(flags);
 
-	atomic_dec(&data->disabled);
+	local_dec(&data->disabled);
 	preempt_enable_notrace();
 }
 
@@ -471,7 +471,7 @@ probe_wakeup_sched_switch(void *ignore, bool preempt,
 
 	/* disable local data, not wakeup_cpu data */
 	cpu = raw_smp_processor_id();
-	disabled = atomic_inc_return(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
+	disabled = local_inc_return(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
 	if (likely(disabled != 1))
 		goto out;
 
@@ -508,7 +508,7 @@ probe_wakeup_sched_switch(void *ignore, bool preempt,
 	arch_spin_unlock(&wakeup_lock);
 	local_irq_restore(flags);
 out:
-	atomic_dec(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
+	local_dec(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
 }
 
 static void __wakeup_reset(struct trace_array *tr)
@@ -563,7 +563,7 @@ probe_wakeup(void *ignore, struct task_struct *p)
 	    (!dl_task(p) && (p->prio >= wakeup_prio || p->prio >= current->prio)))
 		return;
 
-	disabled = atomic_inc_return(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
+	disabled = local_inc_return(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
 	if (unlikely(disabled != 1))
 		goto out;
 
@@ -610,7 +610,7 @@ probe_wakeup(void *ignore, struct task_struct *p)
 out_locked:
 	arch_spin_unlock(&wakeup_lock);
 out:
-	atomic_dec(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
+	local_dec(&per_cpu_ptr(wakeup_trace->array_buffer.data, cpu)->disabled);
 }
 
 static void start_wakeup_tracer(struct trace_array *tr)
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 11/12] tracing: Use local_inc_return() for updating "disabled" counter in irqsoff tracer
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (9 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 10/12] tracing: Convert the per CPU "disabled" counter to local from atomic Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  2025-05-02 20:51 ` [PATCH 12/12] tracing: Remove unused buffer_page field from trace_array_cpu structure Steven Rostedt
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The irqsoff tracer uses the per CPU "disabled" field to prevent corruption
of its accounting when it starts to trace with interrupts disabled, but
there's a slight race if for some reason it is entered twice: an interrupt
or NMI coming in between the local_read() check and the local_inc() would
let both contexts update the accounting. Use local_inc_return() instead and
only do the accounting when the counter transitions to one.
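
To sketch the difference (simplified from the actual hunks below):

    /* Racy: an NMI landing between the read and the inc lets both in */
    if (local_read(&data->disabled))
            return;
    local_inc(&data->disabled);
    /* ... timing accounting ... */
    local_dec(&data->disabled);

    /* Closed: the increment and the test are a single step */
    if (local_inc_return(&data->disabled) == 1) {
            /* ... timing accounting ... */
    }
    local_dec(&data->disabled);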

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_irqsoff.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 0b6d932a931e..5496758b6c76 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -397,6 +397,7 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
 	int cpu;
 	struct trace_array *tr = irqsoff_trace;
 	struct trace_array_cpu *data;
+	long disabled;
 
 	if (!tracer_enabled || !tracing_is_enabled())
 		return;
@@ -411,15 +412,17 @@ start_critical_timing(unsigned long ip, unsigned long parent_ip)
 	if (unlikely(!data) || local_read(&data->disabled))
 		return;
 
-	local_inc(&data->disabled);
+	disabled = local_inc_return(&data->disabled);
 
-	data->critical_sequence = max_sequence;
-	data->preempt_timestamp = ftrace_now(cpu);
-	data->critical_start = parent_ip ? : ip;
+	if (disabled == 1) {
+		data->critical_sequence = max_sequence;
+		data->preempt_timestamp = ftrace_now(cpu);
+		data->critical_start = parent_ip ? : ip;
 
-	__trace_function(tr, ip, parent_ip, tracing_gen_ctx());
+		__trace_function(tr, ip, parent_ip, tracing_gen_ctx());
 
-	per_cpu(tracing_cpu, cpu) = 1;
+		per_cpu(tracing_cpu, cpu) = 1;
+	}
 
 	local_dec(&data->disabled);
 }
@@ -431,6 +434,7 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
 	struct trace_array *tr = irqsoff_trace;
 	struct trace_array_cpu *data;
 	unsigned int trace_ctx;
+	long disabled;
 
 	cpu = raw_smp_processor_id();
 	/* Always clear the tracing cpu on stopping the trace */
@@ -448,12 +452,15 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
 	    !data->critical_start || local_read(&data->disabled))
 		return;
 
-	local_inc(&data->disabled);
+	disabled = local_inc_return(&data->disabled);
+
+	if (disabled == 1) {
+		trace_ctx = tracing_gen_ctx();
+		__trace_function(tr, ip, parent_ip, trace_ctx);
+		check_critical_timing(tr, data, parent_ip ? : ip, cpu);
+		data->critical_start = 0;
+	}
 
-	trace_ctx = tracing_gen_ctx();
-	__trace_function(tr, ip, parent_ip, trace_ctx);
-	check_critical_timing(tr, data, parent_ip ? : ip, cpu);
-	data->critical_start = 0;
 	local_dec(&data->disabled);
 }
 
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 12/12] tracing: Remove unused buffer_page field from trace_array_cpu structure
  2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
                   ` (10 preceding siblings ...)
  2025-05-02 20:51 ` [PATCH 11/12] tracing: Use local_inc_return() for updating "disabled" counter in irqsoff tracer Steven Rostedt
@ 2025-05-02 20:51 ` Steven Rostedt
  11 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-02 20:51 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The trace_array_cpu had a "buffer_page" field that was originally going to
be used as a backup page for the ring buffer. But the ring buffer has its
own way of reusing pages and this field was never used.

Remove it.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 646fb5c1066e..499f8d294861 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -184,7 +184,6 @@ struct trace_array;
  */
 struct trace_array_cpu {
 	local_t			disabled;
-	void			*buffer_page;	/* ring buffer spare */
 
 	unsigned long		entries;
 	unsigned long		saved_latency;
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field
  2025-05-02 20:51 ` [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field Steven Rostedt
@ 2025-05-03 10:10   ` kernel test robot
  2025-05-05 12:11   ` kernel test robot
  1 sibling, 0 replies; 18+ messages in thread
From: kernel test robot @ 2025-05-03 10:10 UTC (permalink / raw)
  To: Steven Rostedt, linux-kernel, linux-trace-kernel
  Cc: oe-kbuild-all, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
	Andrew Morton, Linux Memory Management List

Hi Steven,

kernel test robot noticed the following build errors:

[auto build test ERROR on trace/for-next]
[also build test ERROR on linus/master v6.15-rc4 next-20250502]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Steven-Rostedt/tracing-mmiotrace-Remove-reference-to-unused-per-CPU-data-pointer/20250503-050317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace for-next
patch link:    https://lore.kernel.org/r/20250502205349.299144667%40goodmis.org
patch subject: [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field
config: arc-randconfig-001-20250503 (https://download.01.org/0day-ci/archive/20250503/202505031738.buFg2SBt-lkp@intel.com/config)
compiler: arc-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250503/202505031738.buFg2SBt-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505031738.buFg2SBt-lkp@intel.com/

All errors (new ones prefixed by >>):

   kernel/trace/trace_branch.c: In function 'probe_likely_condition':
>> kernel/trace/trace_branch.c:56:43: error: implicit declaration of function 'raw_smp_process_id'; did you mean 'raw_smp_processor_id'? [-Wimplicit-function-declaration]
      56 |         if (!tracer_tracing_is_on_cpu(tr, raw_smp_process_id()))
         |                                           ^~~~~~~~~~~~~~~~~~
         |                                           raw_smp_processor_id


vim +56 kernel/trace/trace_branch.c

    29	
    30	static void
    31	probe_likely_condition(struct ftrace_likely_data *f, int val, int expect)
    32	{
    33		struct trace_array *tr = branch_tracer;
    34		struct trace_buffer *buffer;
    35		struct ring_buffer_event *event;
    36		struct trace_branch *entry;
    37		unsigned long flags;
    38		unsigned int trace_ctx;
    39		const char *p;
    40	
    41		if (current->trace_recursion & TRACE_BRANCH_BIT)
    42			return;
    43	
    44		/*
    45		 * I would love to save just the ftrace_likely_data pointer, but
    46		 * this code can also be used by modules. Ugly things can happen
    47		 * if the module is unloaded, and then we go and read the
    48		 * pointer.  This is slower, but much safer.
    49		 */
    50	
    51		if (unlikely(!tr))
    52			return;
    53	
    54		raw_local_irq_save(flags);
    55		current->trace_recursion |= TRACE_BRANCH_BIT;
  > 56		if (!tracer_tracing_is_on_cpu(tr, raw_smp_process_id()))
    57			goto out;
    58	
    59		trace_ctx = tracing_gen_ctx_flags(flags);
    60		buffer = tr->array_buffer.buffer;
    61		event = trace_buffer_lock_reserve(buffer, TRACE_BRANCH,
    62						  sizeof(*entry), trace_ctx);
    63		if (!event)
    64			goto out;
    65	
    66		entry	= ring_buffer_event_data(event);
    67	
    68		/* Strip off the path, only save the file */
    69		p = f->data.file + strlen(f->data.file);
    70		while (p >= f->data.file && *p != '/')
    71			p--;
    72		p++;
    73	
    74		strscpy(entry->func, f->data.func);
    75		strscpy(entry->file, p);
    76		entry->constant = f->constant;
    77		entry->line = f->data.line;
    78		entry->correct = val == expect;
    79	
    80		trace_buffer_unlock_commit_nostack(buffer, event);
    81	
    82	 out:
    83		current->trace_recursion &= ~TRACE_BRANCH_BIT;
    84		raw_local_irq_restore(flags);
    85	}
    86	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled
  2025-05-02 20:51 ` [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled Steven Rostedt
@ 2025-05-05  5:05   ` kernel test robot
  2025-05-05 15:42   ` Doug Anderson
  1 sibling, 0 replies; 18+ messages in thread
From: kernel test robot @ 2025-05-05  5:05 UTC (permalink / raw)
  To: Steven Rostedt, linux-kernel, linux-trace-kernel
  Cc: oe-kbuild-all, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
	Andrew Morton, Linux Memory Management List, Jason Wessel,
	Daniel Thompson, Douglas Anderson

Hi Steven,

kernel test robot noticed the following build warnings:

[auto build test WARNING on trace/for-next]
[also build test WARNING on linus/master v6.15-rc4 next-20250502]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Steven-Rostedt/tracing-mmiotrace-Remove-reference-to-unused-per-CPU-data-pointer/20250503-050317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace for-next
patch link:    https://lore.kernel.org/r/20250502205348.643055437%40goodmis.org
patch subject: [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled
config: arc-randconfig-002-20250503 (https://download.01.org/0day-ci/archive/20250505/202505051213.exaXF8qp-lkp@intel.com/config)
compiler: arc-linux-gcc (GCC) 11.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250505/202505051213.exaXF8qp-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505051213.exaXF8qp-lkp@intel.com/

All warnings (new ones prefixed by >>):

   kernel/trace/trace_kdb.c: In function 'kdb_ftdump':
>> kernel/trace/trace_kdb.c:101:13: warning: unused variable 'cpu' [-Wunused-variable]
     101 |         int cpu;
         |             ^~~


vim +/cpu +101 kernel/trace/trace_kdb.c

955b61e5979847 Jason Wessel     2010-08-05   91  
955b61e5979847 Jason Wessel     2010-08-05   92  /*
955b61e5979847 Jason Wessel     2010-08-05   93   * kdb_ftdump - Dump the ftrace log buffer
955b61e5979847 Jason Wessel     2010-08-05   94   */
955b61e5979847 Jason Wessel     2010-08-05   95  static int kdb_ftdump(int argc, const char **argv)
955b61e5979847 Jason Wessel     2010-08-05   96  {
dbfe67334a1767 Douglas Anderson 2019-03-19   97  	int skip_entries = 0;
19063c776fe745 Jason Wessel     2010-08-05   98  	long cpu_file;
0c10cc2435115c Yuran Pereira    2024-10-28   99  	int err;
03197fc02b3566 Douglas Anderson 2019-03-19  100  	int cnt;
03197fc02b3566 Douglas Anderson 2019-03-19 @101  	int cpu;
955b61e5979847 Jason Wessel     2010-08-05  102  
19063c776fe745 Jason Wessel     2010-08-05  103  	if (argc > 2)
955b61e5979847 Jason Wessel     2010-08-05  104  		return KDB_ARGCOUNT;
955b61e5979847 Jason Wessel     2010-08-05  105  
0c10cc2435115c Yuran Pereira    2024-10-28  106  	if (argc && kstrtoint(argv[1], 0, &skip_entries))
0c10cc2435115c Yuran Pereira    2024-10-28  107  		return KDB_BADINT;
955b61e5979847 Jason Wessel     2010-08-05  108  
19063c776fe745 Jason Wessel     2010-08-05  109  	if (argc == 2) {
0c10cc2435115c Yuran Pereira    2024-10-28  110  		err = kstrtol(argv[2], 0, &cpu_file);
0c10cc2435115c Yuran Pereira    2024-10-28  111  		if (err || cpu_file >= NR_CPUS || cpu_file < 0 ||
19063c776fe745 Jason Wessel     2010-08-05  112  		    !cpu_online(cpu_file))
19063c776fe745 Jason Wessel     2010-08-05  113  			return KDB_BADINT;
19063c776fe745 Jason Wessel     2010-08-05  114  	} else {
ae3b5093ad6004 Steven Rostedt   2013-01-23  115  		cpu_file = RING_BUFFER_ALL_CPUS;
19063c776fe745 Jason Wessel     2010-08-05  116  	}
19063c776fe745 Jason Wessel     2010-08-05  117  
955b61e5979847 Jason Wessel     2010-08-05  118  	kdb_trap_printk++;
03197fc02b3566 Douglas Anderson 2019-03-19  119  
03197fc02b3566 Douglas Anderson 2019-03-19  120  	trace_init_global_iter(&iter);
03197fc02b3566 Douglas Anderson 2019-03-19  121  	iter.buffer_iter = buffer_iter;
03197fc02b3566 Douglas Anderson 2019-03-19  122  
6a4611f6bb763d Steven Rostedt   2025-05-02  123  	tracer_tracing_off(iter.tr);
03197fc02b3566 Douglas Anderson 2019-03-19  124  
03197fc02b3566 Douglas Anderson 2019-03-19  125  	/* A negative skip_entries means skip all but the last entries */
03197fc02b3566 Douglas Anderson 2019-03-19  126  	if (skip_entries < 0) {
03197fc02b3566 Douglas Anderson 2019-03-19  127  		if (cpu_file == RING_BUFFER_ALL_CPUS)
03197fc02b3566 Douglas Anderson 2019-03-19  128  			cnt = trace_total_entries(NULL);
03197fc02b3566 Douglas Anderson 2019-03-19  129  		else
03197fc02b3566 Douglas Anderson 2019-03-19  130  			cnt = trace_total_entries_cpu(NULL, cpu_file);
03197fc02b3566 Douglas Anderson 2019-03-19  131  		skip_entries = max(cnt + skip_entries, 0);
03197fc02b3566 Douglas Anderson 2019-03-19  132  	}
03197fc02b3566 Douglas Anderson 2019-03-19  133  
dbfe67334a1767 Douglas Anderson 2019-03-19  134  	ftrace_dump_buf(skip_entries, cpu_file);
03197fc02b3566 Douglas Anderson 2019-03-19  135  
6a4611f6bb763d Steven Rostedt   2025-05-02  136  	tracer_tracing_on(iter.tr);
03197fc02b3566 Douglas Anderson 2019-03-19  137  
955b61e5979847 Jason Wessel     2010-08-05  138  	kdb_trap_printk--;
955b61e5979847 Jason Wessel     2010-08-05  139  
955b61e5979847 Jason Wessel     2010-08-05  140  	return 0;
955b61e5979847 Jason Wessel     2010-08-05  141  }
955b61e5979847 Jason Wessel     2010-08-05  142  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field
  2025-05-02 20:51 ` [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field Steven Rostedt
  2025-05-03 10:10   ` kernel test robot
@ 2025-05-05 12:11   ` kernel test robot
  1 sibling, 0 replies; 18+ messages in thread
From: kernel test robot @ 2025-05-05 12:11 UTC (permalink / raw)
  To: Steven Rostedt, linux-kernel, linux-trace-kernel
  Cc: llvm, oe-kbuild-all, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Linux Memory Management List

Hi Steven,

kernel test robot noticed the following build errors:

[auto build test ERROR on trace/for-next]
[also build test ERROR on linus/master v6.15-rc5 next-20250502]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Steven-Rostedt/tracing-mmiotrace-Remove-reference-to-unused-per-CPU-data-pointer/20250503-050317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace for-next
patch link:    https://lore.kernel.org/r/20250502205349.299144667%40goodmis.org
patch subject: [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field
config: i386-buildonly-randconfig-002-20250505 (https://download.01.org/0day-ci/archive/20250505/202505051827.xKU53TzL-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250505/202505051827.xKU53TzL-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505051827.xKU53TzL-lkp@intel.com/

All errors (new ones prefixed by >>):

>> kernel/trace/trace_branch.c:56:36: error: call to undeclared function 'raw_smp_process_id'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
      56 |         if (!tracer_tracing_is_on_cpu(tr, raw_smp_process_id()))
         |                                           ^
   kernel/trace/trace_branch.c:56:36: note: did you mean 'safe_smp_processor_id'?
   arch/x86/include/asm/smp.h:140:12: note: 'safe_smp_processor_id' declared here
     140 | extern int safe_smp_processor_id(void);
         |            ^
   1 error generated.


vim +/raw_smp_process_id +56 kernel/trace/trace_branch.c

    29	
    30	static void
    31	probe_likely_condition(struct ftrace_likely_data *f, int val, int expect)
    32	{
    33		struct trace_array *tr = branch_tracer;
    34		struct trace_buffer *buffer;
    35		struct ring_buffer_event *event;
    36		struct trace_branch *entry;
    37		unsigned long flags;
    38		unsigned int trace_ctx;
    39		const char *p;
    40	
    41		if (current->trace_recursion & TRACE_BRANCH_BIT)
    42			return;
    43	
    44		/*
    45		 * I would love to save just the ftrace_likely_data pointer, but
    46		 * this code can also be used by modules. Ugly things can happen
    47		 * if the module is unloaded, and then we go and read the
    48		 * pointer.  This is slower, but much safer.
    49		 */
    50	
    51		if (unlikely(!tr))
    52			return;
    53	
    54		raw_local_irq_save(flags);
    55		current->trace_recursion |= TRACE_BRANCH_BIT;
  > 56		if (!tracer_tracing_is_on_cpu(tr, raw_smp_process_id()))
    57			goto out;
    58	
    59		trace_ctx = tracing_gen_ctx_flags(flags);
    60		buffer = tr->array_buffer.buffer;
    61		event = trace_buffer_lock_reserve(buffer, TRACE_BRANCH,
    62						  sizeof(*entry), trace_ctx);
    63		if (!event)
    64			goto out;
    65	
    66		entry	= ring_buffer_event_data(event);
    67	
    68		/* Strip off the path, only save the file */
    69		p = f->data.file + strlen(f->data.file);
    70		while (p >= f->data.file && *p != '/')
    71			p--;
    72		p++;
    73	
    74		strscpy(entry->func, f->data.func);
    75		strscpy(entry->file, p);
    76		entry->constant = f->constant;
    77		entry->line = f->data.line;
    78		entry->correct = val == expect;
    79	
    80		trace_buffer_unlock_commit_nostack(buffer, event);
    81	
    82	 out:
    83		current->trace_recursion &= ~TRACE_BRANCH_BIT;
    84		raw_local_irq_restore(flags);
    85	}
    86	
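The likely culprit is a typo in the patch: raw_smp_process_id() does not
exist; the standard helper is raw_smp_processor_id(). A one-line fix would
presumably look like this (an untested sketch, not a posted patch):

-	if (!tracer_tracing_is_on_cpu(tr, raw_smp_process_id()))
+	if (!tracer_tracing_is_on_cpu(tr, raw_smp_processor_id()))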

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled
  2025-05-02 20:51 ` [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled Steven Rostedt
  2025-05-05  5:05   ` kernel test robot
@ 2025-05-05 15:42   ` Doug Anderson
  2025-05-05 15:59     ` Steven Rostedt
  1 sibling, 1 reply; 18+ messages in thread
From: Doug Anderson @ 2025-05-05 15:42 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Jason Wessel, Daniel Thompson

Hi,

On Fri, May 2, 2025 at 1:53 PM Steven Rostedt <rostedt@goodmis.org> wrote:
>
> From: Steven Rostedt <rostedt@goodmis.org>
>
> The per CPU "disabled" value was the original way to disable tracing when
> the tracing subsystem was first created. Today, the ring buffer
> infrastructure has its own way to disable tracing. In fact, things have
> changed so much since 2008 that many things ignore the disable flag.
>
> The kdb_ftdump() function iterates over all the current tracing CPUs and
> increments the "disabled" counter before doing the dump, and decrements it
> afterward.
>
> As the disabled flag can be ignored, doing this today is not reliable.
> Instead, simply call tracer_tracing_off() and then tracer_tracing_on() to
> disable and then enable the entire ring buffer in one go!
>
> Cc: Jason Wessel <jason.wessel@windriver.com>
> Cc: Daniel Thompson <danielt@kernel.org>
> Cc: Douglas Anderson <dianders@chromium.org>
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
>  kernel/trace/trace_kdb.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
> index 1e72d20b3c2f..b5cf3fdde8cb 100644
> --- a/kernel/trace/trace_kdb.c
> +++ b/kernel/trace/trace_kdb.c
> @@ -120,9 +120,7 @@ static int kdb_ftdump(int argc, const char **argv)
>         trace_init_global_iter(&iter);
>         iter.buffer_iter = buffer_iter;
>
> -       for_each_tracing_cpu(cpu) {
> -               atomic_inc(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
> -       }
> +       tracer_tracing_off(iter.tr);
>
>         /* A negative skip_entries means skip all but the last entries */
>         if (skip_entries < 0) {
> @@ -135,9 +133,7 @@ static int kdb_ftdump(int argc, const char **argv)
>
>         ftrace_dump_buf(skip_entries, cpu_file);
>
> -       for_each_tracing_cpu(cpu) {
> -               atomic_dec(&per_cpu_ptr(iter.array_buffer->data, cpu)->disabled);
> -       }
> +       tracer_tracing_on(iter.tr);

This new change seems less safe than the old one. Previously you'd
always increment by one at the start of the function and decrement by
one at the end. Now at the start of the function you'll set
"buffer_disabled" to 1, and at the end you'll set it to 0. If
"buffer_disabled" was already 1 at the start of the function, your new
sequence will have the side effect of changing it to 0.
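To make the difference concrete, here is a minimal self-contained
userspace C sketch (illustrative names, not the kernel code):

#include <stdio.h>

static int buffer_disabled;	/* on/off switch, like tracer_tracing_on/off() */
static int disabled_count;	/* nesting counter, like the per CPU "disabled" field */

int main(void)
{
	/* Suppose something else had already disabled tracing: */
	buffer_disabled = 1;
	disabled_count = 1;

	/* kdb_ftdump() disables around the dump ... */
	buffer_disabled = 1;	/* switch: off */
	disabled_count++;	/* counter: 2 */

	/* ... and re-enables afterward: */
	buffer_disabled = 0;	/* switch: clobbers the outer disable! */
	disabled_count--;	/* counter: back to 1, outer disable intact */

	printf("switch=%d counter=%d\n", buffer_disabled, disabled_count);
	return 0;
}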

-Doug

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled
  2025-05-05 15:42   ` Doug Anderson
@ 2025-05-05 15:59     ` Steven Rostedt
  0 siblings, 0 replies; 18+ messages in thread
From: Steven Rostedt @ 2025-05-05 15:59 UTC (permalink / raw)
  To: Doug Anderson
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Jason Wessel, Daniel Thompson

On Mon, 5 May 2025 08:42:52 -0700
Doug Anderson <dianders@chromium.org> wrote:

> This new change seems less safe than the old one. Previously you'd

Well, it matters what your definition of "safe" is ;-)

The new change prevents the ring buffer from having anything written to it,
whereas the old change didn't disable everything.

> always increment by one at the start of the function and decrement by
> one at the end. Now at the start of the function you'll set
> "buffer_disabled" to 1 and at the end you'll set it to 0. If
> "buffer_disabled" was already 1 at the start of the function your new
> sequence will end up having the side effect of changing it to 0.

Good point. How about I add a tracer_tracing_disable() and
tracer_tracing_enable() that are not an on/off switch, but instead use
ring_buffer_record_disable()/enable(), which increment/decrement a
disable counter on the ring buffer?

That way it keeps the same semantics.
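
Roughly, such wrappers might look like the following sketch (not the
actual patch; it assumes the existing ring_buffer_record_disable() and
ring_buffer_record_enable() counting API):

void tracer_tracing_disable(struct trace_array *tr)
{
	if (WARN_ON_ONCE(!tr->array_buffer.buffer))
		return;
	ring_buffer_record_disable(tr->array_buffer.buffer);
}

void tracer_tracing_enable(struct trace_array *tr)
{
	if (WARN_ON_ONCE(!tr->array_buffer.buffer))
		return;
	ring_buffer_record_enable(tr->array_buffer.buffer);
}

Since ring_buffer_record_disable() increments a counter and
ring_buffer_record_enable() decrements it, nested users no longer clobber
each other the way the on/off switch does.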

-- Steve

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2025-05-05 15:59 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-05-02 20:51 [PATCH 00/12] tracing: Remove most uses of "disabled" field Steven Rostedt
2025-05-02 20:51 ` [PATCH 01/12] tracing/mmiotrace: Remove reference to unused per CPU data pointer Steven Rostedt
2025-05-02 20:51 ` [PATCH 02/12] tracing: Do not bother setting "disabled" field for ftrace_dump_one() Steven Rostedt
2025-05-02 20:51 ` [PATCH 03/12] ftrace: Do not bother checking per CPU "disabled" flag Steven Rostedt
2025-05-02 20:51 ` [PATCH 04/12] tracing: Just use this_cpu_read() to access ignore_pid Steven Rostedt
2025-05-02 20:51 ` [PATCH 05/12] tracing: kdb: Use tracer_tracing_on/off() instead of setting per CPU disabled Steven Rostedt
2025-05-05  5:05   ` kernel test robot
2025-05-05 15:42   ` Doug Anderson
2025-05-05 15:59     ` Steven Rostedt
2025-05-02 20:51 ` [PATCH 06/12] ftrace: Do not disabled function graph based on "disabled" field Steven Rostedt
2025-05-02 20:51 ` [PATCH 07/12] tracing: Do not use per CPU array_buffer.data->disabled for cpumask Steven Rostedt
2025-05-02 20:51 ` [PATCH 08/12] ring-buffer: Add ring_buffer_record_is_on_cpu() Steven Rostedt
2025-05-02 20:51 ` [PATCH 09/12] tracing: branch: Use trace_tracing_is_on_cpu() instead of "disabled" field Steven Rostedt
2025-05-03 10:10   ` kernel test robot
2025-05-05 12:11   ` kernel test robot
2025-05-02 20:51 ` [PATCH 10/12] tracing: Convert the per CPU "disabled" counter to local from atomic Steven Rostedt
2025-05-02 20:51 ` [PATCH 11/12] tracing: Use atomic_inc_return() for updating "disabled" counter in irqsoff tracer Steven Rostedt
2025-05-02 20:51 ` [PATCH 12/12] tracing: Remove unused buffer_page field from trace_array_cpu structure Steven Rostedt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).