* [PATCH 0/4] ftrace: Graph tracing performance enhancements and clean ups.
From: Steven Rostedt @ 2024-12-23 18:46 UTC
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton


ftrace update for 6.14:

I've always known that function graph tracing has a significant overhead
compared to function tracing. More than just double (which is what would
be expected, since it also traces the return of functions). I never
looked too deeply into the cause, but I just noticed one of the reasons.
As opposed to the function tracer, the function graph tracer disables
interrupts for every function entry and exit it traces. It has been doing
this since it was created back in 2008! A lot has changed since then, and
there's no reason to disable interrupts anymore, as the code can handle
recursion. It also bumps a per-CPU "disabled" counter that even prevents
NMIs from being traced (so the function graph tracer will drop NMI
functions if an NMI preempts a trace in progress).

This is totally unneeded, especially since most of the complex shadow
stack code has been removed from the function graph tracer and moved into
the fgraph.c file. The function graph tracer is now just one consumer of
that code, and the disabling was there to protect code that it no longer
calls.

Remove the interrupt disabling as well as the forced disabling of the
function graph tracer while a trace event is being recorded. This gives a
significant improvement to function graph tracing.
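
In essence, the entry and exit callbacks go from an irq-off critical
section to a preempt-off one. A simplified sketch of the pattern change
(see the actual diff in patch 1):

        /* Before: interrupts off and a per-CPU counter bumped per event */
        local_irq_save(flags);
        disabled = atomic_inc_return(&data->disabled);
        if (likely(disabled == 1))
                ret = __trace_graph_entry(tr, trace, tracing_gen_ctx_flags(flags));
        atomic_dec(&data->disabled);
        local_irq_restore(flags);

        /* After: only preemption is disabled; recursion is handled elsewhere */
        preempt_disable_notrace();
        if (likely(!atomic_read(&data->disabled)))
                ret = __trace_graph_entry(tr, trace, tracing_gen_ctx());
        preempt_enable_notrace();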

Running perf stat on "hackbench 10".

  Before:  4.8423 +- 0.0892 seconds time elapsed  ( +-  1.84% )
  After:   3.490 +- 0.112 seconds time elapsed  ( +-  3.22% )

That's a ~28% speedup!

Do the same for the function profiler:

  Before:  3.332 +- 0.101 seconds time elapsed  ( +-  3.04% )
  After:   1.9590 +- 0.0484 seconds time elapsed  ( +-  2.47% )

That gives a ~41% speedup!!!

I may mark those commits with Fixes tags for the performance issue only.
I would not mark them for stable, as I would like to verify a bit more
that they do not cause any regressions. But for those interested in
better performance from the function graph tracer, they may be worth
backporting.

The other two changes clean up ftrace's goto code. There were three
places that simply did a goto to the end of the function just to return a
value. No unlocking, no freeing, just a simple return. Those were changed
to just return in place.
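
For example, one of the spots in ftrace_location() goes from (simplified
from the actual diff in patch 3):

        if (!kallsyms_lookup_size_offset(ip, &size, &offset))
                goto out;
        ...
 out:
        return loc;

to simply returning in place:

        if (!kallsyms_lookup_size_offset(ip, &size, &offset))
                return 0;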

The second of those changes converts the mutex locking over to using the
guard() infrastructure.
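
That conversion follows the usual cleanup.h guard pattern, roughly
(simplified from the actual diff in patch 4):

        /* Before: every error path has to jump to the unlock */
        mutex_lock(&ftrace_profile_lock);
        if (unlikely(rec->counter == 0)) {
                ret = -EBUSY;
                goto out;
        }
        ...
 out:
        mutex_unlock(&ftrace_profile_lock);
        return ret;

        /* After: the scope-based guard drops the mutex automatically on return */
        guard(mutex)(&ftrace_profile_lock);
        if (unlikely(rec->counter == 0))
                return -EBUSY;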

All of these commits are independent of each other and may be applied
separately.

Steven Rostedt (4):
      fgraph: Remove unnecessary disabling of interrupts and recursion
      ftrace: Do not disable interrupts in profiler
      ftrace: Remove unneeded goto jumps
      ftrace: Switch ftrace.c code over to use guard()

----
 kernel/trace/ftrace.c                | 130 ++++++++++++-----------------------
 kernel/trace/trace_functions_graph.c |  37 ++++------
 2 files changed, 60 insertions(+), 107 deletions(-)


* [PATCH 1/4] fgraph: Remove unnecessary disabling of interrupts and recursion
From: Steven Rostedt @ 2024-12-23 18:46 UTC
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The function graph tracer disables interrupts and also prevents
recursion (even from NMIs) when recording its trace events. There's no
reason to do this today. That disabling goes back to 2008, when the
function graph tracer was first introduced and recursion protection
wasn't part of the code.

Today, there's no reason to disable interrupts or prevent the code from
recursing, as the infrastructure can easily handle it.

Before this change:

~# echo function_graph > /sys/kernel/tracing/current_tracer
~# perf stat -r 10 ./hackbench 10
Time: 4.240
Time: 4.236
Time: 4.106
Time: 4.014
Time: 4.314
Time: 3.830
Time: 4.063
Time: 4.323
Time: 3.763
Time: 3.727

 Performance counter stats for '/work/c/hackbench 10' (10 runs):

         33,937.20 msec task-clock                       #    7.008 CPUs utilized               ( +-  1.85% )
            18,220      context-switches                 #  536.874 /sec                        ( +-  6.41% )
               624      cpu-migrations                   #   18.387 /sec                        ( +-  9.07% )
            11,319      page-faults                      #  333.528 /sec                        ( +-  1.97% )
    76,657,643,617      cycles                           #    2.259 GHz                         ( +-  0.40% )
   141,403,302,768      instructions                     #    1.84  insn per cycle              ( +-  0.37% )
    25,518,463,888      branches                         #  751.932 M/sec                       ( +-  0.35% )
       156,151,050      branch-misses                    #    0.61% of all branches             ( +-  0.63% )

            4.8423 +- 0.0892 seconds time elapsed  ( +-  1.84% )

After this change:

~# echo function_graph > /sys/kernel/tracing/current_tracer
~# perf stat -r 10 ./hackbench 10
Time: 3.340
Time: 3.192
Time: 3.129
Time: 2.579
Time: 2.589
Time: 2.798
Time: 2.791
Time: 2.955
Time: 3.044
Time: 3.065

 Performance counter stats for './hackbench 10' (10 runs):

         24,416.30 msec task-clock                       #    6.996 CPUs utilized               ( +-  2.74% )
            16,764      context-switches                 #  686.590 /sec                        ( +-  5.85% )
               469      cpu-migrations                   #   19.208 /sec                        ( +-  6.14% )
            11,519      page-faults                      #  471.775 /sec                        ( +-  1.92% )
    53,895,628,450      cycles                           #    2.207 GHz                         ( +-  0.52% )
   105,552,664,638      instructions                     #    1.96  insn per cycle              ( +-  0.47% )
    17,808,672,667      branches                         #  729.376 M/sec                       ( +-  0.48% )
       133,075,435      branch-misses                    #    0.75% of all branches             ( +-  0.59% )

             3.490 +- 0.112 seconds time elapsed  ( +-  3.22% )

Also remove the unneeded "unlikely()" around the retaddr code.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_functions_graph.c | 37 +++++++++++-----------------
 1 file changed, 15 insertions(+), 22 deletions(-)

diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 5504b5e4e7b4..f513603d7df9 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -181,10 +181,9 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
 	struct trace_array *tr = gops->private;
 	struct trace_array_cpu *data;
 	struct fgraph_times *ftimes;
-	unsigned long flags;
 	unsigned int trace_ctx;
 	long disabled;
-	int ret;
+	int ret = 0;
 	int cpu;
 
 	if (*task_var & TRACE_GRAPH_NOTRACE)
@@ -235,25 +234,21 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
 	if (tracing_thresh)
 		return 1;
 
-	local_irq_save(flags);
+	preempt_disable_notrace();
 	cpu = raw_smp_processor_id();
 	data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_inc_return(&data->disabled);
-	if (likely(disabled == 1)) {
-		trace_ctx = tracing_gen_ctx_flags(flags);
-		if (unlikely(IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
-			tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR))) {
+	disabled = atomic_read(&data->disabled);
+	if (likely(!disabled)) {
+		trace_ctx = tracing_gen_ctx();
+		if (IS_ENABLED(CONFIG_FUNCTION_GRAPH_RETADDR) &&
+		    tracer_flags_is_set(TRACE_GRAPH_PRINT_RETADDR)) {
 			unsigned long retaddr = ftrace_graph_top_ret_addr(current);
-
 			ret = __trace_graph_retaddr_entry(tr, trace, trace_ctx, retaddr);
-		} else
+		} else {
 			ret = __trace_graph_entry(tr, trace, trace_ctx);
-	} else {
-		ret = 0;
+		}
 	}
-
-	atomic_dec(&data->disabled);
-	local_irq_restore(flags);
+	preempt_enable_notrace();
 
 	return ret;
 }
@@ -320,7 +315,6 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 	struct trace_array *tr = gops->private;
 	struct trace_array_cpu *data;
 	struct fgraph_times *ftimes;
-	unsigned long flags;
 	unsigned int trace_ctx;
 	long disabled;
 	int size;
@@ -341,16 +335,15 @@ void trace_graph_return(struct ftrace_graph_ret *trace,
 
 	trace->calltime = ftimes->calltime;
 
-	local_irq_save(flags);
+	preempt_disable_notrace();
 	cpu = raw_smp_processor_id();
 	data = per_cpu_ptr(tr->array_buffer.data, cpu);
-	disabled = atomic_inc_return(&data->disabled);
-	if (likely(disabled == 1)) {
-		trace_ctx = tracing_gen_ctx_flags(flags);
+	disabled = atomic_read(&data->disabled);
+	if (likely(!disabled)) {
+		trace_ctx = tracing_gen_ctx();
 		__trace_graph_return(tr, trace, trace_ctx);
 	}
-	atomic_dec(&data->disabled);
-	local_irq_restore(flags);
+	preempt_enable_notrace();
 }
 
 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
-- 
2.45.2




* [PATCH 2/4] ftrace: Do not disable interrupts in profiler
From: Steven Rostedt @ 2024-12-23 18:46 UTC
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The function profiler disables interrupts before processing. This has
been there since the profiler was introduced back in 2009, when there
were recursion issues to deal with. The function tracer is much more
robust today and has its own internal recursion protection. There's no
reason to disable interrupts in the function profiler.

Instead, just disable preemption and use the guard() infrastructure while
at it.

Before this change:

~# echo 1 > /sys/kernel/tracing/function_profile_enabled
~# perf stat -r 10 ./hackbench 10
Time: 3.099
Time: 2.556
Time: 2.500
Time: 2.705
Time: 2.985
Time: 2.959
Time: 2.859
Time: 2.621
Time: 2.742
Time: 2.631

 Performance counter stats for '/work/c/hackbench 10' (10 runs):

         23,156.77 msec task-clock                       #    6.951 CPUs utilized               ( +-  2.36% )
            18,306      context-switches                 #  790.525 /sec                        ( +-  5.95% )
               495      cpu-migrations                   #   21.376 /sec                        ( +-  8.61% )
            11,522      page-faults                      #  497.565 /sec                        ( +-  1.80% )
    47,967,124,606      cycles                           #    2.071 GHz                         ( +-  0.41% )
    80,009,078,371      instructions                     #    1.67  insn per cycle              ( +-  0.34% )
    16,389,249,798      branches                         #  707.752 M/sec                       ( +-  0.36% )
       139,943,109      branch-misses                    #    0.85% of all branches             ( +-  0.61% )

             3.332 +- 0.101 seconds time elapsed  ( +-  3.04% )

After this change:

~# echo 1 > /sys/kernel/tracing/function_profile_enabled
~# perf stat -r 10 ./hackbench 10
Time: 1.869
Time: 1.428
Time: 1.575
Time: 1.569
Time: 1.685
Time: 1.511
Time: 1.611
Time: 1.672
Time: 1.724
Time: 1.715

 Performance counter stats for '/work/c/hackbench 10' (10 runs):

         13,578.21 msec task-clock                       #    6.931 CPUs utilized               ( +-  2.23% )
            12,736      context-switches                 #  937.973 /sec                        ( +-  3.86% )
               341      cpu-migrations                   #   25.114 /sec                        ( +-  5.27% )
            11,378      page-faults                      #  837.960 /sec                        ( +-  1.74% )
    27,638,039,036      cycles                           #    2.035 GHz                         ( +-  0.27% )
    45,107,762,498      instructions                     #    1.63  insn per cycle              ( +-  0.23% )
     8,623,868,018      branches                         #  635.125 M/sec                       ( +-  0.27% )
       125,738,443      branch-misses                    #    1.46% of all branches             ( +-  0.32% )

            1.9590 +- 0.0484 seconds time elapsed  ( +-  2.47% )

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 9b17efb1a87d..63a9ffa65e17 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -789,27 +789,24 @@ function_profile_call(unsigned long ip, unsigned long parent_ip,
 {
 	struct ftrace_profile_stat *stat;
 	struct ftrace_profile *rec;
-	unsigned long flags;
 
 	if (!ftrace_profile_enabled)
 		return;
 
-	local_irq_save(flags);
+	guard(preempt_notrace)();
 
 	stat = this_cpu_ptr(&ftrace_profile_stats);
 	if (!stat->hash || !ftrace_profile_enabled)
-		goto out;
+		return;
 
 	rec = ftrace_find_profiled_func(stat, ip);
 	if (!rec) {
 		rec = ftrace_profile_alloc(stat, ip);
 		if (!rec)
-			goto out;
+			return;
 	}
 
 	rec->counter++;
- out:
-	local_irq_restore(flags);
 }
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -856,19 +853,19 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
 	unsigned long long calltime;
 	unsigned long long rettime = trace_clock_local();
 	struct ftrace_profile *rec;
-	unsigned long flags;
 	int size;
 
-	local_irq_save(flags);
+	guard(preempt_notrace)();
+
 	stat = this_cpu_ptr(&ftrace_profile_stats);
 	if (!stat->hash || !ftrace_profile_enabled)
-		goto out;
+		return;
 
 	profile_data = fgraph_retrieve_data(gops->idx, &size);
 
 	/* If the calltime was zero'd ignore it */
 	if (!profile_data || !profile_data->calltime)
-		goto out;
+		return;
 
 	calltime = rettime - profile_data->calltime;
 
@@ -896,9 +893,6 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
 		rec->time += calltime;
 		rec->time_squared += calltime * calltime;
 	}
-
- out:
-	local_irq_restore(flags);
 }
 
 static struct fgraph_ops fprofiler_ops = {
-- 
2.45.2




* [PATCH 3/4] ftrace: Remove unneeded goto jumps
From: Steven Rostedt @ 2024-12-23 18:46 UTC
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

There are some goto jumps that exit a function just to return a value.
The code after the label doesn't free anything, nor does it release any
locks. It simply returns the variable that was set before the jump.

Remove these unneeded goto jumps.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 63a9ffa65e17..2c1691aa1d2f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1669,14 +1669,12 @@ unsigned long ftrace_location(unsigned long ip)
 	loc = ftrace_location_range(ip, ip);
 	if (!loc) {
 		if (!kallsyms_lookup_size_offset(ip, &size, &offset))
-			goto out;
+			return 0;
 
 		/* map sym+0 to __fentry__ */
 		if (!offset)
 			loc = ftrace_location_range(ip, ip + size - 1);
 	}
-
-out:
 	return loc;
 }
 
@@ -2071,7 +2069,7 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
 			continue;
 
 		if (rec == end)
-			goto err_out;
+			return -EBUSY;
 
 		in_old = !!ftrace_lookup_ip(old_hash, rec->ip);
 		in_new = !!ftrace_lookup_ip(new_hash, rec->ip);
@@ -2084,7 +2082,6 @@ static int __ftrace_hash_update_ipmodify(struct ftrace_ops *ops,
 			rec->flags |= FTRACE_FL_IPMODIFY;
 	} while_for_each_ftrace_rec();
 
-err_out:
 	return -EBUSY;
 }
 
@@ -5720,12 +5717,10 @@ ftrace_regex_write(struct file *file, const char __user *ubuf,
 					   parser->idx, enable);
 		trace_parser_clear(parser);
 		if (ret < 0)
-			goto out;
+			return ret;
 	}
 
-	ret = read;
- out:
-	return ret;
+	return read;
 }
 
 ssize_t
-- 
2.45.2




* [PATCH 4/4] ftrace: Switch ftrace.c code over to use guard()
From: Steven Rostedt @ 2024-12-23 18:46 UTC
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

There are a few functions in ftrace.c that have "goto out" or the
equivalent on error in order to release locks that were taken. This can
be error prone and just makes the code more complex.

Switch every location that ends with unlocking a mutex on error over to
using the guard(mutex)() infrastructure to let the compiler worry about
releasing locks. This makes the code easier to read and understand.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/ftrace.c | 97 +++++++++++++++----------------------------
 1 file changed, 34 insertions(+), 63 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 2c1691aa1d2f..6ebc76bafd38 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -536,24 +536,21 @@ static int function_stat_show(struct seq_file *m, void *v)
 {
 	struct ftrace_profile *rec = v;
 	char str[KSYM_SYMBOL_LEN];
-	int ret = 0;
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	static struct trace_seq s;
 	unsigned long long avg;
 	unsigned long long stddev;
 #endif
-	mutex_lock(&ftrace_profile_lock);
+	guard(mutex)(&ftrace_profile_lock);
 
 	/* we raced with function_profile_reset() */
-	if (unlikely(rec->counter == 0)) {
-		ret = -EBUSY;
-		goto out;
-	}
+	if (unlikely(rec->counter == 0))
+		return -EBUSY;
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 	avg = div64_ul(rec->time, rec->counter);
 	if (tracing_thresh && (avg < tracing_thresh))
-		goto out;
+		return 0;
 #endif
 
 	kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);
@@ -590,10 +587,8 @@ static int function_stat_show(struct seq_file *m, void *v)
 	trace_print_seq(m, &s);
 #endif
 	seq_putc(m, '\n');
-out:
-	mutex_unlock(&ftrace_profile_lock);
 
-	return ret;
+	return 0;
 }
 
 static void ftrace_profile_reset(struct ftrace_profile_stat *stat)
@@ -944,20 +939,16 @@ ftrace_profile_write(struct file *filp, const char __user *ubuf,
 
 	val = !!val;
 
-	mutex_lock(&ftrace_profile_lock);
+	guard(mutex)(&ftrace_profile_lock);
 	if (ftrace_profile_enabled ^ val) {
 		if (val) {
 			ret = ftrace_profile_init();
-			if (ret < 0) {
-				cnt = ret;
-				goto out;
-			}
+			if (ret < 0)
+				return ret;
 
 			ret = register_ftrace_profiler();
-			if (ret < 0) {
-				cnt = ret;
-				goto out;
-			}
+			if (ret < 0)
+				return ret;
 			ftrace_profile_enabled = 1;
 		} else {
 			ftrace_profile_enabled = 0;
@@ -968,8 +959,6 @@ ftrace_profile_write(struct file *filp, const char __user *ubuf,
 			unregister_ftrace_profiler();
 		}
 	}
- out:
-	mutex_unlock(&ftrace_profile_lock);
 
 	*ppos += cnt;
 
@@ -5610,20 +5599,15 @@ static DEFINE_MUTEX(ftrace_cmd_mutex);
 __init int register_ftrace_command(struct ftrace_func_command *cmd)
 {
 	struct ftrace_func_command *p;
-	int ret = 0;
 
-	mutex_lock(&ftrace_cmd_mutex);
+	guard(mutex)(&ftrace_cmd_mutex);
 	list_for_each_entry(p, &ftrace_commands, list) {
-		if (strcmp(cmd->name, p->name) == 0) {
-			ret = -EBUSY;
-			goto out_unlock;
-		}
+		if (strcmp(cmd->name, p->name) == 0)
+			return -EBUSY;
 	}
 	list_add(&cmd->list, &ftrace_commands);
- out_unlock:
-	mutex_unlock(&ftrace_cmd_mutex);
 
-	return ret;
+	return 0;
 }
 
 /*
@@ -5633,20 +5617,17 @@ __init int register_ftrace_command(struct ftrace_func_command *cmd)
 __init int unregister_ftrace_command(struct ftrace_func_command *cmd)
 {
 	struct ftrace_func_command *p, *n;
-	int ret = -ENODEV;
 
-	mutex_lock(&ftrace_cmd_mutex);
+	guard(mutex)(&ftrace_cmd_mutex);
+
 	list_for_each_entry_safe(p, n, &ftrace_commands, list) {
 		if (strcmp(cmd->name, p->name) == 0) {
-			ret = 0;
 			list_del_init(&p->list);
-			goto out_unlock;
+			return 0;
 		}
 	}
- out_unlock:
-	mutex_unlock(&ftrace_cmd_mutex);
 
-	return ret;
+	return -ENODEV;
 }
 
 static int ftrace_process_regex(struct ftrace_iterator *iter,
@@ -5656,7 +5637,7 @@ static int ftrace_process_regex(struct ftrace_iterator *iter,
 	struct trace_array *tr = iter->ops->private;
 	char *func, *command, *next = buff;
 	struct ftrace_func_command *p;
-	int ret = -EINVAL;
+	int ret;
 
 	func = strsep(&next, ":");
 
@@ -5673,17 +5654,14 @@ static int ftrace_process_regex(struct ftrace_iterator *iter,
 
 	command = strsep(&next, ":");
 
-	mutex_lock(&ftrace_cmd_mutex);
+	guard(mutex)(&ftrace_cmd_mutex);
+
 	list_for_each_entry(p, &ftrace_commands, list) {
-		if (strcmp(p->name, command) == 0) {
-			ret = p->func(tr, hash, func, command, next, enable);
-			goto out_unlock;
-		}
+		if (strcmp(p->name, command) == 0)
+			return p->func(tr, hash, func, command, next, enable);
 	}
- out_unlock:
-	mutex_unlock(&ftrace_cmd_mutex);
 
-	return ret;
+	return -EINVAL;
 }
 
 static ssize_t
@@ -8280,7 +8258,7 @@ pid_write(struct file *filp, const char __user *ubuf,
 	if (!cnt)
 		return 0;
 
-	mutex_lock(&ftrace_lock);
+	guard(mutex)(&ftrace_lock);
 
 	switch (type) {
 	case TRACE_PIDS:
@@ -8296,14 +8274,13 @@ pid_write(struct file *filp, const char __user *ubuf,
 					     lockdep_is_held(&ftrace_lock));
 		break;
 	default:
-		ret = -EINVAL;
 		WARN_ON_ONCE(1);
-		goto out;
+		return -EINVAL;
 	}
 
 	ret = trace_pid_write(filtered_pids, &pid_list, ubuf, cnt);
 	if (ret < 0)
-		goto out;
+		return ret;
 
 	switch (type) {
 	case TRACE_PIDS:
@@ -8332,11 +8309,8 @@ pid_write(struct file *filp, const char __user *ubuf,
 
 	ftrace_update_pid_func();
 	ftrace_startup_all(0);
- out:
-	mutex_unlock(&ftrace_lock);
 
-	if (ret > 0)
-		*ppos += ret;
+	*ppos += ret;
 
 	return ret;
 }
@@ -8739,17 +8713,17 @@ static int
 ftrace_enable_sysctl(const struct ctl_table *table, int write,
 		     void *buffer, size_t *lenp, loff_t *ppos)
 {
-	int ret = -ENODEV;
+	int ret;
 
-	mutex_lock(&ftrace_lock);
+	guard(mutex)(&ftrace_lock);
 
 	if (unlikely(ftrace_disabled))
-		goto out;
+		return -ENODEV;
 
 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
 
 	if (ret || !write || (last_ftrace_enabled == !!ftrace_enabled))
-		goto out;
+		return ret;
 
 	if (ftrace_enabled) {
 
@@ -8763,8 +8737,7 @@ ftrace_enable_sysctl(const struct ctl_table *table, int write,
 	} else {
 		if (is_permanent_ops_registered()) {
 			ftrace_enabled = true;
-			ret = -EBUSY;
-			goto out;
+			return -EBUSY;
 		}
 
 		/* stopping ftrace calls (just send to ftrace_stub) */
@@ -8774,9 +8747,7 @@ ftrace_enable_sysctl(const struct ctl_table *table, int write,
 	}
 
 	last_ftrace_enabled = !!ftrace_enabled;
- out:
-	mutex_unlock(&ftrace_lock);
-	return ret;
+	return 0;
 }
 
 static struct ctl_table ftrace_sysctls[] = {
-- 
2.45.2



