* ftrace regression with PREEMPT_RT kernels
@ 2025-06-05 9:03 Flot, Julien
2025-06-05 12:28 ` Steven Rostedt
0 siblings, 1 reply; 6+ messages in thread
From: Flot, Julien @ 2025-06-05 9:03 UTC (permalink / raw)
To: mathieu.desnoyers@efficios.com
Cc: Schmidt, Adriaan, Ziegler, Andreas, Kiszka, Jan,
linux-trace-kernel@vger.kernel.org, MOESSBAUER, Felix,
linux-rt-users@vger.kernel.org
Hi,
Sorry to create this duplicate message, the previous one was not
formatted as plain text ...
With recent versions of the kernel, ftrace seems to experience some
latencies when writing events filter files on a real time kernel.
The latencies add an extra minute of runtime with trace-cmd
compared to a regular execution, and sometimes `trace-cmd` just hangs
for multiple minutes.
The bug has been tested and reproduced with these versions of the
kernel (all with real time enabled): 6.12.19, 6.14-rc7, 6.15 and
current upstream.
The bug has also been reproduced on two machines, both running Ubuntu
24.04 with an upstream kernel, one with an Intel(R) Xeon(R) Gold
5512U, the other with an Intel(R) Xeon(R) E-2278GE. The bug also
appears in virtualised environments (KVM/QEMU).
I have run `git bisect` between v6.12 and v6.15; the faulty commit
seems to be this one: a363d27cdbc2bc2d1899b5a1520b64e3590fcd9a
To reproduce the bug I used this script:
```
while read file; do
	echo "0" > "$file"
done
```
And ran it like this:
```
sudo time ./reproduce.sh < events.txt
```
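The contents of `events.txt` are listed in the appendix, but how the file was generated is not shown in the report. As an illustration only (the generation code below is my assumption, not the reporter's), the list could be produced by collecting one `filter` file per event system under tracefs; a temporary mock tree stands in for tracefs so the sketch runs without root:

```python
import pathlib
import tempfile

# Mock tracefs layout; on a real system this would be
# pathlib.Path("/sys/kernel/tracing") and require root.
tracefs = pathlib.Path(tempfile.mkdtemp())
for system in ("sched", "irq", "timer"):
    d = tracefs / "events" / system
    d.mkdir(parents=True)
    (d / "filter").write_text("none\n")

# One "filter" file per event system, as in the appendix list.
filters = sorted(str(p) for p in (tracefs / "events").glob("*/filter"))
pathlib.Path("events.txt").write_text("\n".join(filters) + "\n")
print(len(filters))
```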
When measuring execution time with `/usr/bin/time`, I get these
results:
Non-RT kernel v6.15:
```
0.06user 0.02system 0:08.77elapsed 1%CPU (0avgtext+0avgdata
22928maxresident)k
2016inputs+0outputs (9major+8835minor)pagefaults 0swaps
```
RT kernel v6.15:
```
0.00user 0.00system 1:06.96elapsed 0%CPU (0avgtext+0avgdata
6888maxresident)k
0inputs+0outputs (0major+595minor)pagefaults 0swaps
```
RT kernel on a363d27cdbc2bc2d1899b5a1520b64e3590fcd9a:
```
0.00user 0.00system 0:43.21elapsed 0%CPU (0avgtext+0avgdata
6760maxresident)k
0inputs+0outputs (0major+596minor)pagefaults 0swaps
```
RT kernel on 4aadde89d81fbf4cf3105a61dbc48888b819ecfb:
```
0.00user 0.00system 0:01.25elapsed 0%CPU (0avgtext+0avgdata
6828maxresident)k
0inputs+0outputs (0major+602minor)pagefaults 0swaps
```
Problematic files were identified by running the following command:
`sudo strace -tt trace-cmd record -p function sleep 1`. Here is an
excerpt of the output produced by strace (on v6.15):
```
...
07:56:34.868273 openat(AT_FDCWD,
"/sys/kernel/tracing/events/osnoise/filter", O_WRONLY) = 4
07:56:34.868401 write(4, "0", 1) = 1
07:56:35.485148 close(4) = 0
07:56:35.485251 openat(AT_FDCWD,
"/sys/kernel/tracing/events/page_isolation/filter", O_WRONLY) = 4
07:56:35.485375 write(4, "0", 1) = 1
07:56:36.397172 close(4) = 0
07:56:36.397328 openat(AT_FDCWD,
"/sys/kernel/tracing/events/page_pool/filter", O_WRONLY) = 4
07:56:36.397465 write(4, "0", 1) = 1
07:56:36.786157 close(4) = 0
07:56:36.786256 openat(AT_FDCWD,
"/sys/kernel/tracing/events/pagemap/filter", O_WRONLY) = 4
07:56:36.786379 write(4, "0", 1) = 1
07:56:37.446188 close(4) = 0
07:56:37.446286 openat(AT_FDCWD,
"/sys/kernel/tracing/events/percpu/filter", O_WRONLY) = 4
07:56:37.446403 write(4, "0", 1) = 1
07:56:37.738142 close(4) = 0
...
```
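The per-file latency is visible in the excerpt as the gap between each write() and the following close(). A small sketch of how those deltas can be computed (the parsing code is mine, not part of the report; the sample lines are pasted from the strace output above):

```python
import re
from datetime import datetime

log = """\
07:56:34.868273 openat(AT_FDCWD, "/sys/kernel/tracing/events/osnoise/filter", O_WRONLY) = 4
07:56:34.868401 write(4, "0", 1) = 1
07:56:35.485148 close(4) = 0
07:56:35.485251 openat(AT_FDCWD, "/sys/kernel/tracing/events/page_isolation/filter", O_WRONLY) = 4
07:56:35.485375 write(4, "0", 1) = 1
07:56:36.397172 close(4) = 0
"""

def ts(stamp):
    """Convert an 'HH:MM:SS.uuuuuu' strace -tt stamp to seconds."""
    t = datetime.strptime(stamp, "%H:%M:%S.%f")
    return t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6

path, wrote, deltas = None, None, {}
for line in log.splitlines():
    stamp, rest = line.split(" ", 1)
    if rest.startswith("openat"):
        path = re.search(r'"([^"]+)"', rest).group(1)
    elif rest.startswith("write"):
        wrote = ts(stamp)
    elif rest.startswith("close"):
        # Latency charged to this filter file: write() -> close() gap.
        deltas[path] = ts(stamp) - wrote

for p, d in deltas.items():
    print(f"{p}: {d:.3f} s")
```

On these two samples the gaps come out around 0.6 s and 0.9 s per filter file, which multiplied over the ~130 files in the appendix accounts for the extra minute of runtime.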
If you have an idea of how to fix this issue, I would be very
grateful.
--
Julien Flot
Siemens AG
www.siemens.com
Appendix
Faulty commit message:
```
commit a363d27cdbc2bc2d1899b5a1520b64e3590fcd9a
Author: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Date: Tue Oct 8 21:07:15 2024 -0400
tracing: Allow system call tracepoints to handle page faults
Use Tasks Trace RCU to protect iteration of system call enter/exit
tracepoint probes to allow those probes to handle page faults.
In preparation for this change, all tracers registering to system call
enter/exit tracepoints should expect those to be called with preemption
enabled.
This allows tracers to fault-in userspace system call arguments such as
path strings within their probe callbacks.
Cc: Michael Jeanson <mjeanson@efficios.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradea.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Yonghong Song <yhs@fb.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: bpf@vger.kernel.org
Cc: Joel Fernandes <joel@joelfernandes.org>
Link: https://lore.kernel.org/20241009010718.2050182-6-mathieu.desnoyers@efficios.com
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
include/linux/tracepoint.h | 18 ++++++++++++++++--
init/Kconfig | 1 +
2 files changed, 17 insertions(+), 2 deletions(-)
```
List of files identified as problematic (on v6.15):
```
/sys/kernel/tracing/events/alarmtimer/filter
/sys/kernel/tracing/events/amd_cpu/filter
/sys/kernel/tracing/events/avc/filter
/sys/kernel/tracing/events/block/filter
/sys/kernel/tracing/events/bpf_test_run/filter
/sys/kernel/tracing/events/bpf_trace/filter
/sys/kernel/tracing/events/bridge/filter
/sys/kernel/tracing/events/btrfs/filter
/sys/kernel/tracing/events/capability/filter
/sys/kernel/tracing/events/cfg80211/filter
/sys/kernel/tracing/events/cgroup/filter
/sys/kernel/tracing/events/clk/filter
/sys/kernel/tracing/events/compaction/filter
/sys/kernel/tracing/events/context_tracking/filter
/sys/kernel/tracing/events/cpuhp/filter
/sys/kernel/tracing/events/cros_ec/filter
/sys/kernel/tracing/events/csd/filter
/sys/kernel/tracing/events/cxl/filter
/sys/kernel/tracing/events/dev/filter
/sys/kernel/tracing/events/devfreq/filter
/sys/kernel/tracing/events/devlink/filter
/sys/kernel/tracing/events/dma/filter
/sys/kernel/tracing/events/dma_fence/filter
/sys/kernel/tracing/events/drm/filter
/sys/kernel/tracing/events/error_report/filter
/sys/kernel/tracing/events/exceptions/filter
/sys/kernel/tracing/events/ext4/filter
/sys/kernel/tracing/events/fib/filter
/sys/kernel/tracing/events/fib6/filter
/sys/kernel/tracing/events/filelock/filter
/sys/kernel/tracing/events/filemap/filter
/sys/kernel/tracing/events/fs_dax/filter
/sys/kernel/tracing/events/fuse/filter
/sys/kernel/tracing/events/gpio/filter
/sys/kernel/tracing/events/handshake/filter
/sys/kernel/tracing/events/hugetlbfs/filter
/sys/kernel/tracing/events/hwmon/filter
/sys/kernel/tracing/events/hyperv/filter
/sys/kernel/tracing/events/i2c/filter
/sys/kernel/tracing/events/icmp/filter
/sys/kernel/tracing/events/initcall/filter
/sys/kernel/tracing/events/intel_ifs/filter
/sys/kernel/tracing/events/intel_iommu/filter
/sys/kernel/tracing/events/interconnect/filter
/sys/kernel/tracing/events/io_uring/filter
/sys/kernel/tracing/events/iocost/filter
/sys/kernel/tracing/events/iomap/filter
/sys/kernel/tracing/events/iommu/filter
/sys/kernel/tracing/events/ipi/filter
/sys/kernel/tracing/events/irq/filter
/sys/kernel/tracing/events/irq_matrix/filter
/sys/kernel/tracing/events/irq_vectors/filter
/sys/kernel/tracing/events/jbd2/filter
/sys/kernel/tracing/events/kmem/filter
/sys/kernel/tracing/events/kvm/filter
/sys/kernel/tracing/events/kvmmmu/filter
/sys/kernel/tracing/events/libata/filter
/sys/kernel/tracing/events/lock/filter
/sys/kernel/tracing/events/maple_tree/filter
/sys/kernel/tracing/events/mce/filter
/sys/kernel/tracing/events/mctp/filter
/sys/kernel/tracing/events/mdio/filter
/sys/kernel/tracing/events/mei/filter
/sys/kernel/tracing/events/memcg/filter
/sys/kernel/tracing/events/migrate/filter
/sys/kernel/tracing/events/mmap/filter
/sys/kernel/tracing/events/mmap_lock/filter
/sys/kernel/tracing/events/mmc/filter
/sys/kernel/tracing/events/module/filter
/sys/kernel/tracing/events/mptcp/filter
/sys/kernel/tracing/events/msr/filter
/sys/kernel/tracing/events/napi/filter
/sys/kernel/tracing/events/neigh/filter
/sys/kernel/tracing/events/net/filter
/sys/kernel/tracing/events/netlink/filter
/sys/kernel/tracing/events/nmi/filter
/sys/kernel/tracing/events/notifier/filter
/sys/kernel/tracing/events/oom/filter
/sys/kernel/tracing/events/osnoise/filter
/sys/kernel/tracing/events/page_isolation/filter
/sys/kernel/tracing/events/page_pool/filter
/sys/kernel/tracing/events/pagemap/filter
/sys/kernel/tracing/events/percpu/filter
/sys/kernel/tracing/events/power/filter
/sys/kernel/tracing/events/printk/filter
/sys/kernel/tracing/events/pwm/filter
/sys/kernel/tracing/events/qdisc/filter
/sys/kernel/tracing/events/qrtr/filter
/sys/kernel/tracing/events/ras/filter
/sys/kernel/tracing/events/raw_syscalls/filter
/sys/kernel/tracing/events/rcu/filter
/sys/kernel/tracing/events/regmap/filter
/sys/kernel/tracing/events/regulator/filter
/sys/kernel/tracing/events/resctrl/filter
/sys/kernel/tracing/events/rpm/filter
/sys/kernel/tracing/events/rseq/filter
/sys/kernel/tracing/events/rtc/filter
/sys/kernel/tracing/events/rv/filter
/sys/kernel/tracing/events/sched/filter
/sys/kernel/tracing/events/scsi/filter
/sys/kernel/tracing/events/sd/filter
/sys/kernel/tracing/events/signal/filter
/sys/kernel/tracing/events/skb/filter
/sys/kernel/tracing/events/smbus/filter
/sys/kernel/tracing/events/sock/filter
/sys/kernel/tracing/events/spi/filter
/sys/kernel/tracing/events/swiotlb/filter
/sys/kernel/tracing/events/sync_trace/filter
/sys/kernel/tracing/events/syscalls/filter
/sys/kernel/tracing/events/task/filter
/sys/kernel/tracing/events/tcp/filter
/sys/kernel/tracing/events/thermal/filter
/sys/kernel/tracing/events/thermal_power_allocator/filter
/sys/kernel/tracing/events/timer/filter
/sys/kernel/tracing/events/timer_migration/filter
/sys/kernel/tracing/events/timestamp/filter
/sys/kernel/tracing/events/tlb/filter
/sys/kernel/tracing/events/udp/filter
/sys/kernel/tracing/events/vmalloc/filter
/sys/kernel/tracing/events/vmscan/filter
/sys/kernel/tracing/events/vsock/filter
/sys/kernel/tracing/events/vsyscall/filter
/sys/kernel/tracing/events/watchdog/filter
/sys/kernel/tracing/events/wbt/filter
/sys/kernel/tracing/events/workqueue/filter
/sys/kernel/tracing/events/writeback/filter
/sys/kernel/tracing/events/x86_fpu/filter
/sys/kernel/tracing/events/xdp/filter
/sys/kernel/tracing/events/xen/filter
/sys/kernel/tracing/events/xhci-hcd/filter
```
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: ftrace regression with PREEMPT_RT kernels
2025-06-05 9:03 ftrace regression with PREEMPT_RT kernels Flot, Julien
@ 2025-06-05 12:28 ` Steven Rostedt
2025-06-05 14:41 ` Flot, Julien
0 siblings, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2025-06-05 12:28 UTC (permalink / raw)
To: Flot, Julien
Cc: mathieu.desnoyers@efficios.com, Schmidt, Adriaan,
Ziegler, Andreas, Kiszka, Jan, linux-trace-kernel@vger.kernel.org,
MOESSBAUER, Felix, linux-rt-users@vger.kernel.org
On Thu, 5 Jun 2025 09:03:03 +0000
"Flot, Julien" <julien.flot@siemens.com> wrote:
> Hi,
>
> Sorry to create this duplicate message, the previous one was not
> formatted as plain text ...
>
> With recent versions of the kernel, ftrace seems to experience some
> latencies when writing events filter files on a real time kernel.
> Latencies experienced add a extra minute of runtime with trace-cmd
> compare to a regular execution and sometimes `trace-cmd` just hangs for
> multiples minutes.
>
> The bug has been tested and reproduced with these version of the kernel
> (all with real time enable) : 6.12.19, 6.14-rc7, 6.15 and current
> upstream.
> The bug has also been reproduced on two machines, both running Ubuntu
> 24.04 with an upstream kernel, one running with an INTEL(R) XEON(R)
> GOLD 5512U, the other a Intel(R) Xeon(R) E-2278GE. The bug also appears
> is virtualised environments (KVM/QEMU).
> I have run `git-bisect` between v6.12 and v6.15, the faulty commit
> seems to be this one : a363d27cdbc2bc2d1899b5a1520b64e3590fcd9a
>
> Faulty commit message :
> ```
> commit a363d27cdbc2bc2d1899b5a1520b64e3590fcd9a
> Author: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> Date: Tue Oct 8 21:07:15 2024 -0400
> tracing: Allow system call tracepoints to handle page faults
>
> Use Tasks Trace RCU to protect iteration of system call enter/exit
> tracepoint probes to allow those probes to handle page faults.
>
> In preparation for this change, all tracers registering to system call
> enter/exit tracepoints should expect those to be called with preemption
> enabled.
>
> This allows tracers to fault-in userspace system call arguments such as
> path strings within their probe callbacks.
>
I believe I know the issue. The filter logic calls:
tracepoint_synchronize_unregister();
And that patch added:
@@ -107,6 +108,7 @@ void for_each_tracepoint_in_module(struct module *mod,
#ifdef CONFIG_TRACEPOINTS
static inline void tracepoint_synchronize_unregister(void)
{
+ synchronize_rcu_tasks_trace();
synchronize_rcu();
}
Where synchronize_rcu_tasks_trace() is a much more heavyweight call
than synchronize_rcu().
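As a userspace analogy (a toy model in plain Python, not kernel code, and the queue names are mine), the idea of the patch below is to stop blocking the writer in both grace periods and instead queue the free as a callback that chains through the two RCU flavors:

```python
from collections import deque

rcu_tasks_trace_q = deque()   # stands in for call_rcu_tasks_trace()
rcu_q = deque()               # stands in for call_rcu()
freed = []

def call_rcu_tasks_trace(cb):
    rcu_tasks_trace_q.append(cb)

def call_rcu(cb):
    rcu_q.append(cb)

def free_filter_list():
    freed.append("filter")        # the actual __free_filter() work

def free_filter_list_tasks():
    call_rcu(free_filter_list)    # chain into the second grace period

# Writer path: queue the free and return immediately instead of
# sleeping in synchronize_rcu_tasks_trace() + synchronize_rcu().
call_rcu_tasks_trace(free_filter_list_tasks)
assert not freed                  # writer returned; nothing freed yet

# Later, each flavor's grace period elapses and runs its callbacks.
while rcu_tasks_trace_q:
    rcu_tasks_trace_q.popleft()()
assert not freed                  # still waiting on normal RCU
while rcu_q:
    rcu_q.popleft()()
assert freed == ["filter"]        # freed only after both grace periods
```

The filter is still freed only after both grace periods have elapsed, but the write() to the filter file no longer has to wait for them.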
Can you try this patch to see if it helps?
-- Steve
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 10f714c80e4d..a3857c36684b 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -2158,6 +2158,29 @@ struct filter_list {
struct event_filter *filter;
};
+struct filter_head {
+ struct list_head list;
+ struct rcu_head rcu;
+};
+
+static void free_filter_list(struct rcu_head *rhp)
+{
+ struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
+ struct filter_list *filter_item, *tmp;
+
+ list_for_each_entry_safe(filter_item, tmp, &filter_list->list, list) {
+ __free_filter(filter_item->filter);
+ list_del(&filter_item->list);
+ kfree(filter_item);
+ }
+ kfree(filter_list);
+}
+
+static void free_filter_list_tasks(struct rcu_head *rhp)
+{
+ call_rcu(rhp, free_filter_list);
+}
+
static int process_system_preds(struct trace_subsystem_dir *dir,
struct trace_array *tr,
struct filter_parse_error *pe,
@@ -2167,10 +2190,16 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
struct filter_list *filter_item;
struct event_filter *filter = NULL;
struct filter_list *tmp;
- LIST_HEAD(filter_list);
+ struct filter_head *filter_list;
bool fail = true;
int err;
+ filter_list = kmalloc(sizeof(*filter_list), GFP_KERNEL);
+ if (!filter_list)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&filter_list->list);
+
list_for_each_entry(file, &tr->events, list) {
if (file->system != dir)
@@ -2197,7 +2226,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
if (!filter_item)
goto fail_mem;
- list_add_tail(&filter_item->list, &filter_list);
+ list_add_tail(&filter_item->list, &filter_list->list);
/*
* Regardless of if this returned an error, we still
* replace the filter for the call.
@@ -2217,31 +2246,22 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
* Do a synchronize_rcu() and to ensure all calls are
* done with them before we free them.
*/
- tracepoint_synchronize_unregister();
- list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
- __free_filter(filter_item->filter);
- list_del(&filter_item->list);
- kfree(filter_item);
- }
+ call_rcu_tasks(&filter_list->rcu, free_filter_list_tasks);
return 0;
fail:
/* No call succeeded */
- list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
- list_del(&filter_item->list);
- kfree(filter_item);
- }
+ free_filter_list(&filter_list->rcu);
parse_error(pe, FILT_ERR_BAD_SUBSYS_FILTER, 0);
return -EINVAL;
fail_mem:
__free_filter(filter);
+
/* If any call succeeded, we still need to sync */
if (!fail)
- tracepoint_synchronize_unregister();
- list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
- __free_filter(filter_item->filter);
- list_del(&filter_item->list);
- kfree(filter_item);
- }
+ call_rcu_tasks(&filter_list->rcu, free_filter_list_tasks);
+ else
+ free_filter_list(&filter_list->rcu);
+
return -ENOMEM;
}
* Re: ftrace regression with PREEMPT_RT kernels
2025-06-05 12:28 ` Steven Rostedt
@ 2025-06-05 14:41 ` Flot, Julien
2025-06-05 14:50 ` Steven Rostedt
0 siblings, 1 reply; 6+ messages in thread
From: Flot, Julien @ 2025-06-05 14:41 UTC (permalink / raw)
To: rostedt@goodmis.org
Cc: Schmidt, Adriaan, Ziegler, Andreas, Kiszka, Jan,
linux-trace-kernel@vger.kernel.org,
mathieu.desnoyers@efficios.com, MOESSBAUER, Felix,
linux-rt-users@vger.kernel.org
On Thu, 2025-06-05 at 08:28 -0400, Steven Rostedt wrote:
> On Thu, 5 Jun 2025 09:03:03 +0000
> "Flot, Julien" <julien.flot@siemens.com> wrote:
>
> > Hi,
> >
> > Sorry to create this duplicate message, the previous one was not
> > formatted as plain text ...
> >
> > With recent versions of the kernel, ftrace seems to experience some
> > latencies when writing events filter files on a real time kernel.
> > Latencies experienced add a extra minute of runtime with trace-cmd
> > compare to a regular execution and sometimes `trace-cmd` just hangs
> > for
> > multiples minutes.
> >
> > The bug has been tested and reproduced with these version of the
> > kernel
> > (all with real time enable) : 6.12.19, 6.14-rc7, 6.15 and current
> > upstream.
> > The bug has also been reproduced on two machines, both running
> > Ubuntu
> > 24.04 with an upstream kernel, one running with an INTEL(R) XEON(R)
> > GOLD 5512U, the other a Intel(R) Xeon(R) E-2278GE. The bug also
> > appears
> > is virtualised environments (KVM/QEMU).
> > I have run `git-bisect` between v6.12 and v6.15, the faulty commit
> > seems to be this one : a363d27cdbc2bc2d1899b5a1520b64e3590fcd9a
> >
>
>
> > Faulty commit message :
> > ```
> > commit a363d27cdbc2bc2d1899b5a1520b64e3590fcd9a
> > Author: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
> > Date: Tue Oct 8 21:07:15 2024 -0400
> > tracing: Allow system call tracepoints to handle page faults
> >
> > Use Tasks Trace RCU to protect iteration of system call enter/exit
> > tracepoint probes to allow those probes to handle page faults.
> >
> > In preparation for this change, all tracers registering to system
> > call
> > enter/exit tracepoints should expect those to be called with
> > preemption
> > enabled.
> >
> > This allows tracers to fault-in userspace system call arguments
> > such as
> > path strings within their probe callbacks.
> >
>
> I believe I know the issue. The filter logic calls:
>
> tracepoint_synchronize_unregister();
>
> And that patch added:
>
> @@ -107,6 +108,7 @@ void for_each_tracepoint_in_module(struct module
> *mod,
> #ifdef CONFIG_TRACEPOINTS
> static inline void tracepoint_synchronize_unregister(void)
> {
> + synchronize_rcu_tasks_trace();
> synchronize_rcu();
> }
>
>
> Where synchronize_rcu_tasks_trace() is much more heavy weight call
> than
> synchronize_rcu().
>
> Can you try this patch to see if it helps?
>
> -- Steve
>
> diff --git a/kernel/trace/trace_events_filter.c
> b/kernel/trace/trace_events_filter.c
> index 10f714c80e4d..a3857c36684b 100644
> --- a/kernel/trace/trace_events_filter.c
> +++ b/kernel/trace/trace_events_filter.c
> @@ -2158,6 +2158,29 @@ struct filter_list {
> struct event_filter *filter;
> };
>
> +struct filter_head {
> + struct list_head list;
> + struct rcu_head rcu;
> +};
> +
> +static void free_filter_list(struct rcu_head *rhp)
> +{
> + struct filter_head *filter_list = container_of(rhp, struct
> filter_head, rcu);
> + struct filter_list *filter_item, *tmp;
> +
> + list_for_each_entry_safe(filter_item, tmp, &filter_list-
> >list, list) {
> + __free_filter(filter_item->filter);
> + list_del(&filter_item->list);
> + kfree(filter_item);
> + }
> + kfree(filter_list);
> +}
> +
> +static void free_filter_list_tasks(struct rcu_head *rhp)
> +{
> + call_rcu(rhp, free_filter_list);
> +}
> +
> static int process_system_preds(struct trace_subsystem_dir *dir,
> struct trace_array *tr,
> struct filter_parse_error *pe,
> @@ -2167,10 +2190,16 @@ static int process_system_preds(struct
> trace_subsystem_dir *dir,
> struct filter_list *filter_item;
> struct event_filter *filter = NULL;
> struct filter_list *tmp;
> - LIST_HEAD(filter_list);
> + struct filter_head *filter_list;
> bool fail = true;
> int err;
>
> + filter_list = kmalloc(sizeof(*filter_list), GFP_KERNEL);
> + if (!filter_list)
> + return -ENOMEM;
> +
> + INIT_LIST_HEAD(&filter_list->list);
> +
> list_for_each_entry(file, &tr->events, list) {
>
> if (file->system != dir)
> @@ -2197,7 +2226,7 @@ static int process_system_preds(struct
> trace_subsystem_dir *dir,
> if (!filter_item)
> goto fail_mem;
>
> - list_add_tail(&filter_item->list, &filter_list);
> + list_add_tail(&filter_item->list, &filter_list-
> >list);
> /*
> * Regardless of if this returned an error, we still
> * replace the filter for the call.
> @@ -2217,31 +2246,22 @@ static int process_system_preds(struct
> trace_subsystem_dir *dir,
> * Do a synchronize_rcu() and to ensure all calls are
> * done with them before we free them.
> */
> - tracepoint_synchronize_unregister();
> - list_for_each_entry_safe(filter_item, tmp, &filter_list,
> list) {
> - __free_filter(filter_item->filter);
> - list_del(&filter_item->list);
> - kfree(filter_item);
> - }
> + call_rcu_tasks(&filter_list->rcu, free_filter_list_tasks);
> return 0;
> fail:
> /* No call succeeded */
> - list_for_each_entry_safe(filter_item, tmp, &filter_list,
> list) {
> - list_del(&filter_item->list);
> - kfree(filter_item);
> - }
> + free_filter_list(&filter_list->rcu);
> parse_error(pe, FILT_ERR_BAD_SUBSYS_FILTER, 0);
> return -EINVAL;
> fail_mem:
> __free_filter(filter);
> +
> /* If any call succeeded, we still need to sync */
> if (!fail)
> - tracepoint_synchronize_unregister();
> - list_for_each_entry_safe(filter_item, tmp, &filter_list,
> list) {
> - __free_filter(filter_item->filter);
> - list_del(&filter_item->list);
> - kfree(filter_item);
> - }
> + call_rcu_tasks(&filter_list->rcu,
> free_filter_list_tasks);
> + else
> + free_filter_list(&filter_list->rcu);
> +
> return -ENOMEM;
> }
>
I have tried the patch, I am afraid it does not help. I have roughly
the same results as in previous versions.
-- Julien Flot
* Re: ftrace regression with PREEMPT_RT kernels
2025-06-05 14:41 ` Flot, Julien
@ 2025-06-05 14:50 ` Steven Rostedt
2025-06-05 19:41 ` Steven Rostedt
0 siblings, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2025-06-05 14:50 UTC (permalink / raw)
To: Flot, Julien
Cc: Schmidt, Adriaan, Ziegler, Andreas, Kiszka, Jan,
linux-trace-kernel@vger.kernel.org,
mathieu.desnoyers@efficios.com, MOESSBAUER, Felix,
linux-rt-users@vger.kernel.org
On Thu, 5 Jun 2025 14:41:43 +0000
"Flot, Julien" <julien.flot@siemens.com> wrote:
> I have tried the patch, I am afraid it does not help. I have roughly
> the same results as in previous versions.
Bah, there are three more places that call
tracepoint_synchronize_unregister() :-p
Let me try to get rid of those too.
-- Steve
* Re: ftrace regression with PREEMPT_RT kernels
2025-06-05 14:50 ` Steven Rostedt
@ 2025-06-05 19:41 ` Steven Rostedt
2025-06-06 8:13 ` Flot, Julien
0 siblings, 1 reply; 6+ messages in thread
From: Steven Rostedt @ 2025-06-05 19:41 UTC (permalink / raw)
To: Flot, Julien
Cc: Schmidt, Adriaan, Ziegler, Andreas, Kiszka, Jan,
linux-trace-kernel@vger.kernel.org,
mathieu.desnoyers@efficios.com, MOESSBAUER, Felix,
linux-rt-users@vger.kernel.org, Paul E. McKenney
On Thu, 5 Jun 2025 10:50:50 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:
> On Thu, 5 Jun 2025 14:41:43 +0000
> "Flot, Julien" <julien.flot@siemens.com> wrote:
>
>
> > I have tried the patch, I am afraid it does not help. I have roughly
> > the same results as in previous versions.
>
> Bah, there's three more places that call
> tracepoint_synchronize_unregister() :-p
>
> Let me try to get rid of those too.
So I decided to enable PREEMPT_RT and yes, I can reproduce this too.
I ran the following:
# trace-cmd sqlhist -e -n sched_stack select start.prev_state as state, end.next_comm as comm, TIMESTAMP_DELTA_USECS as delta, start.STACKTRACE as stack from sched_switch as start join sched_switch as end on start.prev_pid = end.next_pid
The above creates a synthetic event that creates a stack trace when a task
schedules out and records it with the time it scheduled back in. Basically
the time a task is off the CPU. It also records the state of the task when
it left the CPU (running, blocked, sleeping, etc). It also saves the comm
of the task as "comm" (needed for the next command).
# echo 'hist:keys=state,stack.stacktrace:vals=delta:sort=state,delta if comm == "trace-cmd" && state & 3' > /sys/kernel/tracing/events/synthetic/sched_stack/trigger
The above creates a histogram with buckets per state, per stack, and the
value of the total time it was off the CPU for that stack trace. It filters
on tasks with "comm == trace-cmd" and only the sleeping and blocked states
(1 - sleeping, 2 - blocked).
# trace-cmd record -p function sleep 1
# cat /sys/kernel/tracing/events/synthetic/sched_stack/hist | tail -50
path_openat+0x1a36/0x2990
do_filp_open+0x1c5/0x420
do_sys_openat2+0xed/0x180
__x64_sys_openat+0x108/0x1d0
do_syscall_64+0x7b/0x420
} hitcount: 2 delta: 74781
{ state: 2, stack.stacktrace __schedule+0x1545/0x3700
schedule+0xe2/0x390
schedule_timeout+0x175/0x200
wait_for_completion_state+0x294/0x440
__wait_rcu_gp+0x247/0x4f0
synchronize_rcu_tasks_generic+0x151/0x230
apply_subsystem_event_filter+0x77c/0x1300
subsystem_filter_write+0x67/0xc0
vfs_write+0x1e2/0xeb0
ksys_write+0xff/0x1d0
do_syscall_64+0x7b/0x420
entry_SYSCALL_64_after_hwframe+0x76/0x7e
} hitcount: 1 delta: 210957
{ state: 2, stack.stacktrace __schedule+0x1545/0x3700
schedule+0xe2/0x390
schedule_timeout+0x175/0x200
wait_for_completion_state+0x294/0x440
__wait_rcu_gp+0x247/0x4f0
synchronize_rcu_normal+0x3db/0x5c0
apply_subsystem_event_filter+0xa30/0x1300
subsystem_filter_write+0x67/0xc0
vfs_write+0x1e2/0xeb0
ksys_write+0xff/0x1d0
do_syscall_64+0x7b/0x420
entry_SYSCALL_64_after_hwframe+0x76/0x7e
} hitcount: 237 delta: 6316500
{ state: 2, stack.stacktrace __schedule+0x1545/0x3700
schedule+0xe2/0x390
schedule_timeout+0x175/0x200
wait_for_completion_state+0x294/0x440
__wait_rcu_gp+0x247/0x4f0
synchronize_rcu_tasks_generic+0x151/0x230
apply_subsystem_event_filter+0xa2b/0x1300
subsystem_filter_write+0x67/0xc0
vfs_write+0x1e2/0xeb0
ksys_write+0xff/0x1d0
do_syscall_64+0x7b/0x420
entry_SYSCALL_64_after_hwframe+0x76/0x7e
} hitcount: 237 delta: 99756288 <<--------------- Delta is 99 seconds!
Totals:
Hits: 525
Entries: 21
Dropped: 0
This shows that it is definitely waiting on the synchronize_rcu_tasks() but
I see that it's called by apply_subsystem_event_filter() which I didn't
cover. But I looked at all the tracepoint_synchronize_unregister() in the
trace_events_filter.c code and replaced them with a double call_rcu_task/rcu().
This appears to fix the issue. With that, trace-cmd record is much faster
and the histogram now looks like this:
~# cat /sys/kernel/tracing/events/synthetic/sched_stack/hist | tail -50
schedule_timeout+0x175/0x200
wait_for_completion_state+0x294/0x440
__wait_rcu_gp+0x247/0x4f0
synchronize_rcu_normal+0x3db/0x5c0
ring_buffer_reset_online_cpus+0x142/0x350
tracing_reset_online_cpus+0x14b/0x1e0
tracing_open+0x335/0x440
do_dentry_open+0x4c6/0x17a0
vfs_open+0x82/0x360
path_openat+0x1a36/0x2990
do_filp_open+0x1c5/0x420
do_sys_openat2+0xed/0x180
__x64_sys_openat+0x108/0x1d0
} hitcount: 2 delta: 61260
{ state: 2, stack.stacktrace __schedule+0x1545/0x3700
schedule+0xe2/0x390
schedule_timeout+0x175/0x200
wait_for_completion_state+0x294/0x440
__wait_rcu_gp+0x247/0x4f0
synchronize_rcu_normal+0x3db/0x5c0
ring_buffer_reset_online_cpus+0x142/0x350
tracing_reset_online_cpus+0x14b/0x1e0
tracing_set_tracer+0x4d5/0xa80
tracing_set_trace_write+0x138/0x1e0
vfs_write+0x1e2/0xeb0
ksys_write+0xff/0x1d0
do_syscall_64+0x7b/0x420
entry_SYSCALL_64_after_hwframe+0x76/0x7e
} hitcount: 2 delta: 61699
{ state: 2, stack.stacktrace __schedule+0x1545/0x3700
schedule+0xe2/0x390
schedule_timeout+0x175/0x200
wait_for_completion_state+0x294/0x440
__wait_rcu_gp+0x247/0x4f0
synchronize_rcu_normal+0x3db/0x5c0
tracing_reset_online_cpus+0x8f/0x1e0
tracing_open+0x335/0x440
do_dentry_open+0x4c6/0x17a0
vfs_open+0x82/0x360
path_openat+0x1a36/0x2990
do_filp_open+0x1c5/0x420
do_sys_openat2+0xed/0x180
__x64_sys_openat+0x108/0x1d0
do_syscall_64+0x7b/0x420
} hitcount: 2 delta: 77044
Totals:
Hits: 55
Entries: 28
Dropped: 0
With the max delta wait time of just 77 milliseconds total.
-- Steve
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 2048560264bb..3ff782d6b522 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -1335,6 +1335,74 @@ static void filter_free_subsystem_preds(struct trace_subsystem_dir *dir,
}
}
+struct filter_list {
+ struct list_head list;
+ struct event_filter *filter;
+};
+
+struct filter_head {
+ struct list_head list;
+ struct rcu_head rcu;
+};
+
+
+static void free_filter_list(struct rcu_head *rhp)
+{
+ struct filter_head *filter_list = container_of(rhp, struct filter_head, rcu);
+ struct filter_list *filter_item, *tmp;
+
+ list_for_each_entry_safe(filter_item, tmp, &filter_list->list, list) {
+ __free_filter(filter_item->filter);
+ list_del(&filter_item->list);
+ kfree(filter_item);
+ }
+ kfree(filter_list);
+}
+
+static void free_filter_list_tasks(struct rcu_head *rhp)
+{
+ call_rcu(rhp, free_filter_list);
+}
+
+/*
+ * The tracepoint_synchronize_unregister() is a double rcu call.
+ * It calls synchronize_rcu_tasks_trace() followed by synchronize_rcu().
+ * Instead of waiting for it, simply call these via the call_rcu*()
+ * variants.
+ */
+static void delay_free_filter(struct filter_head *head)
+{
+ call_rcu_tasks_trace(&head->rcu, free_filter_list_tasks);
+}
+
+static void try_delay_free_filter(struct event_filter *filter)
+{
+ struct filter_head *head;
+ struct filter_list *item;
+
+ head = kmalloc(sizeof(*head), GFP_KERNEL);
+ if (!head)
+ goto free_now;
+
+ INIT_LIST_HEAD(&head->list);
+
+ item = kmalloc(sizeof(*item), GFP_KERNEL);
+ if (!item) {
+ kfree(head);
+ goto free_now;
+ }
+
+ item->filter = filter;
+ list_add_tail(&item->list, &head->list);
+ delay_free_filter(head);
+ return;
+
+ free_now:
+ /* Make sure the filter is not being used */
+ tracepoint_synchronize_unregister();
+ __free_filter(filter);
+}
+
static inline void __free_subsystem_filter(struct trace_event_file *file)
{
__free_filter(file->filter);
@@ -1342,15 +1410,53 @@ static inline void __free_subsystem_filter(struct trace_event_file *file)
}
static void filter_free_subsystem_filters(struct trace_subsystem_dir *dir,
- struct trace_array *tr)
+ struct trace_array *tr,
+ struct event_filter *filter)
{
struct trace_event_file *file;
+ struct filter_head *head;
+ struct filter_list *item;
+
+ head = kmalloc(sizeof(*head), GFP_KERNEL);
+ if (!head)
+ goto free_now;
+
+ INIT_LIST_HEAD(&head->list);
+
+ item = kmalloc(sizeof(*item), GFP_KERNEL);
+ if (!item) {
+ kfree(head);
+ goto free_now;
+ }
+
+ item->filter = filter;
+ list_add_tail(&item->list, &head->list);
list_for_each_entry(file, &tr->events, list) {
if (file->system != dir)
continue;
- __free_subsystem_filter(file);
+ item = kmalloc(sizeof(*item), GFP_KERNEL);
+ if (!item)
+ goto free_now;
+ item->filter = file->filter;
+ list_add_tail(&item->list, &head->list);
+ file->filter = NULL;
+ }
+
+ delay_free_filter(head);
+ return;
+ free_now:
+ tracepoint_synchronize_unregister();
+
+ if (head)
+ free_filter_list(&head->rcu);
+
+ list_for_each_entry(file, &tr->events, list) {
+ if (file->system != dir || !file->filter)
+ continue;
+ __free_filter(file->filter);
}
+ __free_filter(filter);
}
int filter_assign_type(const char *type)
@@ -2131,11 +2237,6 @@ static inline void event_clear_filter(struct trace_event_file *file)
RCU_INIT_POINTER(file->filter, NULL);
}
-struct filter_list {
- struct list_head list;
- struct event_filter *filter;
-};
-
static int process_system_preds(struct trace_subsystem_dir *dir,
struct trace_array *tr,
struct filter_parse_error *pe,
@@ -2144,11 +2245,16 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
struct trace_event_file *file;
struct filter_list *filter_item;
struct event_filter *filter = NULL;
- struct filter_list *tmp;
- LIST_HEAD(filter_list);
+ struct filter_head *filter_list;
bool fail = true;
int err;
+ filter_list = kmalloc(sizeof(*filter_list), GFP_KERNEL);
+ if (!filter_list)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&filter_list->list);
+
list_for_each_entry(file, &tr->events, list) {
if (file->system != dir)
@@ -2175,7 +2281,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
if (!filter_item)
goto fail_mem;
- list_add_tail(&filter_item->list, &filter_list);
+ list_add_tail(&filter_item->list, &filter_list->list);
/*
* Regardless of if this returned an error, we still
* replace the filter for the call.
@@ -2195,31 +2301,22 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
* Do a synchronize_rcu() and to ensure all calls are
* done with them before we free them.
*/
- tracepoint_synchronize_unregister();
- list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
- __free_filter(filter_item->filter);
- list_del(&filter_item->list);
- kfree(filter_item);
- }
+ delay_free_filter(filter_list);
return 0;
fail:
/* No call succeeded */
- list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
- list_del(&filter_item->list);
- kfree(filter_item);
- }
+ free_filter_list(&filter_list->rcu);
parse_error(pe, FILT_ERR_BAD_SUBSYS_FILTER, 0);
return -EINVAL;
fail_mem:
__free_filter(filter);
+
/* If any call succeeded, we still need to sync */
if (!fail)
- tracepoint_synchronize_unregister();
- list_for_each_entry_safe(filter_item, tmp, &filter_list, list) {
- __free_filter(filter_item->filter);
- list_del(&filter_item->list);
- kfree(filter_item);
- }
+ delay_free_filter(filter_list);
+ else
+ free_filter_list(&filter_list->rcu);
+
return -ENOMEM;
}
@@ -2361,9 +2458,7 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
event_clear_filter(file);
- /* Make sure the filter is not being used */
- tracepoint_synchronize_unregister();
- __free_filter(filter);
+ try_delay_free_filter(filter);
return 0;
}
@@ -2387,11 +2482,8 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
event_set_filter(file, filter);
- if (tmp) {
- /* Make sure the call is done with the filter */
- tracepoint_synchronize_unregister();
- __free_filter(tmp);
- }
+ if (tmp)
+ try_delay_free_filter(tmp);
}
return err;
@@ -2417,9 +2509,7 @@ int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
filter = system->filter;
system->filter = NULL;
/* Ensure all filters are no longer used */
- tracepoint_synchronize_unregister();
- filter_free_subsystem_filters(dir, tr);
- __free_filter(filter);
+ filter_free_subsystem_filters(dir, tr, filter);
return 0;
}
* Re: ftrace regression with PREEMPT_RT kernels
2025-06-05 19:41 ` Steven Rostedt
@ 2025-06-06 8:13 ` Flot, Julien
0 siblings, 0 replies; 6+ messages in thread
From: Flot, Julien @ 2025-06-06 8:13 UTC (permalink / raw)
To: rostedt@goodmis.org
Cc: Ziegler, Andreas, linux-rt-users@vger.kernel.org,
mathieu.desnoyers@efficios.com, paulmck@kernel.org,
MOESSBAUER, Felix, Schmidt, Adriaan, Kiszka, Jan,
linux-trace-kernel@vger.kernel.org
On Thu, 2025-06-05 at 15:41 -0400, Steven Rostedt wrote:
> On Thu, 5 Jun 2025 10:50:50 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
> > On Thu, 5 Jun 2025 14:41:43 +0000
> > "Flot, Julien" <julien.flot@siemens.com> wrote:
> >
> >
> > > I have tried the patch, I am afraid it does not help. I have
> > > roughly
> > > the same results as in previous versions.
> >
> > Bah, there's three more places that call
> > tracepoint_synchronize_unregister() :-p
> >
> > Let me try to get rid of those too.
>
> So I decided to enable PREEMPT_RT and yes, I can reproduce this too.
>
> I ran the following:
>
> # trace-cmd sqlhist -e -n sched_stack select start.prev_state as state, end.next_comm as comm, TIMESTAMP_DELTA_USECS as delta, start.STACKTRACE as stack from sched_switch as start join sched_switch as end on start.prev_pid = end.next_pid
>
> The above creates a synthetic event that creates a stack trace when a
> task
> schedules out and records it with the time it scheduled back in.
> Basically
> the time a task is off the CPU. It also records the state of the task
> when
> it left the CPU (running, blocked, sleeping, etc). It also saves the
> comm
> of the task as "comm" (needed for the next command).
>
> # echo 'hist:keys=state,stack.stacktrace:vals=delta:sort=state,delta if comm == "trace-cmd" && state & 3' > /sys/kernel/tracing/events/synthetic/sched_stack/trigger
>
> The above creates a histogram with buckets per state, per stack, and
> the
> value of the total time it was off the CPU for that stack trace. It
> filters
> on tasks with "comm == trace-cmd" and only the sleeping and blocked
> states
> (1 - sleeping, 2 - blocked).
>
> # trace-cmd record -p function sleep 1
>
> # cat /sys/kernel/tracing/events/synthetic/sched_stack/hist | tail -50
> path_openat+0x1a36/0x2990
> do_filp_open+0x1c5/0x420
> do_sys_openat2+0xed/0x180
> __x64_sys_openat+0x108/0x1d0
> do_syscall_64+0x7b/0x420
> } hitcount: 2 delta: 74781
> { state: 2, stack.stacktrace
> __schedule+0x1545/0x3700
> schedule+0xe2/0x390
> schedule_timeout+0x175/0x200
> wait_for_completion_state+0x294/0x440
> __wait_rcu_gp+0x247/0x4f0
> synchronize_rcu_tasks_generic+0x151/0x230
> apply_subsystem_event_filter+0x77c/0x1300
> subsystem_filter_write+0x67/0xc0
> vfs_write+0x1e2/0xeb0
> ksys_write+0xff/0x1d0
> do_syscall_64+0x7b/0x420
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
> } hitcount: 1 delta: 210957
> { state: 2, stack.stacktrace
> __schedule+0x1545/0x3700
> schedule+0xe2/0x390
> schedule_timeout+0x175/0x200
> wait_for_completion_state+0x294/0x440
> __wait_rcu_gp+0x247/0x4f0
> synchronize_rcu_normal+0x3db/0x5c0
> apply_subsystem_event_filter+0xa30/0x1300
> subsystem_filter_write+0x67/0xc0
> vfs_write+0x1e2/0xeb0
> ksys_write+0xff/0x1d0
> do_syscall_64+0x7b/0x420
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
> } hitcount: 237 delta: 6316500
> { state: 2, stack.stacktrace
> __schedule+0x1545/0x3700
> schedule+0xe2/0x390
> schedule_timeout+0x175/0x200
> wait_for_completion_state+0x294/0x440
> __wait_rcu_gp+0x247/0x4f0
> synchronize_rcu_tasks_generic+0x151/0x230
> apply_subsystem_event_filter+0xa2b/0x1300
> subsystem_filter_write+0x67/0xc0
> vfs_write+0x1e2/0xeb0
> ksys_write+0xff/0x1d0
> do_syscall_64+0x7b/0x420
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
> } hitcount: 237 delta: 99756288 <<--------------- Delta is 99 seconds!
>
> Totals:
> Hits: 525
> Entries: 21
> Dropped: 0
>
>
> This shows that it is definitely waiting on the
> synchronize_rcu_tasks() but
> I see that it's called by apply_subsystem_event_filter() which I
> didn't
> cover. But I looked at all the tracepoint_synchronize_unregister() in
> the
> trace_events_filter.c code and replaced them with a double
> call_rcu_task/rcu().
>
> This appears to fix the issue. With that, trace-cmd record is much
> faster
> and the histogram now looks like this:
>
> ~# cat /sys/kernel/tracing/events/synthetic/sched_stack/hist | tail -50
> schedule_timeout+0x175/0x200
> wait_for_completion_state+0x294/0x440
> __wait_rcu_gp+0x247/0x4f0
> synchronize_rcu_normal+0x3db/0x5c0
> ring_buffer_reset_online_cpus+0x142/0x350
> tracing_reset_online_cpus+0x14b/0x1e0
> tracing_open+0x335/0x440
> do_dentry_open+0x4c6/0x17a0
> vfs_open+0x82/0x360
> path_openat+0x1a36/0x2990
> do_filp_open+0x1c5/0x420
> do_sys_openat2+0xed/0x180
> __x64_sys_openat+0x108/0x1d0
> } hitcount: 2 delta: 61260
> { state: 2, stack.stacktrace
> __schedule+0x1545/0x3700
> schedule+0xe2/0x390
> schedule_timeout+0x175/0x200
> wait_for_completion_state+0x294/0x440
> __wait_rcu_gp+0x247/0x4f0
> synchronize_rcu_normal+0x3db/0x5c0
> ring_buffer_reset_online_cpus+0x142/0x350
> tracing_reset_online_cpus+0x14b/0x1e0
> tracing_set_tracer+0x4d5/0xa80
> tracing_set_trace_write+0x138/0x1e0
> vfs_write+0x1e2/0xeb0
> ksys_write+0xff/0x1d0
> do_syscall_64+0x7b/0x420
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
> } hitcount: 2 delta: 61699
> { state: 2, stack.stacktrace
> __schedule+0x1545/0x3700
> schedule+0xe2/0x390
> schedule_timeout+0x175/0x200
> wait_for_completion_state+0x294/0x440
> __wait_rcu_gp+0x247/0x4f0
> synchronize_rcu_normal+0x3db/0x5c0
> tracing_reset_online_cpus+0x8f/0x1e0
> tracing_open+0x335/0x440
> do_dentry_open+0x4c6/0x17a0
> vfs_open+0x82/0x360
> path_openat+0x1a36/0x2990
> do_filp_open+0x1c5/0x420
> do_sys_openat2+0xed/0x180
> __x64_sys_openat+0x108/0x1d0
> do_syscall_64+0x7b/0x420
> } hitcount: 2 delta: 77044
>
> Totals:
> Hits: 55
> Entries: 28
> Dropped: 0
>
>
> With the max delta wait time of just 77 milliseconds total.
>
> -- Steve
>
> [ patch snipped; quoted in full in the previous message ]
>
Thank you very much, it also works for me!
Tested-by: Julien Flot <julien.flot@siemens.com>
--
Julien Flot
Siemens AG
www.siemens.com
end of thread, other threads:[~2025-06-06 8:14 UTC | newest]
Thread overview: 6+ messages
2025-06-05 9:03 ftrace regression with PREEMPT_RT kernels Flot, Julien
2025-06-05 12:28 ` Steven Rostedt
2025-06-05 14:41 ` Flot, Julien
2025-06-05 14:50 ` Steven Rostedt
2025-06-05 19:41 ` Steven Rostedt
2025-06-06 8:13 ` Flot, Julien