* [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing
@ 2026-01-28 12:51 Yaxiong Tian
2026-01-28 12:53 ` [PATCH v4 1/5] tracing: Rename `eval_map_wq` and allow other parts of tracing to use it Yaxiong Tian
` (5 more replies)
0 siblings, 6 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-28 12:51 UTC (permalink / raw)
To: mhiramat, rostedt, axboe, mathieu.desnoyers, corbet, skhan
Cc: linux-trace-kernel, linux-block, linux-kernel, linux-doc,
Yaxiong Tian
On my ARM64 platform, I observed that certain tracing module
initializations run for up to 200ms—for example, init_kprobe_trace().
Analysis reveals the root cause: the execution flow eval_map_work_func()
→trace_event_update_with_eval_map()→trace_event_update_all()
is highly time-consuming. Although this flow is placed in eval_map_wq
for asynchronous execution, it holds the trace_event_sem lock, causing
other modules to be blocked either directly or indirectly. Similarly,
init_blk_tracer(), a device_initcall, also requires trace_event_sem.
To resolve this issue, I rename `eval_map_wq`, make it global, and move
other initialization functions under the tracing subsystem that are
related to this lock to run asynchronously on this workqueue. After this
optimization, boot time is reduced by approximately 200ms.
Given that asynchronous initialization makes it indeterminate when tracing
will begin, we introduce the trace_async_init kernel parameter. Asynchronous
behavior is enabled only when this parameter is explicitly provided.
Based on my analysis and testing, I've identified that only these two
locations significantly impact timing. Other initcall_* functions do not
exhibit relevant lock contention.
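As a rough userspace model of the deferral pattern this series applies (heavy initialization handed to a worker so the caller returns immediately, with a later sync point that waits for it), here is a minimal pthread sketch; all names are invented for illustration and none of this is kernel API:

```c
#include <pthread.h>

/* Userspace analog of the series' pattern: punt slow initialization
 * to a worker so the caller returns at once.  Illustrative only. */

static pthread_t worker;
static int init_done;

/* Stand-in for the slow body (eval-map updates, blk tracer setup). */
static void *deferred_init(void *arg)
{
	(void)arg;
	init_done = 1;		/* pretend the heavy setup happens here */
	return NULL;
}

/* The "initcall": queue the work and return immediately,
 * mirroring queue_work(trace_init_wq, &eval_map_work). */
static int start_deferred_init(void)
{
	return pthread_create(&worker, NULL, deferred_init, NULL);
}

/* The later sync point (cf. trace_eval_sync()): returns only
 * once the deferred work has actually finished. */
static void wait_deferred_init(void)
{
	pthread_join(worker, NULL);
}
```

The caller's fast return is exactly why the initcall timings below drop from hundreds of milliseconds to a few microseconds.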
A brief summary of the test results is as follows:
Before this patch series:
[ 0.224933] calling init_kprobe_trace+0x0/0xe0 @ 1
[ 0.455016] initcall init_kprobe_trace+0x0/0xe0 returned 0 after 230080 usecs
With only setup_boot_kprobe_events() optimized, you can still see:
[ 0.258609] calling init_blk_tracer+0x0/0x68 @ 1
[ 0.454991] initcall init_blk_tracer+0x0/0x68 returned 0 after 196377 usecs
After this patch series:
[ 0.224940] calling init_kprobe_trace+0x0/0xe0 @ 1
[ 0.224946] initcall init_kprobe_trace+0x0/0xe0 returned 0 after 3 usecs
skip --------
[ 0.264835] calling init_blk_tracer+0x0/0x68 @ 1
[ 0.264841] initcall init_blk_tracer+0x0/0x68 returned 0 after 2 usecs
---
Changes in v2:
- Rename eval_map_wq to trace_init_wq.
Changes in v3:
- Improve the PATCH 1/3 commit message
Changes in v4:
- add trace_async_init boot parameter in patch2
- add init_kprobe_trace's skip logic in patch3
- add Suggested-by tag
- Other synchronization-related optimizations for trace_async_init
Yaxiong Tian (5):
tracing: Rename `eval_map_wq` and allow other parts of tracing to use it
tracing: add trace_async_init boot parameter
tracing/kprobes: Skip setup_boot_kprobe_events() when no cmdline event
tracing/kprobes: Make setup_boot_kprobe_events() asynchronous when
trace_async_init set
blktrace: Make init_blk_tracer() asynchronous when trace_async_init
set
.../admin-guide/kernel-parameters.txt | 8 ++++++
kernel/trace/blktrace.c | 23 +++++++++++++++-
kernel/trace/trace.c | 27 ++++++++++++-------
kernel/trace/trace.h | 2 ++
kernel/trace/trace_kprobe.c | 18 ++++++++++++-
5 files changed, 67 insertions(+), 11 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v4 1/5] tracing: Rename `eval_map_wq` and allow other parts of tracing to use it
2026-01-28 12:51 [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Yaxiong Tian
@ 2026-01-28 12:53 ` Yaxiong Tian
2026-01-28 12:54 ` [PATCH v4 2/5] tracing: add trace_async_init boot parameter Yaxiong Tian
` (4 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-28 12:53 UTC (permalink / raw)
To: mhiramat, rostedt, axboe, mathieu.desnoyers, corbet, skhan
Cc: linux-trace-kernel, linux-block, linux-kernel, linux-doc,
Yaxiong Tian
The eval_map_work_func() function, though queued in eval_map_wq,
holds the trace_event_sem read-write lock for a long time during
kernel boot. This causes blocking issues for other functions.
Rename eval_map_wq to trace_init_wq and make it global, thereby
allowing other parts of tracing to schedule work on this queue
asynchronously and avoiding blockage of the main boot thread.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Yaxiong Tian <tianyaxiong@kylinos.cn>
---
kernel/trace/trace.c | 18 +++++++++---------
kernel/trace/trace.h | 1 +
2 files changed, 10 insertions(+), 9 deletions(-)
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index b1cb30a7b83d..01df88e77818 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -10774,7 +10774,7 @@ int tracing_init_dentry(void)
extern struct trace_eval_map *__start_ftrace_eval_maps[];
extern struct trace_eval_map *__stop_ftrace_eval_maps[];
-static struct workqueue_struct *eval_map_wq __initdata;
+struct workqueue_struct *trace_init_wq __initdata;
static struct work_struct eval_map_work __initdata;
static struct work_struct tracerfs_init_work __initdata;
@@ -10790,15 +10790,15 @@ static int __init trace_eval_init(void)
{
INIT_WORK(&eval_map_work, eval_map_work_func);
- eval_map_wq = alloc_workqueue("eval_map_wq", WQ_UNBOUND, 0);
- if (!eval_map_wq) {
- pr_err("Unable to allocate eval_map_wq\n");
+ trace_init_wq = alloc_workqueue("trace_init_wq", WQ_UNBOUND, 0);
+ if (!trace_init_wq) {
+ pr_err("Unable to allocate trace_init_wq\n");
/* Do work here */
eval_map_work_func(&eval_map_work);
return -ENOMEM;
}
- queue_work(eval_map_wq, &eval_map_work);
+ queue_work(trace_init_wq, &eval_map_work);
return 0;
}
@@ -10807,8 +10807,8 @@ subsys_initcall(trace_eval_init);
static int __init trace_eval_sync(void)
{
/* Make sure the eval map updates are finished */
- if (eval_map_wq)
- destroy_workqueue(eval_map_wq);
+ if (trace_init_wq)
+ destroy_workqueue(trace_init_wq);
return 0;
}
@@ -10969,9 +10969,9 @@ static __init int tracer_init_tracefs(void)
if (ret)
return 0;
- if (eval_map_wq) {
+ if (trace_init_wq) {
INIT_WORK(&tracerfs_init_work, tracer_init_tracefs_work_func);
- queue_work(eval_map_wq, &tracerfs_init_work);
+ queue_work(trace_init_wq, &tracerfs_init_work);
} else {
tracer_init_tracefs_work_func(NULL);
}
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index de4e6713b84e..9e8d52503618 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -769,6 +769,7 @@ extern cpumask_var_t __read_mostly tracing_buffer_mask;
extern unsigned long nsecs_to_usecs(unsigned long nsecs);
extern unsigned long tracing_thresh;
+extern struct workqueue_struct *trace_init_wq __initdata;
/* PID filtering */
--
2.25.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v4 2/5] tracing: add trace_async_init boot parameter
2026-01-28 12:51 [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Yaxiong Tian
2026-01-28 12:53 ` [PATCH v4 1/5] tracing: Rename `eval_map_wq` and allow other parts of tracing to use it Yaxiong Tian
@ 2026-01-28 12:54 ` Yaxiong Tian
2026-01-28 12:55 ` [PATCH v4 3/5] tracing/kprobes: Skip setup_boot_kprobe_events() when no cmdline event Yaxiong Tian
` (3 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-28 12:54 UTC (permalink / raw)
To: mhiramat, rostedt, axboe, mathieu.desnoyers, corbet, skhan
Cc: linux-trace-kernel, linux-block, linux-kernel, linux-doc,
Yaxiong Tian
Some users prioritize faster kernel boot. However, the tracing
subsystem, being critical infrastructure, traditionally initializes
serially. To balance the need for deterministic timing in tracing
against the demand for quicker startup, the trace_async_init boot
parameter is introduced.
When users do not require strict timing determinism for trace
features—or do not use tracing at all during boot—they can add this
cmdline parameter to accelerate kernel startup.
Suggested-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Yaxiong Tian <tianyaxiong@kylinos.cn>
---
Documentation/admin-guide/kernel-parameters.txt | 8 ++++++++
kernel/trace/trace.c | 9 +++++++++
kernel/trace/trace.h | 1 +
3 files changed, 18 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6b3460701910..d46fdfbfa961 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -7851,6 +7851,14 @@ Kernel parameters
This option can also be set at run time via the sysctl
option: kernel/traceoff_on_warning
+ trace_async_init
+ [FTRACE] Enable this option when faster boot time is the
+ priority. It is beneficial in scenarios where users either
+ do not require a strict initialization order for certain
+ tracing features during boot, or do not use tracing at all
+ in the early boot phase. This can lead to measurable
+ improvements in kernel startup speed.
+
transparent_hugepage=
[KNL]
Format: [always|madvise|never]
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 01df88e77818..9d571841fc84 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1725,6 +1725,15 @@ static int __init set_tracing_thresh(char *str)
}
__setup("tracing_thresh=", set_tracing_thresh);
+bool trace_async_init __initdata;
+
+static int __init setup_trace_async_init(char *str)
+{
+ trace_async_init = true;
+ return 1;
+}
+__setup("trace_async_init", setup_trace_async_init);
+
unsigned long nsecs_to_usecs(unsigned long nsecs)
{
return nsecs / 1000;
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 9e8d52503618..63ae83d7bd1c 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -769,6 +769,7 @@ extern cpumask_var_t __read_mostly tracing_buffer_mask;
extern unsigned long nsecs_to_usecs(unsigned long nsecs);
extern unsigned long tracing_thresh;
+extern bool trace_async_init __initdata;
extern struct workqueue_struct *trace_init_wq __initdata;
/* PID filtering */
--
2.25.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v4 3/5] tracing/kprobes: Skip setup_boot_kprobe_events() when no cmdline event
2026-01-28 12:51 [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Yaxiong Tian
2026-01-28 12:53 ` [PATCH v4 1/5] tracing: Rename `eval_map_wq` and allow other parts of tracing to use it Yaxiong Tian
2026-01-28 12:54 ` [PATCH v4 2/5] tracing: add trace_async_init boot parameter Yaxiong Tian
@ 2026-01-28 12:55 ` Yaxiong Tian
2026-01-28 12:55 ` [PATCH v4 4/5] tracing/kprobes: Make setup_boot_kprobe_events() asynchronous when trace_async_init set Yaxiong Tian
` (2 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-28 12:55 UTC (permalink / raw)
To: mhiramat, rostedt, axboe, mathieu.desnoyers, corbet, skhan
Cc: linux-trace-kernel, linux-block, linux-kernel, linux-doc,
Yaxiong Tian
When the 'kprobe_event=' kernel command-line parameter is not provided,
there is no need to execute setup_boot_kprobe_events().
This change optimizes the initialization function init_kprobe_trace()
by skipping unnecessary work and effectively prevents potential blocking
that could arise from contention on the event_mutex lock in subsequent
operations.
Signed-off-by: Yaxiong Tian <tianyaxiong@kylinos.cn>
---
kernel/trace/trace_kprobe.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 9953506370a5..89d2740f7bb5 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -2048,6 +2048,10 @@ static __init int init_kprobe_trace(void)
trace_create_file("kprobe_profile", TRACE_MODE_READ,
NULL, NULL, &kprobe_profile_ops);
+ /* If no 'kprobe_event=' cmd is provided, return directly. */
+ if (kprobe_boot_events_buf[0] == '\0')
+ return 0;
+
setup_boot_kprobe_events();
return 0;
--
2.25.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v4 4/5] tracing/kprobes: Make setup_boot_kprobe_events() asynchronous when trace_async_init set
2026-01-28 12:51 [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Yaxiong Tian
` (2 preceding siblings ...)
2026-01-28 12:55 ` [PATCH v4 3/5] tracing/kprobes: Skip setup_boot_kprobe_events() when no cmdline event Yaxiong Tian
@ 2026-01-28 12:55 ` Yaxiong Tian
2026-01-28 12:55 ` [PATCH v4 5/5] blktrace: Make init_blk_tracer() " Yaxiong Tian
2026-01-28 23:38 ` [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Masami Hiramatsu
5 siblings, 0 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-28 12:55 UTC (permalink / raw)
To: mhiramat, rostedt, axboe, mathieu.desnoyers, corbet, skhan
Cc: linux-trace-kernel, linux-block, linux-kernel, linux-doc,
Yaxiong Tian
During kernel boot, the setup_boot_kprobe_events() function causes
significant delays, increasing overall startup time.
The root cause is a lock contention chain: its callee
enable_boot_kprobe_events() requires event_mutex, which is
already held by early_event_add_tracer(); early_event_add_tracer()
itself is blocked waiting for the trace_event_sem read-write lock,
which is held for an extended period by trace_event_update_all().
To resolve this, when the trace_async_init parameter is enabled,
the execution of setup_boot_kprobe_events() is moved to the
trace_init_wq workqueue, allowing it to run asynchronously.
Signed-off-by: Yaxiong Tian <tianyaxiong@kylinos.cn>
---
kernel/trace/trace_kprobe.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 89d2740f7bb5..fe69fc03018b 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -2031,6 +2031,13 @@ static __init int init_kprobe_trace_early(void)
}
core_initcall(init_kprobe_trace_early);
+static struct work_struct kprobe_trace_work __initdata;
+
+static void __init kprobe_trace_works_func(struct work_struct *work)
+{
+ setup_boot_kprobe_events();
+}
+
/* Make a tracefs interface for controlling probe points */
static __init int init_kprobe_trace(void)
{
@@ -2052,7 +2059,12 @@ static __init int init_kprobe_trace(void)
if (kprobe_boot_events_buf[0] == '\0')
return 0;
- setup_boot_kprobe_events();
+ if (trace_init_wq && trace_async_init) {
+ INIT_WORK(&kprobe_trace_work, kprobe_trace_works_func);
+ queue_work(trace_init_wq, &kprobe_trace_work);
+ } else {
+ setup_boot_kprobe_events();
+ }
return 0;
}
--
2.25.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-28 12:51 [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Yaxiong Tian
` (3 preceding siblings ...)
2026-01-28 12:55 ` [PATCH v4 4/5] tracing/kprobes: Make setup_boot_kprobe_events() asynchronous when trace_async_init set Yaxiong Tian
@ 2026-01-28 12:55 ` Yaxiong Tian
2026-01-29 0:41 ` Steven Rostedt
2026-01-28 23:38 ` [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Masami Hiramatsu
5 siblings, 1 reply; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-28 12:55 UTC (permalink / raw)
To: mhiramat, rostedt, axboe, mathieu.desnoyers, corbet, skhan
Cc: linux-trace-kernel, linux-block, linux-kernel, linux-doc,
Yaxiong Tian
The init_blk_tracer() function causes significant boot delay because it
waits for the trace_event_sem lock held by trace_event_update_all().
Specifically, register_trace_event(), which it calls, requires
this lock, which is occupied for an extended period during boot.
To resolve this, when the trace_async_init parameter is enabled, the
bulk of init_blk_tracer() is moved to the trace_init_wq workqueue,
allowing it to run asynchronously and preventing it from blocking
the main boot thread.
Signed-off-by: Yaxiong Tian <tianyaxiong@kylinos.cn>
---
kernel/trace/blktrace.c | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index d031c8d80be4..56c7270ec447 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -1832,7 +1832,9 @@ static struct trace_event trace_blk_event = {
.funcs = &trace_blk_event_funcs,
};
-static int __init init_blk_tracer(void)
+static struct work_struct blktrace_works __initdata;
+
+static int __init __init_blk_tracer(void)
{
if (!register_trace_event(&trace_blk_event)) {
pr_warn("Warning: could not register block events\n");
@@ -1852,6 +1854,25 @@ static int __init init_blk_tracer(void)
return 0;
}
+static void __init blktrace_works_func(struct work_struct *work)
+{
+ __init_blk_tracer();
+}
+
+static int __init init_blk_tracer(void)
+{
+ int ret = 0;
+
+ if (trace_init_wq && trace_async_init) {
+ INIT_WORK(&blktrace_works, blktrace_works_func);
+ queue_work(trace_init_wq, &blktrace_works);
+ } else {
+ ret = __init_blk_tracer();
+ }
+
+ return ret;
+}
+
device_initcall(init_blk_tracer);
static int blk_trace_remove_queue(struct request_queue *q)
--
2.25.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing
2026-01-28 12:51 [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Yaxiong Tian
` (4 preceding siblings ...)
2026-01-28 12:55 ` [PATCH v4 5/5] blktrace: Make init_blk_tracer() " Yaxiong Tian
@ 2026-01-28 23:38 ` Masami Hiramatsu
5 siblings, 0 replies; 19+ messages in thread
From: Masami Hiramatsu @ 2026-01-28 23:38 UTC (permalink / raw)
To: Yaxiong Tian
Cc: rostedt, axboe, mathieu.desnoyers, corbet, skhan,
linux-trace-kernel, linux-block, linux-kernel, linux-doc
On Wed, 28 Jan 2026 20:51:17 +0800
Yaxiong Tian <tianyaxiong@kylinos.cn> wrote:
> On my ARM64 platform, I observed that certain tracing module
> initializations run for up to 200ms—for example, init_kprobe_trace().
> Analysis reveals the root cause: the execution flow eval_map_work_func()
> →trace_event_update_with_eval_map()→trace_event_update_all()
> is highly time-consuming. Although this flow is placed in eval_map_wq
> for asynchronous execution, it holds the trace_event_sem lock, causing
> other modules to be blocked either directly or indirectly. Also in
> init_blk_tracer(), this functions require trace_event_sem device_initcall.
>
> To resolve this issue, I rename `eval_map_wq` and make it global and moved
> other initialization functions under the tracing subsystem that are
> related to this lock to run asynchronously on this workqueue. After this
> optimization, boot time is reduced by approximately 200ms.
>
> Given that asynchronous initialization makes it indeterminate when tracing
> will begin, we introduce the trace_async_init kernel parameter.Asynchronous
> behavior is enabled only when this parameter is explicitly provided.
>
> Based on my analysis and testing, I've identified that only these two
> locations significantly impact timing. Other initcall_* functions do not
> exhibit relevant lock contention.
>
> A brief summary of the test results is as follows:
> Before this PATCHS:
> [ 0.224933] calling init_kprobe_trace+0x0/0xe0 @ 1
> [ 0.455016] initcall init_kprobe_trace+0x0/0xe0 returned 0 after 230080 usecs
>
> Only opt setup_boot_kprobe_events() can see:
> [ 0.258609] calling init_blk_tracer+0x0/0x68 @ 1
> [ 0.454991] initcall init_blk_tracer+0x0/0x68 returned 0 after 196377 usecs
>
> After this PATCHS:
> [ 0.224940] calling init_kprobe_trace+0x0/0xe0 @ 1
> [ 0.224946] initcall init_kprobe_trace+0x0/0xe0 returned 0 after 3 usecs
> skip --------
> [ 0.264835] calling init_blk_tracer+0x0/0x68 @ 1
> [ 0.264841] initcall init_blk_tracer+0x0/0x68 returned 0 after 2 usecs
OK, this series looks good to me.
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
for this series.
Thank you,
>
> ---
> Changes in v2:
> - Rename eval_map_wq to trace_init_wq.
> Changes in v3:
> - Opt PATCH 1/3 commit
> Changes in v4:
> - add trace_async_init boot parameter in patch2
> - add init_kprobe_trace's skip logic in patch3
> - add Suggested-by tag
> - Other synchronous optimizations related to trace_async_init
>
> Yaxiong Tian (5):
> tracing: Rename `eval_map_wq` and allow other parts of tracing use it
> tracing: add trace_async_init boot parameter
> tracing/kprobes: Skip setup_boot_kprobe_events() when no cmdline event
> tracing/kprobes: Make setup_boot_kprobe_events() asynchronous when
> trace_async_init set
> blktrace: Make init_blk_tracer() asynchronous when trace_async_init
> set
>
> .../admin-guide/kernel-parameters.txt | 8 ++++++
> kernel/trace/blktrace.c | 23 +++++++++++++++-
> kernel/trace/trace.c | 27 ++++++++++++-------
> kernel/trace/trace.h | 2 ++
> kernel/trace/trace_kprobe.c | 18 ++++++++++++-
> 5 files changed, 67 insertions(+), 11 deletions(-)
>
> --
> 2.25.1
>
>
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-28 12:55 ` [PATCH v4 5/5] blktrace: Make init_blk_tracer() " Yaxiong Tian
@ 2026-01-29 0:41 ` Steven Rostedt
2026-01-29 2:25 ` Jens Axboe
0 siblings, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2026-01-29 0:41 UTC (permalink / raw)
To: Yaxiong Tian, axboe
Cc: mhiramat, mathieu.desnoyers, corbet, skhan, linux-trace-kernel,
linux-block, linux-kernel, linux-doc
Jens,
Can you give me an acked-by on this patch and I can take the series through
my tree.
Or perhaps this doesn't even need to test the trace_async_init flag and can
always do the work queue? Does blk_trace ever do tracing at boot up? That
is, before user space starts?
Thanks,
-- Steve
On Wed, 28 Jan 2026 20:55:54 +0800
Yaxiong Tian <tianyaxiong@kylinos.cn> wrote:
> The init_blk_tracer() function causes significant boot delay as it
> waits for the trace_event_sem lock held by trace_event_update_all().
> Specifically, its child function register_trace_event() requires
> this lock, which is occupied for an extended period during boot.
>
> To resolve this, when the trace_async_init parameter is enabled, the
> execution of primary init_blk_tracer() is moved to the trace_init_wq
> workqueue, allowing it to run asynchronously. and prevent blocking
> the main boot thread.
>
> Signed-off-by: Yaxiong Tian <tianyaxiong@kylinos.cn>
> ---
> kernel/trace/blktrace.c | 23 ++++++++++++++++++++++-
> 1 file changed, 22 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
> index d031c8d80be4..56c7270ec447 100644
> --- a/kernel/trace/blktrace.c
> +++ b/kernel/trace/blktrace.c
> @@ -1832,7 +1832,9 @@ static struct trace_event trace_blk_event = {
> .funcs = &trace_blk_event_funcs,
> };
>
> -static int __init init_blk_tracer(void)
> +static struct work_struct blktrace_works __initdata;
> +
> +static int __init __init_blk_tracer(void)
> {
> if (!register_trace_event(&trace_blk_event)) {
> pr_warn("Warning: could not register block events\n");
> @@ -1852,6 +1854,25 @@ static int __init init_blk_tracer(void)
> return 0;
> }
>
> +static void __init blktrace_works_func(struct work_struct *work)
> +{
> + __init_blk_tracer();
> +}
> +
> +static int __init init_blk_tracer(void)
> +{
> + int ret = 0;
> +
> + if (trace_init_wq && trace_async_init) {
> + INIT_WORK(&blktrace_works, blktrace_works_func);
> + queue_work(trace_init_wq, &blktrace_works);
> + } else {
> + ret = __init_blk_tracer();
> + }
> +
> + return ret;
> +}
> +
> device_initcall(init_blk_tracer);
>
> static int blk_trace_remove_queue(struct request_queue *q)
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-29 0:41 ` Steven Rostedt
@ 2026-01-29 2:25 ` Jens Axboe
2026-01-29 20:29 ` Steven Rostedt
0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2026-01-29 2:25 UTC (permalink / raw)
To: Steven Rostedt
Cc: Yaxiong Tian, mhiramat, mathieu.desnoyers, corbet, skhan,
linux-trace-kernel, linux-block, linux-kernel, linux-doc
On Jan 28, 2026, at 5:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>
>
> Jens,
>
> Can you give me an acked-by on this patch and I can take the series through
> my tree.
On phone, hope this works:
Acked-by: Jens Axboe <axboe@kernel.dk>
> Or perhaps this doesn't even need to test the trace_async_init flag and can
> always do the work queue? Does blk_trace ever do tracing at boot up? That
> is, before user space starts?
Not via the traditional way of running blktrace.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-29 2:25 ` Jens Axboe
@ 2026-01-29 20:29 ` Steven Rostedt
2026-01-30 1:35 ` Yaxiong Tian
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Steven Rostedt @ 2026-01-29 20:29 UTC (permalink / raw)
To: Jens Axboe
Cc: Yaxiong Tian, mhiramat, mathieu.desnoyers, corbet, skhan,
linux-trace-kernel, linux-block, linux-kernel, linux-doc
On Wed, 28 Jan 2026 19:25:46 -0700
Jens Axboe <axboe@kernel.dk> wrote:
> On Jan 28, 2026, at 5:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> >
> > Jens,
> >
> > Can you give me an acked-by on this patch and I can take the series through
> > my tree.
>
> On phone, hope this works:
>
> Acked-by: Jens Axboe <axboe@kernel.dk>
Thanks!
>
> > Or perhaps this doesn't even need to test the trace_async_init flag and can
> > always do the work queue? Does blk_trace ever do tracing at boot up? That
> > is, before user space starts?
>
> Not via the traditonal way of running blktrace.
Masami and Yaxiong,
I've been thinking about this more and I'm not sure we need the
trace_async_init kernel parameter at all. As blktrace should only be
enabled by user space, it can always use the work queue.
For kprobes, if someone is adding a kprobe on the kernel command line, then
they are already specifying that tracing is more important.
Patch 3 already keeps kprobes from being an issue with contention of the
tracing locks, so I don't think it ever needs to use the work queue.
Wouldn't it just be better to remove the trace_async_init and make blktrace
always use the work queue and kprobes never do it (but exit out early if
there were no kprobes registered)?
That is, remove patch 2 and 4 and make this patch always use the work queue.
-- Steve
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-29 20:29 ` Steven Rostedt
@ 2026-01-30 1:35 ` Yaxiong Tian
2026-01-30 3:09 ` Yaxiong Tian
2026-01-30 9:30 ` Masami Hiramatsu
2026-02-02 3:36 ` Yaxiong Tian
2 siblings, 1 reply; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-30 1:35 UTC (permalink / raw)
To: Steven Rostedt, Jens Axboe
Cc: mhiramat, mathieu.desnoyers, corbet, skhan, linux-trace-kernel,
linux-block, linux-kernel, linux-doc
On 2026/1/30 04:29, Steven Rostedt wrote:
> On Wed, 28 Jan 2026 19:25:46 -0700
> Jens Axboe <axboe@kernel.dk> wrote:
>
>> On Jan 28, 2026, at 5:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>>>
>>> Jens,
>>>
>>> Can you give me an acked-by on this patch and I can take the series through
>>> my tree.
>> On phone, hope this works:
>>
>> Acked-by: Jens Axboe <axboe@kernel.dk>
> Thanks!
>
>>> Or perhaps this doesn't even need to test the trace_async_init flag and can
>>> always do the work queue? Does blk_trace ever do tracing at boot up? That
>>> is, before user space starts?
>> Not via the traditonal way of running blktrace.
> Masami and Yaxiong,
>
> I've been thinking about this more and I'm not sure we need the
> trace_async_init kernel parameter at all. As blktrace should only be
> enabled by user space, it can always use the work queue.
>
> For kprobes, if someone is adding a kprobe on the kernel command line, then
> they are already specifying that tracing is more important.
>
> Patch 3 already keeps kprobes from being an issue with contention of the
> tracing locks, so I don't think it ever needs to use the work queue.
>
> Wouldn't it just be better to remove the trace_async_init and make blktrace
> always use the work queue and kprobes never do it (but exit out early if
> there were no kprobes registered)?
>
> That is, remove patch 2 and 4 and make this patch always use the work queue.
Yesterday, I was curious about trace_event_update_all(), so I added
pr_err() prints within the function's loop. I discovered that these
prints appeared as late as 14 seconds in (printing is time-consuming),
by which time the desktop had already been up for quite a while.
However, trace_eval_sync() had already finished running at 0.6 seconds.
This implies that, although I originally thought trace_eval_sync()'s
destroy_workqueue() would wait for all tasks to complete, that might not
be the case. If so, then strictly speaking, tasks queued with
queue_work() cannot be guaranteed to finish before the init process
executes. If it is necessary to strictly ensure initialization completes
before user space starts, async_synchronize_full() or
async_synchronize_full_domain() would be better in such scenarios.
Of course, the situation described above is an extreme case. I don't
oppose this approach; I only hope to make startup faster for ordinary
users who don't use tracing, while minimizing the impact on others as
much as possible.
>
> -- Steve
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-30 1:35 ` Yaxiong Tian
@ 2026-01-30 3:09 ` Yaxiong Tian
2026-01-30 3:26 ` Steven Rostedt
0 siblings, 1 reply; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-30 3:09 UTC (permalink / raw)
To: Steven Rostedt, Jens Axboe
Cc: mhiramat, mathieu.desnoyers, corbet, skhan, linux-trace-kernel,
linux-block, linux-kernel, linux-doc
On 2026/1/30 09:35, Yaxiong Tian wrote:
>
> On 2026/1/30 04:29, Steven Rostedt wrote:
>> On Wed, 28 Jan 2026 19:25:46 -0700
>> Jens Axboe <axboe@kernel.dk> wrote:
>>
>>> On Jan 28, 2026, at 5:40 PM, Steven Rostedt <rostedt@goodmis.org>
>>> wrote:
>>>>
>>>> Jens,
>>>>
>>>> Can you give me an acked-by on this patch and I can take the series
>>>> through
>>>> my tree.
>>> On phone, hope this works:
>>>
>>> Acked-by: Jens Axboe <axboe@kernel.dk>
>> Thanks!
>>
>>>> Or perhaps this doesn't even need to test the trace_async_init flag
>>>> and can
>>>> always do the work queue? Does blk_trace ever do tracing at boot
>>>> up? That
>>>> is, before user space starts?
>>> Not via the traditonal way of running blktrace.
>> Masami and Yaxiong,
>>
>> I've been thinking about this more and I'm not sure we need the
>> trace_async_init kernel parameter at all. As blktrace should only be
>> enabled by user space, it can always use the work queue.
>>
>> For kprobes, if someone is adding a kprobe on the kernel command
>> line, then
>> they are already specifying that tracing is more important.
>>
>> Patch 3 already keeps kprobes from being an issue with contention of the
>> tracing locks, so I don't think it ever needs to use the work queue.
>>
>> Wouldn't it just be better to remove the trace_async_init and make
>> blktrace
>> always use the work queue and kprobes never do it (but exit out early if
>> there were no kprobes registered)?
>>
>> That is, remove patch 2 and 4 and make this patch always use the work
>> queue.
>
> Yesterday, I was curious about trace_event_update_all(), so I added
> pr_err() prints within the function's loop. I discovered that these
> prints appeared as late as 14 seconds in (printing is
> time-consuming), by which time the desktop had already been up for
> quite a while. However, trace_eval_sync() had already finished
> running at 0.6 seconds.
>
> This implies that, although I originally thought trace_eval_sync()'s
> destroy_workqueue() would wait for all tasks to complete, that might
> not be the case. If so, then strictly speaking, tasks queued with
> queue_work() cannot be guaranteed to finish before the init
> process executes. If it is necessary to strictly ensure
> initialization completes before user space starts,
> async_synchronize_full() or async_synchronize_full_domain() would
> be better in such scenarios.
I need to double-check this issue—theoretically, it shouldn't exist. But
I'm not sure why the print appeared at the 14-second mark.
>
> Of course, the situation described above is an extreme case. I don't
> oppose this approach; I only hope to make the startup faster for
> ordinary users who don’t use trace, while minimizing the impact on
> others as much as possible.
>
>>
>> -- Steve
^ permalink raw reply [flat|nested] 19+ messages in thread
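The drain guarantee questioned in the message above can be sketched in user-space C. This is a minimal illustration only: tiny_wq, tiny_queue_work(), and tiny_destroy() are invented names, not kernel APIs, and the single-item queue stands in for a real workqueue. The point it demonstrates is that a teardown which joins the worker thread cannot return until every item queued before teardown has finished, which is the behavior destroy_workqueue() is documented to provide for already-queued work.

```c
/*
 * Sketch of "destroy waits for queued work" using pthreads.
 * All names here are invented for illustration; they are not
 * the kernel's workqueue API.
 */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

struct tiny_wq {
	pthread_t worker;
	void (*fn)(void);
	bool queued;
};

static void *tiny_worker(void *arg)
{
	struct tiny_wq *wq = arg;

	usleep(10000);		/* make the work observably slow */
	wq->fn();		/* run the single queued item */
	return NULL;
}

/* Loosely like queue_work(): hand one item to an asynchronous worker. */
static void tiny_queue_work(struct tiny_wq *wq, void (*fn)(void))
{
	wq->fn = fn;
	wq->queued = true;
	pthread_create(&wq->worker, NULL, tiny_worker, wq);
}

/* Loosely like destroy_workqueue(): drain (join) before returning. */
static void tiny_destroy(struct tiny_wq *wq)
{
	if (wq->queued)
		pthread_join(wq->worker, NULL);
}

static int work_done;

static void slow_work(void)
{
	work_done = 1;
}
```

After tiny_destroy() returns, work_done is guaranteed to be set, matching what the trace output later in the thread shows for trace_eval_sync() and destroy_workqueue(eval_map_wq).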
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-30 3:09 ` Yaxiong Tian
@ 2026-01-30 3:26 ` Steven Rostedt
2026-01-30 3:31 ` Steven Rostedt
0 siblings, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2026-01-30 3:26 UTC (permalink / raw)
To: Yaxiong Tian
Cc: Jens Axboe, mhiramat, mathieu.desnoyers, corbet, skhan,
linux-trace-kernel, linux-block, linux-kernel, linux-doc
On Fri, 30 Jan 2026 11:09:26 +0800
Yaxiong Tian <tianyaxiong@kylinos.cn> wrote:
> > thought trace_eval_sync()'s destroy_workqueue() would wait for all
> > tasks to complete, but it seems that might not be the case. From this,
> > if the above conclusion is true, then strictly speaking, tasks
> > using queue_work(xx) cannot be guaranteed to finish before the init
> > process executes. If it's necessary to strictly ensure initialization
> > completes before user space starts,
> > using async_synchronize_full() or async_synchronize_full_domain() would
> > be better in such scenarios.
> I need to double-check this issue—theoretically, it shouldn't exist. But
> I'm not sure why the print appeared at the 14-second mark.
Use trace_printk() instead. printk now has a "deferred" output. I'm not
sure whether its timestamps show when the print took place or when it
reached the console :-/
-- Steve
> >
> > Of course, the situation described above is an extreme case. I don't
> > oppose this approach; I only hope to make the startup faster for
> > ordinary users who don’t use trace, while minimizing the impact on
> > others as much as possible.
> >
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-30 3:26 ` Steven Rostedt
@ 2026-01-30 3:31 ` Steven Rostedt
2026-01-30 3:45 ` Steven Rostedt
0 siblings, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2026-01-30 3:31 UTC (permalink / raw)
To: Yaxiong Tian
Cc: Jens Axboe, mhiramat, mathieu.desnoyers, corbet, skhan,
linux-trace-kernel, linux-block, linux-kernel, linux-doc
On Thu, 29 Jan 2026 22:26:08 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> Use trace_printk() instead. printk now has a "deferred" output. I'm not
> sure whether its timestamps show when the print took place or when it
> reached the console :-/
I added the below patch and have this result:
kworker/u33:1-79 [002] ..... 1.840855: trace_event_update_all: Start syncing
swapper/0-1 [005] ..... 6.045742: trace_eval_sync: sync maps
kworker/u33:1-79 [002] ..... 12.289296: trace_event_update_all: Finish syncing
swapper/0-1 [005] ..... 12.289387: trace_eval_sync: sync maps complete
Which shows that the final initcall waited for the work queue to complete:
-- Steve
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 396d59202438..33180d5622a8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -10817,9 +10817,11 @@ subsys_initcall(trace_eval_init);
 static int __init trace_eval_sync(void)
 {
+	trace_printk("sync maps\n");
 	/* Make sure the eval map updates are finished */
 	if (eval_map_wq)
 		destroy_workqueue(eval_map_wq);
+	trace_printk("sync maps complete\n");
 	return 0;
 }
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index af6d1fe5cab7..194b344400e9 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -3555,6 +3555,7 @@ void trace_event_update_all(struct trace_eval_map **map, int len)
 	int last_i;
 	int i;
+	trace_printk("Start syncing\n");
 	down_write(&trace_event_sem);
 	list_for_each_entry_safe(call, p, &ftrace_events, list) {
 		/* events are usually grouped together with systems */
@@ -3593,6 +3594,8 @@ void trace_event_update_all(struct trace_eval_map **map, int len)
 		cond_resched();
 	}
 	up_write(&trace_event_sem);
+	msleep(10000);
+	trace_printk("Finish syncing\n");
 }
static bool event_in_systems(struct trace_event_call *call,
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-30 3:31 ` Steven Rostedt
@ 2026-01-30 3:45 ` Steven Rostedt
2026-01-30 4:10 ` Yaxiong Tian
0 siblings, 1 reply; 19+ messages in thread
From: Steven Rostedt @ 2026-01-30 3:45 UTC (permalink / raw)
To: Yaxiong Tian
Cc: Jens Axboe, mhiramat, mathieu.desnoyers, corbet, skhan,
linux-trace-kernel, linux-block, linux-kernel, linux-doc
On Thu, 29 Jan 2026 22:31:16 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> I added the below patch and have this result:
>
> kworker/u33:1-79 [002] ..... 1.840855: trace_event_update_all: Start syncing
> swapper/0-1 [005] ..... 6.045742: trace_eval_sync: sync maps
> kworker/u33:1-79 [002] ..... 12.289296: trace_event_update_all: Finish syncing
> swapper/0-1 [005] ..... 12.289387: trace_eval_sync: sync maps complete
>
> Which shows that the final initcall waited for the work queue to complete:
Switching to printk() gives me the same results:
# dmesg |grep sync
[ 1.117856] Start syncing
[ 4.498360] sync maps
[ 11.173304] Finish syncing
[ 11.175660] sync maps complete
-- Steve
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-30 3:45 ` Steven Rostedt
@ 2026-01-30 4:10 ` Yaxiong Tian
0 siblings, 0 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-30 4:10 UTC (permalink / raw)
To: Steven Rostedt
Cc: Jens Axboe, mhiramat, mathieu.desnoyers, corbet, skhan,
linux-trace-kernel, linux-block, linux-kernel, linux-doc
在 2026/1/30 11:45, Steven Rostedt 写道:
> On Thu, 29 Jan 2026 22:31:16 -0500
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
>> I added the below patch and have this result:
>>
>> kworker/u33:1-79 [002] ..... 1.840855: trace_event_update_all: Start syncing
>> swapper/0-1 [005] ..... 6.045742: trace_eval_sync: sync maps
>> kworker/u33:1-79 [002] ..... 12.289296: trace_event_update_all: Finish syncing
>> swapper/0-1 [005] ..... 12.289387: trace_eval_sync: sync maps complete
>>
>> Which shows that the final initcall waited for the work queue to complete:
> Switching to printk() gives me the same results:
>
> # dmesg |grep sync
> [ 1.117856] Start syncing
> [ 4.498360] sync maps
> [ 11.173304] Finish syncing
> [ 11.175660] sync maps complete
>
> -- Steve
Sorry, yes, no problem. I confirmed that init_blk_tracer() is running
properly (when executed sequentially); if there were an issue, it would
already have gotten stuck on a lock. It seems this might be related to
the print buffer. I'll look into this issue myself.
Back to this topic — I don’t object to that proposal.
I think each has its own advantages. Let’s see what others think.
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-29 20:29 ` Steven Rostedt
2026-01-30 1:35 ` Yaxiong Tian
@ 2026-01-30 9:30 ` Masami Hiramatsu
2026-01-30 9:59 ` Yaxiong Tian
2026-02-02 3:36 ` Yaxiong Tian
2 siblings, 1 reply; 19+ messages in thread
From: Masami Hiramatsu @ 2026-01-30 9:30 UTC (permalink / raw)
To: Steven Rostedt
Cc: Jens Axboe, Yaxiong Tian, mhiramat, mathieu.desnoyers, corbet,
skhan, linux-trace-kernel, linux-block, linux-kernel, linux-doc
On Thu, 29 Jan 2026 15:29:58 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> On Wed, 28 Jan 2026 19:25:46 -0700
> Jens Axboe <axboe@kernel.dk> wrote:
>
> > On Jan 28, 2026, at 5:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> > >
> > >
> > > Jens,
> > >
> > > Can you give me an acked-by on this patch and I can take the series through
> > > my tree.
> >
> > On phone, hope this works:
> >
> > Acked-by: Jens Axboe <axboe@kernel.dk>
>
> Thanks!
>
> >
> > > Or perhaps this doesn't even need to test the trace_async_init flag and can
> > > always do the work queue? Does blk_trace ever do tracing at boot up? That
> > > is, before user space starts?
> >
> > Not via the traditional way of running blktrace.
>
> Masami and Yaxiong,
>
> I've been thinking about this more and I'm not sure we need the
> trace_async_init kernel parameter at all. As blktrace should only be
> enabled by user space, it can always use the work queue.
>
> For kprobes, if someone is adding a kprobe on the kernel command line, then
> they are already specifying that tracing is more important.
>
> Patch 3 already keeps kprobes from being an issue with contention of the
> tracing locks, so I don't think it ever needs to use the work queue.
>
> Wouldn't it just be better to remove the trace_async_init and make blktrace
> always use the work queue and kprobes never do it (but exit out early if
> there were no kprobes registered)?
Yeah, for kprobes event case, that sounds good to me. I think [3/5] is
enough to speed it up if user does not define kprobe events on cmdline.
Thank you,
>
> That is, remove patch 2 and 4 and make this patch always use the work queue.
>
> -- Steve
--
Masami Hiramatsu (Google) <mhiramat@kernel.org>
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-30 9:30 ` Masami Hiramatsu
@ 2026-01-30 9:59 ` Yaxiong Tian
0 siblings, 0 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-01-30 9:59 UTC (permalink / raw)
To: Masami Hiramatsu (Google), Steven Rostedt, Jens Axboe
Cc: Jens Axboe, mathieu.desnoyers, corbet, skhan, linux-trace-kernel,
linux-block, linux-kernel, linux-doc
在 2026/1/30 17:30, Masami Hiramatsu (Google) 写道:
> On Thu, 29 Jan 2026 15:29:58 -0500
> Steven Rostedt <rostedt@goodmis.org> wrote:
>
>> On Wed, 28 Jan 2026 19:25:46 -0700
>> Jens Axboe <axboe@kernel.dk> wrote:
>>
>>> On Jan 28, 2026, at 5:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>>>>
>>>> Jens,
>>>>
>>>> Can you give me an acked-by on this patch and I can take the series through
>>>> my tree.
>>> On phone, hope this works:
>>>
>>> Acked-by: Jens Axboe <axboe@kernel.dk>
>> Thanks!
>>
>>>> Or perhaps this doesn't even need to test the trace_async_init flag and can
>>>> always do the work queue? Does blk_trace ever do tracing at boot up? That
>>>> is, before user space starts?
>>> Not via the traditional way of running blktrace.
>> Masami and Yaxiong,
>>
>> I've been thinking about this more and I'm not sure we need the
>> trace_async_init kernel parameter at all. As blktrace should only be
>> enabled by user space, it can always use the work queue.
>>
>> For kprobes, if someone is adding a kprobe on the kernel command line, then
>> they are already specifying that tracing is more important.
>>
>> Patch 3 already keeps kprobes from being an issue with contention of the
>> tracing locks, so I don't think it ever needs to use the work queue.
>>
>> Wouldn't it just be better to remove the trace_async_init and make blktrace
>> always use the work queue and kprobes never do it (but exit out early if
>> there were no kprobes registered)?
> Yeah, for kprobes event case, that sounds good to me. I think [3/5] is
> enough to speed it up if user does not define kprobe events on cmdline.
>
> Thank you,
Agreed.
Hi Jens,
What do you think about this proposal (making blktrace always use the
work queue)?
>
>> That is, remove patch 2 and 4 and make this patch always use the work queue.
>>
>> -- Steve
>
* Re: [PATCH v4 5/5] blktrace: Make init_blk_tracer() asynchronous when trace_async_init set
2026-01-29 20:29 ` Steven Rostedt
2026-01-30 1:35 ` Yaxiong Tian
2026-01-30 9:30 ` Masami Hiramatsu
@ 2026-02-02 3:36 ` Yaxiong Tian
2 siblings, 0 replies; 19+ messages in thread
From: Yaxiong Tian @ 2026-02-02 3:36 UTC (permalink / raw)
To: Steven Rostedt, Jens Axboe
Cc: mhiramat, mathieu.desnoyers, corbet, skhan, linux-trace-kernel,
linux-block, linux-kernel, linux-doc
在 2026/1/30 04:29, Steven Rostedt 写道:
> On Wed, 28 Jan 2026 19:25:46 -0700
> Jens Axboe <axboe@kernel.dk> wrote:
>
>> On Jan 28, 2026, at 5:40 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
>>>
>>> Jens,
>>>
>>> Can you give me an acked-by on this patch and I can take the series through
>>> my tree.
>> On phone, hope this works:
>>
>> Acked-by: Jens Axboe <axboe@kernel.dk>
> Thanks!
>
>>> Or perhaps this doesn't even need to test the trace_async_init flag and can
>>> always do the work queue? Does blk_trace ever do tracing at boot up? That
>>> is, before user space starts?
>> Not via the traditonal way of running blktrace.
> Masami and Yaxiong,
>
> I've been thinking about this more and I'm not sure we need the
> trace_async_init kernel parameter at all. As blktrace should only be
> enabled by user space, it can always use the work queue.
Hi Steven and Jens:
I've been thinking about this further. If we need to consider the
possibility of non-traditional blktrace usage during the boot phase,
could we perhaps use a kernel command-line parameter like 'ftrace=blk'
to handle this? More specifically, we could check this through the
default_bootup_tracer mechanism:
+bool __init trace_check_need_bootup_tracer(struct tracer *type)
+{
+	if (!default_bootup_tracer)
+		return false;
+
+	return strncmp(default_bootup_tracer, type->name, MAX_TRACER_SIZE) == 0;
+}
+
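The check sketched above can be exercised in user space. In this standalone version, struct tracer, MAX_TRACER_SIZE, and default_bootup_tracer are simplified stand-ins for the kernel definitions, not the real ones: with ftrace=blk on the command line, default_bootup_tracer would hold "blk", and only the tracer whose ->name matches would need a synchronous init.

```c
/*
 * User-space sketch of the proposed default_bootup_tracer check.
 * struct tracer and MAX_TRACER_SIZE are stand-ins for the kernel
 * definitions; this is illustrative, not the kernel code.
 */
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define MAX_TRACER_SIZE	100

struct tracer {
	const char *name;
};

/* Would be parsed from the "ftrace=" command-line option. */
static const char *default_bootup_tracer;

static bool trace_check_need_bootup_tracer(const struct tracer *type)
{
	if (!default_bootup_tracer)
		return false;

	/* strncmp() returns 0 when the names match */
	return strncmp(default_bootup_tracer, type->name, MAX_TRACER_SIZE) == 0;
}
```

Without "ftrace=" the check is always false, so blktrace would defer its init to the work queue; only an explicit ftrace=blk would force it to run synchronously.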
>
> For kprobes, if someone is adding a kprobe on the kernel command line, then
> they are already specifying that tracing is more important.
>
> Patch 3 already keeps kprobes from being an issue with contention of the
> tracing locks, so I don't think it ever needs to use the work queue.
>
> Wouldn't it just be better to remove the trace_async_init and make blktrace
> always use the work queue and kprobes never do it (but exit out early if
> there were no kprobes registered)?
>
> That is, remove patch 2 and 4 and make this patch always use the work queue.
>
> -- Steve
end of thread, other threads:[~2026-02-02 3:36 UTC | newest]
Thread overview: 19+ messages
2026-01-28 12:51 [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Yaxiong Tian
2026-01-28 12:53 ` [PATCH v4 1/5] tracing: Rename `eval_map_wq` and allow other parts of tracing use it Yaxiong Tian
2026-01-28 12:54 ` [PATCH v4 2/5] tracing: add trace_async_init boot parameter Yaxiong Tian
2026-01-28 12:55 ` [PATCH v4 3/5] tracing/kprobes: Skip setup_boot_kprobe_events() when no cmdline event Yaxiong Tian
2026-01-28 12:55 ` [PATCH v4 4/5] tracing/kprobes: Make setup_boot_kprobe_events() asynchronous when trace_async_init set Yaxiong Tian
2026-01-28 12:55 ` [PATCH v4 5/5] blktrace: Make init_blk_tracer() " Yaxiong Tian
2026-01-29 0:41 ` Steven Rostedt
2026-01-29 2:25 ` Jens Axboe
2026-01-29 20:29 ` Steven Rostedt
2026-01-30 1:35 ` Yaxiong Tian
2026-01-30 3:09 ` Yaxiong Tian
2026-01-30 3:26 ` Steven Rostedt
2026-01-30 3:31 ` Steven Rostedt
2026-01-30 3:45 ` Steven Rostedt
2026-01-30 4:10 ` Yaxiong Tian
2026-01-30 9:30 ` Masami Hiramatsu
2026-01-30 9:59 ` Yaxiong Tian
2026-02-02 3:36 ` Yaxiong Tian
2026-01-28 23:38 ` [PATCH v4 0/5] Tracing: Accelerate Kernel Boot by Asynchronizing Masami Hiramatsu