* [PATCH v3 0/5] tracepoints: Add warnings for unused tracepoints and trace events
From: Steven Rostedt @ 2025-07-22 15:20 UTC
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
Every trace event can take up to 5K of memory in text and metadata,
regardless of whether it is used or not. Trace events should not be
created if they are not used. Currently there are over a hundred events
in the kernel that are defined but unused, either because their callers
were removed without removing the trace event along with them, or
because a config hides the trace event caller but not the trace event
itself. In some cases, trace events were simply added but never called
for whatever reason. The number of unused trace events continues to
grow.
This patch series aims to fix this.
The first patch creates a new section called __tracepoint_check, where
every caller of a tracepoint creates a variable that is placed in this
section with a pointer to the tracepoint it uses. Then on boot up, the
kernel iterates over this section and modifies each referenced
tracepoint's "funcs" field to a value of 1 (all tracepoints' "funcs"
fields are initialized to NULL and are only set when the tracepoint is
registered). This takes place before any tracepoint can be registered.
Then every tracepoint is iterated over, and if any tracepoint does not
have its "funcs" field set to 1, a warning is triggered and every
tracepoint missing that marker is printed. The "funcs" field is then
reset back to NULL.
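Conceptually, when a trace_<tracepoint>() call is compiled in, its
inline helper emits something like this (a simplified sketch; the real
macro is in the first patch, and "foo" stands in for the tracepoint
name):

    /* "foo" is a placeholder tracepoint name */
    static void __used __section("__tracepoint_check") *__trace_check_foo =
            &__tracepoint_foo;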
The second patch modifies scripts/sorttable.c to read the
__tracepoint_check section. It sorts it, then reads the
__tracepoints_ptrs section that holds all compiled-in tracepoints. It
makes sure that every tracepoint is found in the check section and, if
not, prints a warning message about it. This lists the missing
tracepoints at build time.
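At its core, the build-time check is a sorted-array lookup (a minimal
sketch of the idea; the real code in the second patch also deals with
ELF offsets, word size and byte order, and name_of() here is a
hypothetical helper that resolves a tracepoint's name):

    /* check[] and ptrs[] are the extracted section contents (assumed) */
    qsort(check, nr_check, sizeof(uint64_t), cmp_addr);
    for (int i = 0; i < nr_ptrs; i++) {
            if (!bsearch(&ptrs[i], check, nr_check, sizeof(uint64_t), cmp_addr))
                    fprintf(stderr, "warning: tracepoint '%s' is unused.\n",
                            name_of(ptrs[i]));
    }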
The third patch updates sorttable to work on arm64 when compiled with
gcc, as gcc's arm64 build doesn't put the addresses in the section
itself but saves them off in the RELA sections. This mostly reuses the
work that was needed to do the mcount sorting at boot up on arm64.
The fourth patch adds EXPORT_TRACEPOINT() tracepoints to the
__tracepoint_check section as well. There were several locations that
add tracepoints in the kernel proper that are only used by modules. It
was getting quite complex trying to move things around, so I decided to
simply treat any tracepoint in an EXPORT_TRACEPOINT() as "used". I'm
using the analogy of static and global functions: an unused static
function gets a warning but an unused global one does not (illustrated
below).
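To illustrate the analogy with a standalone C snippet (not from the
patches):

    /* -Wunused-function warns here: nothing outside this file can use it */
    static void unused_static(void) { }

    /* No warning: another translation unit may use it, just as a
     * module may use an exported tracepoint. */
    void unused_global(void) { }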
The last patch updates the trace_ftrace_test_filter bootup selftest.
That selftest creates a trace event and runs a bunch of filter tests
against it without actually calling the tracepoint. To quiet the
warning, the selftest tracepoint is now called within an
if (!trace_<event>_enabled()) block, where it will neither be optimized
out nor actually executed (see the sketch below).
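The pattern looks like this (a sketch; trace_foo is a placeholder
event name):

    /* Reference the tracepoint so the verifier sees it as used.
     * If the event is enabled, the branch is not taken; if it is
     * disabled, the call sits behind a disabled static branch and
     * never fires. */
    if (!trace_foo_enabled())
            trace_foo(1, 2, 3);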
Changes since v2: https://lore.kernel.org/all/20250612235827.011358765@goodmis.org/
- Fixed some typos in the above change log, and in comments within patches.
- Have the build warning on unused events be off by default. There are
  still too many unused events to make the warning a default. But after
  all current offenders are fixed, I plan on making it default-enabled
  to prevent new unused trace events from being added.
Steven Rostedt (5):
tracepoints: Add verifier that makes sure all defined tracepoints are used
tracing: sorttable: Add a tracepoint verification check at build time
tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
tracepoint: Do not warn for unused event that is exported
tracing: Call trace_ftrace_test_filter() for the event
---
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/tracepoint.h | 13 ++
kernel/trace/Kconfig | 30 +++
kernel/trace/trace_events_filter.c | 4 +
kernel/tracepoint.c | 26 +++
scripts/Makefile | 4 +
scripts/sorttable.c | 444 ++++++++++++++++++++++++++++++-------
7 files changed, 436 insertions(+), 86 deletions(-)
* [PATCH v3 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used
From: Steven Rostedt @ 2025-07-22 15:20 UTC
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
If a tracepoint is defined via DECLARE_TRACE() or TRACE_EVENT() but
never called (via the trace_<tracepoint>() function), its metadata is
still kept around in memory and never discarded.

When created via TRACE_EVENT() the situation is worse, because
TRACE_EVENT() creates metadata that can be around 5k per trace event.
Each unused trace event therefore wastes several thousand bytes.
Add a verifier that, for each tracepoint function that is used, injects
a pointer to its tracepoint structure into a section called
__tracepoint_check. Then on boot up, iterate over this section and for
every tracepoint descriptor that is pointed to, update its ".funcs"
field to (void *)1, as the .funcs field is only set when a tracepoint
is registered. At this time, no tracepoints should be registered.

Then iterate over all tracepoints, and if any tracepoint doesn't have
its .funcs field set to (void *)1, trigger a warning and list all the
tracepoints that were not found.
Enabling this with one particular config currently produces:
Tracepoint x86_fpu_before_restore unused
Tracepoint x86_fpu_after_restore unused
Tracepoint x86_fpu_init_state unused
Tracepoint pelt_hw_tp unused
Tracepoint pelt_irq_tp unused
Tracepoint ipi_raise unused
Tracepoint ipi_entry unused
Tracepoint ipi_exit unused
Tracepoint irq_matrix_alloc_reserved unused
Tracepoint psci_domain_idle_enter unused
Tracepoint psci_domain_idle_exit unused
Tracepoint powernv_throttle unused
Tracepoint clock_enable unused
Tracepoint clock_disable unused
Tracepoint clock_set_rate unused
Tracepoint power_domain_target unused
Tracepoint xdp_bulk_tx unused
Tracepoint xdp_redirect_map unused
Tracepoint xdp_redirect_map_err unused
Tracepoint mem_return_failed unused
Tracepoint vma_mas_szero unused
Tracepoint vma_store unused
Tracepoint hugepage_set_pmd unused
Tracepoint hugepage_set_pud unused
Tracepoint hugepage_update_pmd unused
Tracepoint hugepage_update_pud unused
Tracepoint dax_pmd_insert_mapping unused
Tracepoint dax_insert_mapping unused
Tracepoint block_rq_remap unused
Tracepoint xhci_dbc_handle_event unused
Tracepoint xhci_dbc_handle_transfer unused
Tracepoint xhci_dbc_gadget_ep_queue unused
Tracepoint xhci_dbc_alloc_request unused
Tracepoint xhci_dbc_free_request unused
Tracepoint xhci_dbc_queue_request unused
Tracepoint xhci_dbc_giveback_request unused
Tracepoint tcp_ao_wrong_maclen unused
Tracepoint tcp_ao_mismatch unused
Tracepoint tcp_ao_key_not_found unused
Tracepoint tcp_ao_rnext_request unused
Tracepoint tcp_ao_synack_no_key unused
Tracepoint tcp_ao_snd_sne_update unused
Tracepoint tcp_ao_rcv_sne_update unused
Some of the above are totally unused, but others are unused because
their "trace_" calls are inside configs; in those cases, the defined
tracepoints should also be inside those same configs. Others are
architecture specific but defined in generic code, where they should
either be moved to the architecture or be surrounded by #ifdef for the
architectures they are for (an example is sketched below).
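For example, a definition whose only caller is under a config option
can be wrapped in the same option (a sketch in a trace header; the
event and config names are hypothetical):

    /* hypothetical event under a hypothetical config */
    #ifdef CONFIG_FOO
    TRACE_EVENT(foo_event,
            TP_PROTO(int val),
            TP_ARGS(val),
            TP_STRUCT__entry(__field(int, val)),
            TP_fast_assign(__entry->val = val;),
            TP_printk("val=%d", __entry->val)
    );
    #endif /* CONFIG_FOO */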
Note, currently this only handles tracepoints that are builtin. This can
easily be extended to verify tracepoints used by modules, but it requires a
slightly different approach as it needs updates to the module code.
Link: https://lore.kernel.org/all/20250528114549.4d8a5e03@gandalf.local.home/
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/tracepoint.h | 10 ++++++++++
kernel/trace/Kconfig | 19 +++++++++++++++++++
kernel/tracepoint.c | 26 ++++++++++++++++++++++++++
4 files changed, 56 insertions(+)
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index fa5f19b8d53a..600d8b51e315 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -708,6 +708,7 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
MCOUNT_REC() \
*(.init.rodata .init.rodata.*) \
FTRACE_EVENTS() \
+ BOUNDED_SECTION_BY(__tracepoint_check, ___tracepoint_check) \
TRACE_SYSCALLS() \
KPROBE_BLACKLIST() \
ERROR_INJECT_WHITELIST() \
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 826ce3f8e1f8..2b96c7e94c52 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -221,6 +221,14 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__do_trace_##name(args); \
}
+#ifdef CONFIG_TRACEPOINT_VERIFY_USED
+# define TRACEPOINT_CHECK(name) \
+ static void __used __section("__tracepoint_check") *__trace_check = \
+ &__tracepoint_##name;
+#else
+# define TRACEPOINT_CHECK(name)
+#endif
+
/*
* Make sure the alignment of the structure in the __tracepoints section will
* not add unwanted padding between the beginning of the section and the
@@ -270,6 +278,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
static inline void __do_trace_##name(proto) \
{ \
+ TRACEPOINT_CHECK(name) \
if (cond) { \
guard(preempt_notrace)(); \
__DO_TRACE_CALL(name, TP_ARGS(args)); \
@@ -289,6 +298,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
static inline void __do_trace_##name(proto) \
{ \
+ TRACEPOINT_CHECK(name) \
guard(rcu_tasks_trace)(); \
__DO_TRACE_CALL(name, TP_ARGS(args)); \
} \
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index a3f35c7d83b6..e676b802b721 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -1044,6 +1044,25 @@ config GCOV_PROFILE_FTRACE
Note that on a kernel compiled with this config, ftrace will
run significantly slower.
+config TRACEPOINT_VERIFY_USED
+ bool
+ help
+ This option makes each used tracepoint emit a pointer to
+ that tracepoint into a special section.
+ This can be used to test whether a defined tracepoint is
+ used or not.
+
+config TRACEPOINT_WARN_ON_UNUSED
+ bool "Warn if any tracepoint is defined but not used"
+ depends on TRACEPOINTS
+ select TRACEPOINT_VERIFY_USED
+ help
+ This option checks if every builtin defined tracepoint is
+ used in the code. If a tracepoint is defined but not used,
+ it will waste memory as its metadata is still created.
+ A warning will be triggered at bootup if any tracepoint is
+ defined but not used.
+
config FTRACE_SELFTEST
bool
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 62719d2941c9..7701a6fed310 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -677,10 +677,36 @@ static struct notifier_block tracepoint_module_nb = {
.priority = 0,
};
+#ifdef CONFIG_TRACEPOINT_WARN_ON_UNUSED
+extern void * __start___tracepoint_check[];
+extern void * __stop___tracepoint_check[];
+
+#define VERIFIED_TRACEPOINT ((void *)1)
+
+static void check_tracepoint(struct tracepoint *tp, void *priv)
+{
+ if (WARN_ONCE(tp->funcs != VERIFIED_TRACEPOINT, "Unused tracepoints found"))
+ pr_warn("Tracepoint %s unused\n", tp->name);
+
+ tp->funcs = NULL;
+}
+#endif
+
static __init int init_tracepoints(void)
{
int ret;
+#ifdef CONFIG_TRACEPOINT_WARN_ON_UNUSED
+ for (void **ptr = __start___tracepoint_check;
+ ptr < __stop___tracepoint_check; ptr++) {
+ struct tracepoint *tp = *ptr;
+
+ tp->funcs = VERIFIED_TRACEPOINT;
+ }
+
+ for_each_kernel_tracepoint(check_tracepoint, NULL);
+#endif
+
ret = register_module_notifier(&tracepoint_module_nb);
if (ret)
pr_warn("Failed to register tracepoint module enter notifier\n");
--
2.47.2
* [PATCH v3 2/5] tracing: sorttable: Add a tracepoint verification check at build time
From: Steven Rostedt @ 2025-07-22 15:20 UTC
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
Update the sorttable code to check the __tracepoint_check and
__tracepoints_ptrs sections to see which trace events have been created
but not used. Trace events can take up approximately 5K of memory each,
regardless of whether they are called or not.

List the tracepoints that are not used at build time. Note, this
currently only handles tracepoints that are builtin, not ones in
modules.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v2: https://lore.kernel.org/20250613000328.791312828@goodmis.org
- Make it default n for now. As there are still tracepoints in the
  kernel that can be defined but not used, it's best to not warn about
  them until they are fixed.
kernel/trace/Kconfig | 11 ++
scripts/Makefile | 4 +
scripts/sorttable.c | 268 +++++++++++++++++++++++++++++++++++++++----
3 files changed, 260 insertions(+), 23 deletions(-)
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index e676b802b721..bf964983d9e2 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -1063,6 +1063,17 @@ config TRACEPOINT_WARN_ON_UNUSED
A warning will be triggered if a tracepoint is found and
not used at bootup.
+config TRACEPOINT_WARN_ON_UNUSED_BUILD
+ bool "Warn on build if a tracepoint is defined but not used"
+ depends on TRACEPOINTS
+ select TRACEPOINT_VERIFY_USED
+ help
+ This option checks if every builtin defined tracepoint is
+ used in the code. If a tracepoint is defined but not used,
+ it will waste memory as its metadata is still created.
+ This will cause a warning at build time if the architecture
+ supports it.
+
config FTRACE_SELFTEST
bool
diff --git a/scripts/Makefile b/scripts/Makefile
index 46f860529df5..f81947ec9486 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -42,6 +42,10 @@ HOSTCFLAGS_sorttable.o += -I$(srctree)/tools/arch/$(SRCARCH)/include
HOSTCFLAGS_sorttable.o += -DUNWINDER_ORC_ENABLED
endif
+ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+HOSTCFLAGS_sorttable.o += -DPREL32_RELOCATIONS
+endif
+
ifdef CONFIG_BUILDTIME_MCOUNT_SORT
HOSTCFLAGS_sorttable.o += -DMCOUNT_SORT_ENABLED
endif
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index deed676bfe38..ddcbec22ca96 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -92,6 +92,12 @@ static void (*w)(uint32_t, uint32_t *);
static void (*w8)(uint64_t, uint64_t *);
typedef void (*table_sort_t)(char *, int);
+static Elf_Shdr *init_data_sec;
+static Elf_Shdr *ro_data_sec;
+static Elf_Shdr *data_data_sec;
+
+static void *file_map_end;
+
static struct elf_funcs {
int (*compare_extable)(const void *a, const void *b);
uint64_t (*ehdr_shoff)(Elf_Ehdr *ehdr);
@@ -550,8 +556,6 @@ static void *sort_orctable(void *arg)
}
#endif
-#ifdef MCOUNT_SORT_ENABLED
-
static int compare_values_64(const void *a, const void *b)
{
uint64_t av = *(uint64_t *)a;
@@ -574,6 +578,22 @@ static int compare_values_32(const void *a, const void *b)
static int (*compare_values)(const void *a, const void *b);
+static int fill_addrs(void *ptr, uint64_t size, void *addrs)
+{
+ void *end = ptr + size;
+ int count = 0;
+
+ for (; ptr < end; ptr += long_size, addrs += long_size, count++) {
+ if (long_size == 4)
+ *(uint32_t *)ptr = r(addrs);
+ else
+ *(uint64_t *)ptr = r8(addrs);
+ }
+ return count;
+}
+
+#ifdef MCOUNT_SORT_ENABLED
+
/* Only used for sorting mcount table */
static void rela_write_addend(Elf_Rela *rela, uint64_t val)
{
@@ -684,7 +704,6 @@ static char m_err[ERRSTR_MAXSZ];
struct elf_mcount_loc {
Elf_Ehdr *ehdr;
- Elf_Shdr *init_data_sec;
uint64_t start_mcount_loc;
uint64_t stop_mcount_loc;
};
@@ -785,20 +804,6 @@ static void replace_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t st
}
}
-static int fill_addrs(void *ptr, uint64_t size, void *addrs)
-{
- void *end = ptr + size;
- int count = 0;
-
- for (; ptr < end; ptr += long_size, addrs += long_size, count++) {
- if (long_size == 4)
- *(uint32_t *)ptr = r(addrs);
- else
- *(uint64_t *)ptr = r8(addrs);
- }
- return count;
-}
-
static void replace_addrs(void *ptr, uint64_t size, void *addrs)
{
void *end = ptr + size;
@@ -815,8 +820,8 @@ static void replace_addrs(void *ptr, uint64_t size, void *addrs)
static void *sort_mcount_loc(void *arg)
{
struct elf_mcount_loc *emloc = (struct elf_mcount_loc *)arg;
- uint64_t offset = emloc->start_mcount_loc - shdr_addr(emloc->init_data_sec)
- + shdr_offset(emloc->init_data_sec);
+ uint64_t offset = emloc->start_mcount_loc - shdr_addr(init_data_sec)
+ + shdr_offset(init_data_sec);
uint64_t size = emloc->stop_mcount_loc - emloc->start_mcount_loc;
unsigned char *start_loc = (void *)emloc->ehdr + offset;
Elf_Ehdr *ehdr = emloc->ehdr;
@@ -920,6 +925,211 @@ static void get_mcount_loc(struct elf_mcount_loc *emloc, Elf_Shdr *symtab_sec,
static inline int parse_symbols(const char *fname) { return 0; }
#endif
+struct elf_tracepoint {
+ Elf_Ehdr *ehdr;
+ uint64_t start_tracepoint_check;
+ uint64_t stop_tracepoint_check;
+ uint64_t start_tracepoint;
+ uint64_t stop_tracepoint;
+ uint64_t *array;
+ int count;
+};
+
+static void make_trace_array(struct elf_tracepoint *etrace)
+{
+ uint64_t offset = etrace->start_tracepoint_check - shdr_addr(init_data_sec)
+ + shdr_offset(init_data_sec);
+ uint64_t size = etrace->stop_tracepoint_check - etrace->start_tracepoint_check;
+ Elf_Ehdr *ehdr = etrace->ehdr;
+ void *start = (void *)ehdr + offset;
+ int count = 0;
+ void *vals;
+
+ etrace->array = NULL;
+
+ /* If CONFIG_TRACEPOINT_VERIFY_USED is not set, there's nothing to do */
+ if (!size)
+ return;
+
+ vals = malloc(long_size * size);
+ if (!vals) {
+ fprintf(stderr, "Failed to allocate tracepoint check array");
+ return;
+ }
+
+ count = fill_addrs(vals, size, start);
+
+ compare_values = long_size == 4 ? compare_values_32 : compare_values_64;
+ qsort(vals, count, long_size, compare_values);
+
+ etrace->array = vals;
+ etrace->count = count;
+}
+
+static int cmp_addr_64(const void *K, const void *A)
+{
+ uint64_t key = *(const uint64_t *)K;
+ const uint64_t *a = A;
+
+ if (key < *a)
+ return -1;
+ return key > *a;
+}
+
+static int cmp_addr_32(const void *K, const void *A)
+{
+ uint32_t key = *(const uint32_t *)K;
+ const uint32_t *a = A;
+
+ if (key < *a)
+ return -1;
+ return key > *a;
+}
+
+static int find_event(void *array, size_t size, uint64_t key)
+{
+ uint32_t val_32;
+ uint64_t val_64;
+ void *val;
+ int (*cmp_func)(const void *A, const void *B);
+
+ if (long_size == 4) {
+ val_32 = key;
+ val = &val_32;
+ cmp_func = cmp_addr_32;
+ } else {
+ val_64 = key;
+ val = &val_64;
+ cmp_func = cmp_addr_64;
+ }
+ return bsearch(val, array, size, long_size, cmp_func) != NULL;
+}
+
+static int failed_event(struct elf_tracepoint *etrace, uint64_t addr)
+{
+ uint64_t sec_addr = shdr_addr(data_data_sec);
+ uint64_t sec_offset = shdr_offset(data_data_sec);
+ uint64_t offset = addr - sec_addr + sec_offset;
+ Elf_Ehdr *ehdr = etrace->ehdr;
+ void *name_ptr = (void *)ehdr + offset;
+ char *name;
+
+ if (name_ptr > file_map_end)
+ goto bad_addr;
+
+ if (long_size == 4)
+ addr = r(name_ptr);
+ else
+ addr = r8(name_ptr);
+
+ sec_addr = shdr_addr(ro_data_sec);
+ sec_offset = shdr_offset(ro_data_sec);
+ offset = addr - sec_addr + sec_offset;
+ name = (char *)ehdr + offset;
+ if ((void *)name > file_map_end)
+ goto bad_addr;
+
+ fprintf(stderr, "warning: tracepoint '%s' is unused.\n", name);
+ return 0;
+bad_addr:
+ fprintf(stderr, "warning: Failed to verify unused trace events.\n");
+ return -1;
+}
+
+static void check_tracepoints(struct elf_tracepoint *etrace)
+{
+ uint64_t sec_addr = shdr_addr(ro_data_sec);
+ uint64_t sec_offset = shdr_offset(ro_data_sec);
+ uint64_t offset = etrace->start_tracepoint - sec_addr + sec_offset;
+ uint64_t size = etrace->stop_tracepoint - etrace->start_tracepoint;
+ Elf_Ehdr *ehdr = etrace->ehdr;
+ void *start = (void *)ehdr + offset;
+ void *end = start + size;
+ void *addrs;
+ int inc = long_size;
+
+ if (!etrace->array)
+ return;
+
+ if (!size)
+ return;
+
+#ifdef PREL32_RELOCATIONS
+ inc = 4;
+#endif
+
+ sec_offset = sec_offset + (uint64_t)ehdr;
+ for (addrs = start; addrs < end; addrs += inc) {
+ uint64_t val;
+
+#ifdef PREL32_RELOCATIONS
+ val = r(addrs);
+ val += sec_addr + ((uint64_t)addrs - sec_offset);
+#else
+ val = long_size == 4 ? r(addrs) : r8(addrs);
+#endif
+ if (!find_event(etrace->array, etrace->count, val)) {
+ if (failed_event(etrace, val))
+ return;
+ }
+ }
+ free(etrace->array);
+}
+
+static void *tracepoint_check(struct elf_tracepoint *etrace, Elf_Shdr *symtab_sec,
+ const char *strtab)
+{
+ Elf_Sym *sym, *end_sym;
+ int symentsize = shdr_entsize(symtab_sec);
+ int found = 0;
+
+ sym = (void *)etrace->ehdr + shdr_offset(symtab_sec);
+ end_sym = (void *)sym + shdr_size(symtab_sec);
+
+ while (sym < end_sym) {
+ if (!strcmp(strtab + sym_name(sym), "__start___tracepoint_check")) {
+ etrace->start_tracepoint_check = sym_value(sym);
+ if (++found == 4)
+ break;
+ } else if (!strcmp(strtab + sym_name(sym), "__stop___tracepoint_check")) {
+ etrace->stop_tracepoint_check = sym_value(sym);
+ if (++found == 4)
+ break;
+ } else if (!strcmp(strtab + sym_name(sym), "__start___tracepoints_ptrs")) {
+ etrace->start_tracepoint = sym_value(sym);
+ if (++found == 4)
+ break;
+ } else if (!strcmp(strtab + sym_name(sym), "__stop___tracepoints_ptrs")) {
+ etrace->stop_tracepoint = sym_value(sym);
+ if (++found == 4)
+ break;
+ }
+ sym = (void *)sym + symentsize;
+ }
+
+ if (!etrace->start_tracepoint_check) {
+ fprintf(stderr, "warning: get start_tracepoint_check error!\n");
+ return NULL;
+ }
+ if (!etrace->stop_tracepoint_check) {
+ fprintf(stderr, "warning: get stop_tracepoint_check error!\n");
+ return NULL;
+ }
+ if (!etrace->start_tracepoint) {
+ fprintf(stderr, "warning: get start_tracepoint error!\n");
+ return NULL;
+ }
+ if (!etrace->stop_tracepoint) {
+ fprintf(stderr, "warning: get start_tracepoint error!\n");
+ return NULL;
+ }
+
+ make_trace_array(etrace);
+ check_tracepoints(etrace);
+
+ return NULL;
+}
+
static int do_sort(Elf_Ehdr *ehdr,
char const *const fname,
table_sort_t custom_sort)
@@ -948,6 +1158,7 @@ static int do_sort(Elf_Ehdr *ehdr,
int i;
unsigned int shnum;
unsigned int shstrndx;
+ struct elf_tracepoint tstruct = {0};
#ifdef MCOUNT_SORT_ENABLED
struct elf_mcount_loc mstruct = {0};
#endif
@@ -985,11 +1196,17 @@ static int do_sort(Elf_Ehdr *ehdr,
symtab_shndx = (Elf32_Word *)((const char *)ehdr +
shdr_offset(shdr));
-#ifdef MCOUNT_SORT_ENABLED
/* locate the .init.data section in vmlinux */
if (!strcmp(secstrings + idx, ".init.data"))
- mstruct.init_data_sec = shdr;
-#endif
+ init_data_sec = shdr;
+
+ /* locate the .ro.data section in vmlinux */
+ if (!strcmp(secstrings + idx, ".rodata"))
+ ro_data_sec = shdr;
+
+ /* locate the .data section in vmlinux */
+ if (!strcmp(secstrings + idx, ".data"))
+ data_data_sec = shdr;
#ifdef UNWINDER_ORC_ENABLED
/* locate the ORC unwind tables */
@@ -1055,7 +1272,7 @@ static int do_sort(Elf_Ehdr *ehdr,
mstruct.ehdr = ehdr;
get_mcount_loc(&mstruct, symtab_sec, strtab);
- if (!mstruct.init_data_sec || !mstruct.start_mcount_loc || !mstruct.stop_mcount_loc) {
+ if (!init_data_sec || !mstruct.start_mcount_loc || !mstruct.stop_mcount_loc) {
fprintf(stderr,
"incomplete mcount's sort in file: %s\n",
fname);
@@ -1071,6 +1288,9 @@ static int do_sort(Elf_Ehdr *ehdr,
}
#endif
+ tstruct.ehdr = ehdr;
+ tracepoint_check(&tstruct, symtab_sec, strtab);
+
if (custom_sort) {
custom_sort(extab_image, shdr_size(extab_sec));
} else {
@@ -1404,6 +1624,8 @@ int main(int argc, char *argv[])
continue;
}
+ file_map_end = addr + size;
+
if (do_file(argv[i], addr))
++n_error;
--
2.47.2
* [PATCH v3 3/5] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
From: Steven Rostedt @ 2025-07-22 15:20 UTC
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
On arm64, the addresses in the ELF file are stored in the RELA
sections, similar to the mcount location table. Add support for finding
the addresses in the RELA sections so that the actual addresses of the
tracepoints and the check variables can be used to determine whether
all tracepoints are used.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
scripts/sorttable.c | 186 ++++++++++++++++++++++++++++----------------
1 file changed, 118 insertions(+), 68 deletions(-)
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index ddcbec22ca96..edca5b06d8ce 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -578,6 +578,106 @@ static int compare_values_32(const void *a, const void *b)
static int (*compare_values)(const void *a, const void *b);
+static char m_err[ERRSTR_MAXSZ];
+static long rela_type;
+static bool sort_reloc;
+static int reloc_shnum;
+
+/* Fill the array with the content of the relocs */
+static int fill_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
+{
+ Elf_Shdr *shdr_start;
+ Elf_Rela *rel;
+ unsigned int shnum;
+ unsigned int count = 0;
+ int shentsize;
+ void *array_end = ptr + size;
+
+ shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
+ shentsize = ehdr_shentsize(ehdr);
+
+ shnum = ehdr_shnum(ehdr);
+ if (shnum == SHN_UNDEF)
+ shnum = shdr_size(shdr_start);
+
+ for (int i = 0; i < shnum; i++) {
+ Elf_Shdr *shdr = get_index(shdr_start, shentsize, i);
+ void *end;
+
+ if (shdr_type(shdr) != SHT_RELA)
+ continue;
+
+ reloc_shnum = i;
+
+ rel = (void *)ehdr + shdr_offset(shdr);
+ end = (void *)rel + shdr_size(shdr);
+
+ for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
+ uint64_t offset = rela_offset(rel);
+
+ if (offset >= start_loc && offset < start_loc + size) {
+ if (ptr + long_size > array_end) {
+ snprintf(m_err, ERRSTR_MAXSZ,
+ "Too many relocations");
+ return -1;
+ }
+
+ /* Make sure this has the correct type */
+ if (rela_info(rel) != rela_type) {
+ snprintf(m_err, ERRSTR_MAXSZ,
+ "rela has type %lx but expected %lx\n",
+ (long)rela_info(rel), rela_type);
+ return -1;
+ }
+
+ if (long_size == 4)
+ *(uint32_t *)ptr = rela_addend(rel);
+ else
+ *(uint64_t *)ptr = rela_addend(rel);
+ ptr += long_size;
+ count++;
+ }
+ }
+ }
+ return count;
+}
+
+static uint64_t get_addr_reloc(Elf_Ehdr *ehdr, uint64_t addr)
+{
+ Elf_Shdr *shdr_start;
+ Elf_Shdr *shdr;
+ Elf_Rela *rel;
+ unsigned int shnum;
+ int shentsize;
+ void *end;
+
+ shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
+ shentsize = ehdr_shentsize(ehdr);
+
+ shnum = ehdr_shnum(ehdr);
+ if (shnum == SHN_UNDEF)
+ shnum = shdr_size(shdr_start);
+
+ shdr = get_index(shdr_start, shentsize, reloc_shnum);
+ if (shdr_type(shdr) != SHT_RELA)
+ return 0;
+
+ rel = (void *)ehdr + shdr_offset(shdr);
+ end = (void *)rel + shdr_size(shdr);
+
+ for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
+ uint64_t offset = rela_offset(rel);
+
+ /* The addend holds the address for both 32-bit and 64-bit */
+ if (offset == addr)
+ return rela_addend(rel);
+ }
+ return 0;
+}
+
static int fill_addrs(void *ptr, uint64_t size, void *addrs)
{
void *end = ptr + size;
@@ -696,11 +796,6 @@ static int parse_symbols(const char *fname)
}
static pthread_t mcount_sort_thread;
-static bool sort_reloc;
-
-static long rela_type;
-
-static char m_err[ERRSTR_MAXSZ];
struct elf_mcount_loc {
Elf_Ehdr *ehdr;
@@ -708,63 +803,6 @@ struct elf_mcount_loc {
uint64_t stop_mcount_loc;
};
-/* Fill the array with the content of the relocs */
-static int fill_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
-{
- Elf_Shdr *shdr_start;
- Elf_Rela *rel;
- unsigned int shnum;
- unsigned int count = 0;
- int shentsize;
- void *array_end = ptr + size;
-
- shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
- shentsize = ehdr_shentsize(ehdr);
-
- shnum = ehdr_shnum(ehdr);
- if (shnum == SHN_UNDEF)
- shnum = shdr_size(shdr_start);
-
- for (int i = 0; i < shnum; i++) {
- Elf_Shdr *shdr = get_index(shdr_start, shentsize, i);
- void *end;
-
- if (shdr_type(shdr) != SHT_RELA)
- continue;
-
- rel = (void *)ehdr + shdr_offset(shdr);
- end = (void *)rel + shdr_size(shdr);
-
- for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
- uint64_t offset = rela_offset(rel);
-
- if (offset >= start_loc && offset < start_loc + size) {
- if (ptr + long_size > array_end) {
- snprintf(m_err, ERRSTR_MAXSZ,
- "Too many relocations");
- return -1;
- }
-
- /* Make sure this has the correct type */
- if (rela_info(rel) != rela_type) {
- snprintf(m_err, ERRSTR_MAXSZ,
- "rela has type %lx but expected %lx\n",
- (long)rela_info(rel), rela_type);
- return -1;
- }
-
- if (long_size == 4)
- *(uint32_t *)ptr = rela_addend(rel);
- else
- *(uint64_t *)ptr = rela_addend(rel);
- ptr += long_size;
- count++;
- }
- }
- }
- return count;
-}
-
/* Put the sorted vals back into the relocation elements */
static void replace_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
{
@@ -957,7 +995,15 @@ static void make_trace_array(struct elf_tracepoint *etrace)
return;
}
- count = fill_addrs(vals, size, start);
+ if (sort_reloc) {
+ count = fill_relocs(vals, size, ehdr, etrace->start_tracepoint_check);
+ /* gcc may use relocs to save the addresses, but clang does not. */
+ if (!count) {
+ count = fill_addrs(vals, size, start);
+ sort_reloc = 0;
+ }
+ } else
+ count = fill_addrs(vals, size, start);
compare_values = long_size == 4 ? compare_values_32 : compare_values_64;
qsort(vals, count, long_size, compare_values);
@@ -1017,10 +1063,14 @@ static int failed_event(struct elf_tracepoint *etrace, uint64_t addr)
if (name_ptr > file_map_end)
goto bad_addr;
- if (long_size == 4)
- addr = r(name_ptr);
- else
- addr = r8(name_ptr);
+ if (sort_reloc) {
+ addr = get_addr_reloc(ehdr, addr);
+ } else {
+ if (long_size == 4)
+ addr = r(name_ptr);
+ else
+ addr = r8(name_ptr);
+ }
sec_addr = shdr_addr(ro_data_sec);
sec_offset = shdr_offset(ro_data_sec);
@@ -1473,9 +1523,9 @@ static int do_file(char const *const fname, void *addr)
switch (r2(&ehdr->e32.e_machine)) {
case EM_AARCH64:
-#ifdef MCOUNT_SORT_ENABLED
sort_reloc = true;
rela_type = 0x403;
+#ifdef MCOUNT_SORT_ENABLED
/* arm64 uses patchable function entry placing before function */
before_func = 8;
#endif
--
2.47.2
* [PATCH v3 4/5] tracepoint: Do not warn for unused event that is exported
From: Steven Rostedt @ 2025-07-22 15:20 UTC
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
There are a few generic events that may only be used by modules. They
are defined in the kernel proper and then exported with
EXPORT_TRACEPOINT*(). Mark events that are exported as being used, even
though they still waste memory in the kernel proper.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
include/linux/tracepoint.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 2b96c7e94c52..8026a0659580 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -223,7 +223,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
#ifdef CONFIG_TRACEPOINT_VERIFY_USED
# define TRACEPOINT_CHECK(name) \
- static void __used __section("__tracepoint_check") *__trace_check = \
+ static void __used __section("__tracepoint_check") * \
+ __trace_check_##name = \
&__tracepoint_##name;
#else
# define TRACEPOINT_CHECK(name)
@@ -381,10 +382,12 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__DEFINE_TRACE_EXT(_name, NULL, PARAMS(_proto), PARAMS(_args));
#define EXPORT_TRACEPOINT_SYMBOL_GPL(name) \
+ TRACEPOINT_CHECK(name) \
EXPORT_SYMBOL_GPL(__tracepoint_##name); \
EXPORT_SYMBOL_GPL(__traceiter_##name); \
EXPORT_STATIC_CALL_GPL(tp_func_##name)
#define EXPORT_TRACEPOINT_SYMBOL(name) \
+ TRACEPOINT_CHECK(name) \
EXPORT_SYMBOL(__tracepoint_##name); \
EXPORT_SYMBOL(__traceiter_##name); \
EXPORT_STATIC_CALL(tp_func_##name)
--
2.47.2
* [PATCH v3 5/5] tracing: Call trace_ftrace_test_filter() for the event
From: Steven Rostedt @ 2025-07-22 15:20 UTC
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
The trace event filter bootup self test runs a bunch of filter logic
tests against the ftrace_test_filter event, but does not actually call
the event. Work is being done to trigger a warning if an event is
defined but not used. To quiet the warning, call the trace event under
an if statement where it is disabled, so that it is neither optimized
out nor actually executed.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events_filter.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 3885aadc434d..e4581e10782b 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -2900,6 +2900,10 @@ static __init int ftrace_test_event_filter(void)
if (i == DATA_CNT)
printk(KERN_CONT "OK\n");
+ /* Need to call ftrace_test_filter to prevent a warning */
+ if (!trace_ftrace_test_filter_enabled())
+ trace_ftrace_test_filter(1, 2, 3, 4, 5, 6, 7, 8);
+
return 0;
}
--
2.47.2
* Re: [PATCH v3 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used
From: Linus Torvalds @ 2025-07-22 16:16 UTC
To: Steven Rostedt
Cc: linux-kernel, linux-trace-kernel, linux-kbuild, llvm,
Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas
On Tue, 22 Jul 2025 at 08:21, Steven Rostedt <rostedt@kernel.org> wrote:
>
> Add a verifier that, for each tracepoint function that is used, injects
> a pointer to its tracepoint structure into a section called
> __tracepoint_check. Then on boot up, iterate over this section and for
> every tracepoint descriptor that is pointed to, update its ".funcs"
> field to (void *)1, as the .funcs field is only set when a tracepoint
> is registered. At this time, no tracepoints should be registered.
Why?
Doing this at runtime seems completely bass-ackwards.
If you want to do this verification, why don't you do it at build-time?
Linus
* Re: [PATCH v3 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used
From: Steven Rostedt @ 2025-07-22 17:04 UTC
To: Linus Torvalds
Cc: linux-kernel, linux-trace-kernel, linux-kbuild, llvm,
Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas
On Tue, 22 Jul 2025 09:16:49 -0700
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Tue, 22 Jul 2025 at 08:21, Steven Rostedt <rostedt@kernel.org> wrote:
> >
> > Add a verifier that, for each tracepoint function that is used, injects
> > a pointer to its tracepoint structure into a section called
> > __tracepoint_check. Then on boot up, iterate over this section and for
> > every tracepoint descriptor that is pointed to, update its ".funcs"
> > field to (void *)1, as the .funcs field is only set when a tracepoint
> > is registered. At this time, no tracepoints should be registered.
>
> Why?
>
> Doing this at runtime seems completely bass-ackwards.
>
> If you want to do this verification, why don't you do it at build-time?
The second patch does that, but it is much more complex due to having
to parse the ELF file. It hooks into the sorttable code, as that
already does most of the parsing that is needed. It uses the same trick
as the runtime verifier by checking tracepoints against the
__tracepoint_check section (though not with the '.funcs' test; instead
it sorts the section and does a binary search).

I kept this to verify that the build-time check works too, as the
runtime check is much simpler and easier to implement. Basically, I use
the runtime check to validate the build-time one. That's why enabling
this option also selects the build-time version. It was used to catch
bugs in the build version as I developed it.

And currently neither tests modules (I run this against an
allyesconfig), but the runtime version could easily be modified to test
modules too, whereas the build-time check would require adding another
ELF parser, as sorttable isn't run on module code.
I can remove the runtime check if you prefer.
-- Steve
* Re: [PATCH v3 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used
From: Linus Torvalds @ 2025-07-22 18:12 UTC
To: Steven Rostedt
Cc: linux-kernel, linux-trace-kernel, linux-kbuild, llvm,
Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas
On Tue, 22 Jul 2025 at 10:04, Steven Rostedt <rostedt@kernel.org> wrote:
>
> I kept this to verify that the build time worked too, as the runtime
> check is so much simpler and easier to implement.
Honestly, do it for your own private verification.

Don't send out the patch and expect anybody else to care.
The static verification may make sense. Adding code to the kernel just
to then do runtime verification does NOT.
This is not a "criticcal correctness" issue. This is a "help make
people not waste memory pointlessly by having tracepoints that aren't
used"
There is *NO* reason to add kernel code for this. Zero. Nada. None.
Linus