* [PATCH v4 0/4] tracepoints: Add warnings for unused tracepoints and trace events
@ 2025-07-23 19:41 Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time Steven Rostedt
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 19:41 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
Every trace event can take up to 5K of memory in text and metadata, regardless
of whether it is used. Trace events should not be created if they are not
used. Currently there are over a hundred events in the kernel that are defined
but unused, either because their callers were removed without removing the
trace event with them, or because a config hides the trace event caller but
not the trace event itself. In some cases, trace events were simply added but
never called for whatever reason. The number of unused trace events continues
to grow.
This patch series aims to fix this.
The first patch creates a new section called __tracepoint_check, where every
caller of a tracepoint creates a variable, placed in this section, holding a
pointer to the tracepoint it uses. scripts/sorttable.c is modified to read the
__tracepoint_check section, sort it, and then read the __tracepoint_ptr
section that holds all compiled-in tracepoints. It makes sure that every
tracepoint is found in the check section and, if not, prints a warning
message. This lists the unused tracepoints at build time.
The second patch updates sorttable to work for arm64 when compiled with gcc,
as gcc's arm64 build doesn't put the addresses directly in the section but
saves them in the RELA sections instead. This mostly reuses the work that was
needed to do the mcount sorting at boot up on arm64.
The third patch adds EXPORT_TRACEPOINT() to the __tracepoint_check section as
well. There were several locations that add tracepoints in the kernel proper
that are only used in modules. It was getting quite complex trying to move
things around, so I just decided to make any tracepoint in an
EXPORT_TRACEPOINT() count as "used". I'm using the analogy of static and
global functions: an unused static function gets a warning but an unused
global one does not.
The last patch updates the trace_ftrace_test_filter boot up self test. That
selftest creates a trace event and runs a bunch of filter tests on it without
actually calling the tracepoint. To quiet the warning, the selftest tracepoint
is now called within an if (!trace_<event>_enabled()) block, where it will not
be optimized out, nor will it actually fire.
Changes since v3: https://lore.kernel.org/linux-trace-kernel/20250722152053.343028095@kernel.org/
- Folded this patch with patch 2: https://lore.kernel.org/20250722152157.839415861@kernel.org
- Removed the runtime boot check and only have the build time check (Linus Torvalds).
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
unused-tracepoints/core
Head SHA1: 474058274955ba3bfec3053171cfa468e4849b5e
Steven Rostedt (4):
tracing: sorttable: Add a tracepoint verification check at build time
tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
tracepoint: Do not warn for unused event that is exported
tracing: Call trace_ftrace_test_filter() for the event
----
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/tracepoint.h | 13 ++
kernel/trace/Kconfig | 19 ++
kernel/trace/trace_events_filter.c | 4 +
scripts/Makefile | 4 +
scripts/sorttable.c | 444 ++++++++++++++++++++++++++++++-------
6 files changed, 399 insertions(+), 86 deletions(-)
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time
2025-07-23 19:41 [PATCH v4 0/4] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
@ 2025-07-23 19:41 ` Steven Rostedt
2025-07-23 20:31 ` Linus Torvalds
2025-07-23 19:41 ` [PATCH v4 2/4] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address Steven Rostedt
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 19:41 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
If a tracepoint is defined via DECLARE_TRACE() or TRACE_EVENT() but never
called (via the trace_<tracepoint>() function), its metadata is still
around in memory and not discarded.
When created via TRACE_EVENT() the situation is worse, because TRACE_EVENT()
creates metadata that can be around 5k per trace event. Each unused trace
event therefore wastes several thousand bytes.
Add a verifier that injects a pointer to the tracepoint structure into the
functions that use it; the pointer is placed in a section called
__tracepoint_check. Every builtin tracepoint in the tracepoint_ptr section
that is used will then have a corresponding pointer back to it in the
__tracepoint_check section.
Update the sorttable code to check the tracepoint_check and tracepoint_ptr
sections to see what trace events have been created but not used.
List the tracepoints that are not used at build time. Note, this currently
only handles tracepoints that are builtin and not in modules.
Enabling this currently with a given config produces:
warning: tracepoint 'sched_move_numa' is unused.
warning: tracepoint 'sched_stick_numa' is unused.
warning: tracepoint 'sched_swap_numa' is unused.
warning: tracepoint 'pelt_hw_tp' is unused.
warning: tracepoint 'pelt_irq_tp' is unused.
warning: tracepoint 'rcu_preempt_task' is unused.
warning: tracepoint 'rcu_unlock_preempted_task' is unused.
warning: tracepoint 'xdp_bulk_tx' is unused.
warning: tracepoint 'xdp_redirect_map' is unused.
warning: tracepoint 'xdp_redirect_map_err' is unused.
warning: tracepoint 'vma_mas_szero' is unused.
warning: tracepoint 'vma_store' is unused.
warning: tracepoint 'hugepage_set_pmd' is unused.
warning: tracepoint 'hugepage_set_pud' is unused.
warning: tracepoint 'hugepage_update_pmd' is unused.
warning: tracepoint 'hugepage_update_pud' is unused.
warning: tracepoint 'block_rq_remap' is unused.
warning: tracepoint 'xhci_dbc_handle_event' is unused.
warning: tracepoint 'xhci_dbc_handle_transfer' is unused.
warning: tracepoint 'xhci_dbc_gadget_ep_queue' is unused.
warning: tracepoint 'xhci_dbc_alloc_request' is unused.
warning: tracepoint 'xhci_dbc_free_request' is unused.
warning: tracepoint 'xhci_dbc_queue_request' is unused.
warning: tracepoint 'xhci_dbc_giveback_request' is unused.
warning: tracepoint 'tcp_ao_wrong_maclen' is unused.
warning: tracepoint 'tcp_ao_mismatch' is unused.
warning: tracepoint 'tcp_ao_key_not_found' is unused.
warning: tracepoint 'tcp_ao_rnext_request' is unused.
warning: tracepoint 'tcp_ao_synack_no_key' is unused.
warning: tracepoint 'tcp_ao_snd_sne_update' is unused.
warning: tracepoint 'tcp_ao_rcv_sne_update' is unused.
Some of the above are truly unused, but others are unused because their
"trace_" calls are inside configs, in which case the defined tracepoints
should also be inside those same configs. Others are architecture specific
but defined in generic code, where they should either be moved to the
architecture or be surrounded by #ifdef for the architectures they are for.
Link: https://lore.kernel.org/all/20250528114549.4d8a5e03@gandalf.local.home/
Changes since v3: https://lore.kernel.org/20250722152157.664260747@kernel.org
- Folded this patch with patch 2: https://lore.kernel.org/20250722152157.839415861@kernel.org
- Removed the runtime boot check and only have the build time check (Linus Torvalds).
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/tracepoint.h | 10 ++
kernel/trace/Kconfig | 19 +++
scripts/Makefile | 4 +
scripts/sorttable.c | 268 +++++++++++++++++++++++++++---
5 files changed, 279 insertions(+), 23 deletions(-)
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index fa5f19b8d53a..600d8b51e315 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -708,6 +708,7 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
MCOUNT_REC() \
*(.init.rodata .init.rodata.*) \
FTRACE_EVENTS() \
+ BOUNDED_SECTION_BY(__tracepoint_check, ___tracepoint_check) \
TRACE_SYSCALLS() \
KPROBE_BLACKLIST() \
ERROR_INJECT_WHITELIST() \
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 826ce3f8e1f8..2b96c7e94c52 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -221,6 +221,14 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__do_trace_##name(args); \
}
+#ifdef CONFIG_TRACEPOINT_VERIFY_USED
+# define TRACEPOINT_CHECK(name) \
+ static void __used __section("__tracepoint_check") *__trace_check = \
+ &__tracepoint_##name;
+#else
+# define TRACEPOINT_CHECK(name)
+#endif
+
/*
* Make sure the alignment of the structure in the __tracepoints section will
* not add unwanted padding between the beginning of the section and the
@@ -270,6 +278,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
static inline void __do_trace_##name(proto) \
{ \
+ TRACEPOINT_CHECK(name) \
if (cond) { \
guard(preempt_notrace)(); \
__DO_TRACE_CALL(name, TP_ARGS(args)); \
@@ -289,6 +298,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
static inline void __do_trace_##name(proto) \
{ \
+ TRACEPOINT_CHECK(name) \
guard(rcu_tasks_trace)(); \
__DO_TRACE_CALL(name, TP_ARGS(args)); \
} \
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 35448f7233fe..90ffb83a43dc 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -1050,6 +1050,25 @@ config GCOV_PROFILE_FTRACE
Note that on a kernel compiled with this config, ftrace will
run significantly slower.
+config TRACEPOINT_VERIFY_USED
+ bool
+ help
+ This option creates a section, populated when tracepoints are
+ used, that holds a pointer to each tracepoint that is used.
+ This can be used to test whether a defined tracepoint is
+ used or not.
+
+config TRACEPOINT_WARN_ON_UNUSED_BUILD
+ bool "Warn on build if a tracepoint is defined but not used"
+ depends on TRACEPOINTS
+ select TRACEPOINT_VERIFY_USED
+ help
+ This option checks if every builtin defined tracepoint is
+ used in the code. If a tracepoint is defined but not used,
+ it will waste memory as its metadata is still created.
+ This will cause a warning at build time if the architecture
+ supports it.
+
config FTRACE_SELFTEST
bool
diff --git a/scripts/Makefile b/scripts/Makefile
index 46f860529df5..f81947ec9486 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -42,6 +42,10 @@ HOSTCFLAGS_sorttable.o += -I$(srctree)/tools/arch/$(SRCARCH)/include
HOSTCFLAGS_sorttable.o += -DUNWINDER_ORC_ENABLED
endif
+ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+HOSTCFLAGS_sorttable.o += -DPREL32_RELOCATIONS
+endif
+
ifdef CONFIG_BUILDTIME_MCOUNT_SORT
HOSTCFLAGS_sorttable.o += -DMCOUNT_SORT_ENABLED
endif
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index deed676bfe38..ddcbec22ca96 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -92,6 +92,12 @@ static void (*w)(uint32_t, uint32_t *);
static void (*w8)(uint64_t, uint64_t *);
typedef void (*table_sort_t)(char *, int);
+static Elf_Shdr *init_data_sec;
+static Elf_Shdr *ro_data_sec;
+static Elf_Shdr *data_data_sec;
+
+static void *file_map_end;
+
static struct elf_funcs {
int (*compare_extable)(const void *a, const void *b);
uint64_t (*ehdr_shoff)(Elf_Ehdr *ehdr);
@@ -550,8 +556,6 @@ static void *sort_orctable(void *arg)
}
#endif
-#ifdef MCOUNT_SORT_ENABLED
-
static int compare_values_64(const void *a, const void *b)
{
uint64_t av = *(uint64_t *)a;
@@ -574,6 +578,22 @@ static int compare_values_32(const void *a, const void *b)
static int (*compare_values)(const void *a, const void *b);
+static int fill_addrs(void *ptr, uint64_t size, void *addrs)
+{
+ void *end = ptr + size;
+ int count = 0;
+
+ for (; ptr < end; ptr += long_size, addrs += long_size, count++) {
+ if (long_size == 4)
+ *(uint32_t *)ptr = r(addrs);
+ else
+ *(uint64_t *)ptr = r8(addrs);
+ }
+ return count;
+}
+
+#ifdef MCOUNT_SORT_ENABLED
+
/* Only used for sorting mcount table */
static void rela_write_addend(Elf_Rela *rela, uint64_t val)
{
@@ -684,7 +704,6 @@ static char m_err[ERRSTR_MAXSZ];
struct elf_mcount_loc {
Elf_Ehdr *ehdr;
- Elf_Shdr *init_data_sec;
uint64_t start_mcount_loc;
uint64_t stop_mcount_loc;
};
@@ -785,20 +804,6 @@ static void replace_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t st
}
}
-static int fill_addrs(void *ptr, uint64_t size, void *addrs)
-{
- void *end = ptr + size;
- int count = 0;
-
- for (; ptr < end; ptr += long_size, addrs += long_size, count++) {
- if (long_size == 4)
- *(uint32_t *)ptr = r(addrs);
- else
- *(uint64_t *)ptr = r8(addrs);
- }
- return count;
-}
-
static void replace_addrs(void *ptr, uint64_t size, void *addrs)
{
void *end = ptr + size;
@@ -815,8 +820,8 @@ static void replace_addrs(void *ptr, uint64_t size, void *addrs)
static void *sort_mcount_loc(void *arg)
{
struct elf_mcount_loc *emloc = (struct elf_mcount_loc *)arg;
- uint64_t offset = emloc->start_mcount_loc - shdr_addr(emloc->init_data_sec)
- + shdr_offset(emloc->init_data_sec);
+ uint64_t offset = emloc->start_mcount_loc - shdr_addr(init_data_sec)
+ + shdr_offset(init_data_sec);
uint64_t size = emloc->stop_mcount_loc - emloc->start_mcount_loc;
unsigned char *start_loc = (void *)emloc->ehdr + offset;
Elf_Ehdr *ehdr = emloc->ehdr;
@@ -920,6 +925,211 @@ static void get_mcount_loc(struct elf_mcount_loc *emloc, Elf_Shdr *symtab_sec,
static inline int parse_symbols(const char *fname) { return 0; }
#endif
+struct elf_tracepoint {
+ Elf_Ehdr *ehdr;
+ uint64_t start_tracepoint_check;
+ uint64_t stop_tracepoint_check;
+ uint64_t start_tracepoint;
+ uint64_t stop_tracepoint;
+ uint64_t *array;
+ int count;
+};
+
+static void make_trace_array(struct elf_tracepoint *etrace)
+{
+ uint64_t offset = etrace->start_tracepoint_check - shdr_addr(init_data_sec)
+ + shdr_offset(init_data_sec);
+ uint64_t size = etrace->stop_tracepoint_check - etrace->start_tracepoint_check;
+ Elf_Ehdr *ehdr = etrace->ehdr;
+ void *start = (void *)ehdr + offset;
+ int count = 0;
+ void *vals;
+
+ etrace->array = NULL;
+
+ /* If CONFIG_TRACEPOINT_VERIFY_USED is not set, there's nothing to do */
+ if (!size)
+ return;
+
+ vals = malloc(long_size * size);
+ if (!vals) {
+ fprintf(stderr, "Failed to allocate tracepoint check array");
+ return;
+ }
+
+ count = fill_addrs(vals, size, start);
+
+ compare_values = long_size == 4 ? compare_values_32 : compare_values_64;
+ qsort(vals, count, long_size, compare_values);
+
+ etrace->array = vals;
+ etrace->count = count;
+}
+
+static int cmp_addr_64(const void *K, const void *A)
+{
+ uint64_t key = *(const uint64_t *)K;
+ const uint64_t *a = A;
+
+ if (key < *a)
+ return -1;
+ return key > *a;
+}
+
+static int cmp_addr_32(const void *K, const void *A)
+{
+ uint32_t key = *(const uint32_t *)K;
+ const uint32_t *a = A;
+
+ if (key < *a)
+ return -1;
+ return key > *a;
+}
+
+static int find_event(void *array, size_t size, uint64_t key)
+{
+ uint32_t val_32;
+ uint64_t val_64;
+ void *val;
+ int (*cmp_func)(const void *A, const void *B);
+
+ if (long_size == 4) {
+ val_32 = key;
+ val = &val_32;
+ cmp_func = cmp_addr_32;
+ } else {
+ val_64 = key;
+ val = &val_64;
+ cmp_func = cmp_addr_64;
+ }
+ return bsearch(val, array, size, long_size, cmp_func) != NULL;
+}
+
+static int failed_event(struct elf_tracepoint *etrace, uint64_t addr)
+{
+ uint64_t sec_addr = shdr_addr(data_data_sec);
+ uint64_t sec_offset = shdr_offset(data_data_sec);
+ uint64_t offset = addr - sec_addr + sec_offset;
+ Elf_Ehdr *ehdr = etrace->ehdr;
+ void *name_ptr = (void *)ehdr + offset;
+ char *name;
+
+ if (name_ptr > file_map_end)
+ goto bad_addr;
+
+ if (long_size == 4)
+ addr = r(name_ptr);
+ else
+ addr = r8(name_ptr);
+
+ sec_addr = shdr_addr(ro_data_sec);
+ sec_offset = shdr_offset(ro_data_sec);
+ offset = addr - sec_addr + sec_offset;
+ name = (char *)ehdr + offset;
+ if ((void *)name > file_map_end)
+ goto bad_addr;
+
+ fprintf(stderr, "warning: tracepoint '%s' is unused.\n", name);
+ return 0;
+bad_addr:
+ fprintf(stderr, "warning: Failed to verify unused trace events.\n");
+ return -1;
+}
+
+static void check_tracepoints(struct elf_tracepoint *etrace)
+{
+ uint64_t sec_addr = shdr_addr(ro_data_sec);
+ uint64_t sec_offset = shdr_offset(ro_data_sec);
+ uint64_t offset = etrace->start_tracepoint - sec_addr + sec_offset;
+ uint64_t size = etrace->stop_tracepoint - etrace->start_tracepoint;
+ Elf_Ehdr *ehdr = etrace->ehdr;
+ void *start = (void *)ehdr + offset;
+ void *end = start + size;
+ void *addrs;
+ int inc = long_size;
+
+ if (!etrace->array)
+ return;
+
+ if (!size)
+ return;
+
+#ifdef PREL32_RELOCATIONS
+ inc = 4;
+#endif
+
+ sec_offset = sec_offset + (uint64_t)ehdr;
+ for (addrs = start; addrs < end; addrs += inc) {
+ uint64_t val;
+
+#ifdef PREL32_RELOCATIONS
+ val = r(addrs);
+ val += sec_addr + ((uint64_t)addrs - sec_offset);
+#else
+ val = long_size == 4 ? r(addrs) : r8(addrs);
+#endif
+ if (!find_event(etrace->array, etrace->count, val)) {
+ if (failed_event(etrace, val))
+ return;
+ }
+ }
+ free(etrace->array);
+}
+
+static void *tracepoint_check(struct elf_tracepoint *etrace, Elf_Shdr *symtab_sec,
+ const char *strtab)
+{
+ Elf_Sym *sym, *end_sym;
+ int symentsize = shdr_entsize(symtab_sec);
+ int found = 0;
+
+ sym = (void *)etrace->ehdr + shdr_offset(symtab_sec);
+ end_sym = (void *)sym + shdr_size(symtab_sec);
+
+ while (sym < end_sym) {
+ if (!strcmp(strtab + sym_name(sym), "__start___tracepoint_check")) {
+ etrace->start_tracepoint_check = sym_value(sym);
+ if (++found == 4)
+ break;
+ } else if (!strcmp(strtab + sym_name(sym), "__stop___tracepoint_check")) {
+ etrace->stop_tracepoint_check = sym_value(sym);
+ if (++found == 4)
+ break;
+ } else if (!strcmp(strtab + sym_name(sym), "__start___tracepoints_ptrs")) {
+ etrace->start_tracepoint = sym_value(sym);
+ if (++found == 4)
+ break;
+ } else if (!strcmp(strtab + sym_name(sym), "__stop___tracepoints_ptrs")) {
+ etrace->stop_tracepoint = sym_value(sym);
+ if (++found == 4)
+ break;
+ }
+ sym = (void *)sym + symentsize;
+ }
+
+ if (!etrace->start_tracepoint_check) {
+ fprintf(stderr, "warning: get start_tracepoint_check error!\n");
+ return NULL;
+ }
+ if (!etrace->stop_tracepoint_check) {
+ fprintf(stderr, "warning: get stop_tracepoint_check error!\n");
+ return NULL;
+ }
+ if (!etrace->start_tracepoint) {
+ fprintf(stderr, "warning: get start_tracepoint error!\n");
+ return NULL;
+ }
+ if (!etrace->stop_tracepoint) {
+ fprintf(stderr, "warning: get stop_tracepoint error!\n");
+ return NULL;
+ }
+
+ make_trace_array(etrace);
+ check_tracepoints(etrace);
+
+ return NULL;
+}
+
static int do_sort(Elf_Ehdr *ehdr,
char const *const fname,
table_sort_t custom_sort)
@@ -948,6 +1158,7 @@ static int do_sort(Elf_Ehdr *ehdr,
int i;
unsigned int shnum;
unsigned int shstrndx;
+ struct elf_tracepoint tstruct = {0};
#ifdef MCOUNT_SORT_ENABLED
struct elf_mcount_loc mstruct = {0};
#endif
@@ -985,11 +1196,17 @@ static int do_sort(Elf_Ehdr *ehdr,
symtab_shndx = (Elf32_Word *)((const char *)ehdr +
shdr_offset(shdr));
-#ifdef MCOUNT_SORT_ENABLED
/* locate the .init.data section in vmlinux */
if (!strcmp(secstrings + idx, ".init.data"))
- mstruct.init_data_sec = shdr;
-#endif
+ init_data_sec = shdr;
+
+ /* locate the .ro.data section in vmlinux */
+ if (!strcmp(secstrings + idx, ".rodata"))
+ ro_data_sec = shdr;
+
+ /* locate the .data section in vmlinux */
+ if (!strcmp(secstrings + idx, ".data"))
+ data_data_sec = shdr;
#ifdef UNWINDER_ORC_ENABLED
/* locate the ORC unwind tables */
@@ -1055,7 +1272,7 @@ static int do_sort(Elf_Ehdr *ehdr,
mstruct.ehdr = ehdr;
get_mcount_loc(&mstruct, symtab_sec, strtab);
- if (!mstruct.init_data_sec || !mstruct.start_mcount_loc || !mstruct.stop_mcount_loc) {
+ if (!init_data_sec || !mstruct.start_mcount_loc || !mstruct.stop_mcount_loc) {
fprintf(stderr,
"incomplete mcount's sort in file: %s\n",
fname);
@@ -1071,6 +1288,9 @@ static int do_sort(Elf_Ehdr *ehdr,
}
#endif
+ tstruct.ehdr = ehdr;
+ tracepoint_check(&tstruct, symtab_sec, strtab);
+
if (custom_sort) {
custom_sort(extab_image, shdr_size(extab_sec));
} else {
@@ -1404,6 +1624,8 @@ int main(int argc, char *argv[])
continue;
}
+ file_map_end = addr + size;
+
if (do_file(argv[i], addr))
++n_error;
--
2.47.2
* [PATCH v4 2/4] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
2025-07-23 19:41 [PATCH v4 0/4] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time Steven Rostedt
@ 2025-07-23 19:41 ` Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 3/4] tracepoint: Do not warn for unused event that is exported Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 4/4] tracing: Call trace_ftrace_test_filter() for the event Steven Rostedt
3 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 19:41 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
The addresses in the arm64 ELF file are stored in the RELA sections, similar
to the mcount location table. Add support for finding the addresses in the
RELA sections, so that the actual addresses can be used both for checking the
tracepoints and for reading the check variables that show whether all
tracepoints are used.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
scripts/sorttable.c | 186 ++++++++++++++++++++++++++++----------------
1 file changed, 118 insertions(+), 68 deletions(-)
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index ddcbec22ca96..edca5b06d8ce 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -578,6 +578,106 @@ static int compare_values_32(const void *a, const void *b)
static int (*compare_values)(const void *a, const void *b);
+static char m_err[ERRSTR_MAXSZ];
+static long rela_type;
+static bool sort_reloc;
+static int reloc_shnum;
+
+/* Fill the array with the content of the relocs */
+static int fill_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
+{
+ Elf_Shdr *shdr_start;
+ Elf_Rela *rel;
+ unsigned int shnum;
+ unsigned int count = 0;
+ int shentsize;
+ void *array_end = ptr + size;
+
+ shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
+ shentsize = ehdr_shentsize(ehdr);
+
+ shnum = ehdr_shnum(ehdr);
+ if (shnum == SHN_UNDEF)
+ shnum = shdr_size(shdr_start);
+
+ for (int i = 0; i < shnum; i++) {
+ Elf_Shdr *shdr = get_index(shdr_start, shentsize, i);
+ void *end;
+
+ if (shdr_type(shdr) != SHT_RELA)
+ continue;
+
+ reloc_shnum = i;
+
+ rel = (void *)ehdr + shdr_offset(shdr);
+ end = (void *)rel + shdr_size(shdr);
+
+ for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
+ uint64_t offset = rela_offset(rel);
+
+ if (offset >= start_loc && offset < start_loc + size) {
+ if (ptr + long_size > array_end) {
+ snprintf(m_err, ERRSTR_MAXSZ,
+ "Too many relocations");
+ return -1;
+ }
+
+ /* Make sure this has the correct type */
+ if (rela_info(rel) != rela_type) {
+ snprintf(m_err, ERRSTR_MAXSZ,
+ "rela has type %lx but expected %lx\n",
+ (long)rela_info(rel), rela_type);
+ return -1;
+ }
+
+ if (long_size == 4)
+ *(uint32_t *)ptr = rela_addend(rel);
+ else
+ *(uint64_t *)ptr = rela_addend(rel);
+ ptr += long_size;
+ count++;
+ }
+ }
+ }
+ return count;
+}
+
+static uint64_t get_addr_reloc(Elf_Ehdr *ehdr, uint64_t addr)
+{
+ Elf_Shdr *shdr_start;
+ Elf_Shdr *shdr;
+ Elf_Rela *rel;
+ unsigned int shnum;
+ int shentsize;
+ void *end;
+
+ shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
+ shentsize = ehdr_shentsize(ehdr);
+
+ shnum = ehdr_shnum(ehdr);
+ if (shnum == SHN_UNDEF)
+ shnum = shdr_size(shdr_start);
+
+ shdr = get_index(shdr_start, shentsize, reloc_shnum);
+ if (shdr_type(shdr) != SHT_RELA)
+ return 0;
+
+ rel = (void *)ehdr + shdr_offset(shdr);
+ end = (void *)rel + shdr_size(shdr);
+
+ for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
+ uint64_t offset = rela_offset(rel);
+
+ if (offset == addr) {
+ if (long_size == 4)
+ return rela_addend(rel);
+ else
+ return rela_addend(rel);
+ }
+ }
+ return 0;
+}
+
static int fill_addrs(void *ptr, uint64_t size, void *addrs)
{
void *end = ptr + size;
@@ -696,11 +796,6 @@ static int parse_symbols(const char *fname)
}
static pthread_t mcount_sort_thread;
-static bool sort_reloc;
-
-static long rela_type;
-
-static char m_err[ERRSTR_MAXSZ];
struct elf_mcount_loc {
Elf_Ehdr *ehdr;
@@ -708,63 +803,6 @@ struct elf_mcount_loc {
uint64_t stop_mcount_loc;
};
-/* Fill the array with the content of the relocs */
-static int fill_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
-{
- Elf_Shdr *shdr_start;
- Elf_Rela *rel;
- unsigned int shnum;
- unsigned int count = 0;
- int shentsize;
- void *array_end = ptr + size;
-
- shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
- shentsize = ehdr_shentsize(ehdr);
-
- shnum = ehdr_shnum(ehdr);
- if (shnum == SHN_UNDEF)
- shnum = shdr_size(shdr_start);
-
- for (int i = 0; i < shnum; i++) {
- Elf_Shdr *shdr = get_index(shdr_start, shentsize, i);
- void *end;
-
- if (shdr_type(shdr) != SHT_RELA)
- continue;
-
- rel = (void *)ehdr + shdr_offset(shdr);
- end = (void *)rel + shdr_size(shdr);
-
- for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
- uint64_t offset = rela_offset(rel);
-
- if (offset >= start_loc && offset < start_loc + size) {
- if (ptr + long_size > array_end) {
- snprintf(m_err, ERRSTR_MAXSZ,
- "Too many relocations");
- return -1;
- }
-
- /* Make sure this has the correct type */
- if (rela_info(rel) != rela_type) {
- snprintf(m_err, ERRSTR_MAXSZ,
- "rela has type %lx but expected %lx\n",
- (long)rela_info(rel), rela_type);
- return -1;
- }
-
- if (long_size == 4)
- *(uint32_t *)ptr = rela_addend(rel);
- else
- *(uint64_t *)ptr = rela_addend(rel);
- ptr += long_size;
- count++;
- }
- }
- }
- return count;
-}
-
/* Put the sorted vals back into the relocation elements */
static void replace_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
{
@@ -957,7 +995,15 @@ static void make_trace_array(struct elf_tracepoint *etrace)
return;
}
- count = fill_addrs(vals, size, start);
+ if (sort_reloc) {
+ count = fill_relocs(vals, size, ehdr, etrace->start_tracepoint_check);
+ /* gcc may use relocs to save the addresses, but clang does not. */
+ if (!count) {
+ count = fill_addrs(vals, size, start);
+ sort_reloc = 0;
+ }
+ } else
+ count = fill_addrs(vals, size, start);
compare_values = long_size == 4 ? compare_values_32 : compare_values_64;
qsort(vals, count, long_size, compare_values);
@@ -1017,10 +1063,14 @@ static int failed_event(struct elf_tracepoint *etrace, uint64_t addr)
if (name_ptr > file_map_end)
goto bad_addr;
- if (long_size == 4)
- addr = r(name_ptr);
- else
- addr = r8(name_ptr);
+ if (sort_reloc) {
+ addr = get_addr_reloc(ehdr, addr);
+ } else {
+ if (long_size == 4)
+ addr = r(name_ptr);
+ else
+ addr = r8(name_ptr);
+ }
sec_addr = shdr_addr(ro_data_sec);
sec_offset = shdr_offset(ro_data_sec);
@@ -1473,9 +1523,9 @@ static int do_file(char const *const fname, void *addr)
switch (r2(&ehdr->e32.e_machine)) {
case EM_AARCH64:
-#ifdef MCOUNT_SORT_ENABLED
sort_reloc = true;
rela_type = 0x403;
+#ifdef MCOUNT_SORT_ENABLED
/* arm64 uses patchable function entry placing before function */
before_func = 8;
#endif
--
2.47.2
* [PATCH v4 3/4] tracepoint: Do not warn for unused event that is exported
2025-07-23 19:41 [PATCH v4 0/4] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 2/4] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address Steven Rostedt
@ 2025-07-23 19:41 ` Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 4/4] tracing: Call trace_ftrace_test_filter() for the event Steven Rostedt
3 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 19:41 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
There are a few generic events that may only be used by modules. They are
defined and then exported with EXPORT_TRACEPOINT*(). Mark events that are
exported as used, even though they still waste memory in the kernel proper.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
include/linux/tracepoint.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 2b96c7e94c52..8026a0659580 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -223,7 +223,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
#ifdef CONFIG_TRACEPOINT_VERIFY_USED
# define TRACEPOINT_CHECK(name) \
- static void __used __section("__tracepoint_check") *__trace_check = \
+ static void __used __section("__tracepoint_check") * \
+ __trace_check_##name = \
&__tracepoint_##name;
#else
# define TRACEPOINT_CHECK(name)
@@ -381,10 +382,12 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
__DEFINE_TRACE_EXT(_name, NULL, PARAMS(_proto), PARAMS(_args));
#define EXPORT_TRACEPOINT_SYMBOL_GPL(name) \
+ TRACEPOINT_CHECK(name) \
EXPORT_SYMBOL_GPL(__tracepoint_##name); \
EXPORT_SYMBOL_GPL(__traceiter_##name); \
EXPORT_STATIC_CALL_GPL(tp_func_##name)
#define EXPORT_TRACEPOINT_SYMBOL(name) \
+ TRACEPOINT_CHECK(name) \
EXPORT_SYMBOL(__tracepoint_##name); \
EXPORT_SYMBOL(__traceiter_##name); \
EXPORT_STATIC_CALL(tp_func_##name)
--
2.47.2
* [PATCH v4 4/4] tracing: Call trace_ftrace_test_filter() for the event
2025-07-23 19:41 [PATCH v4 0/4] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
` (2 preceding siblings ...)
2025-07-23 19:41 ` [PATCH v4 3/4] tracepoint: Do not warn for unused event that is exported Steven Rostedt
@ 2025-07-23 19:41 ` Steven Rostedt
3 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 19:41 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-kbuild, llvm
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas, Linus Torvalds
From: Steven Rostedt <rostedt@goodmis.org>
The trace event filter bootup self test runs a bunch of filter tests against
the ftrace_test_filter event, but does not actually call the event. Work is
being done to warn when an event is defined but not used. To quiet the
warning, call the trace event under an if statement where the event is
disabled, so the call is never executed but doesn't get optimized out.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/trace_events_filter.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 3885aadc434d..e4581e10782b 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -2900,6 +2900,10 @@ static __init int ftrace_test_event_filter(void)
if (i == DATA_CNT)
printk(KERN_CONT "OK\n");
+ /* Need to call ftrace_test_filter to prevent a warning */
+ if (!trace_ftrace_test_filter_enabled())
+ trace_ftrace_test_filter(1, 2, 3, 4, 5, 6, 7, 8);
+
return 0;
}
--
2.47.2
* Re: [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time
2025-07-23 19:41 ` [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time Steven Rostedt
@ 2025-07-23 20:31 ` Linus Torvalds
2025-07-23 22:27 ` Steven Rostedt
0 siblings, 1 reply; 10+ messages in thread
From: Linus Torvalds @ 2025-07-23 20:31 UTC (permalink / raw)
To: Steven Rostedt
Cc: linux-kernel, linux-trace-kernel, linux-kbuild, llvm,
Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Arnd Bergmann, Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
Nick Desaulniers, Catalin Marinas
On Wed, 23 Jul 2025 at 12:42, Steven Rostedt <rostedt@kernel.org> wrote:
>
> kernel/trace/Kconfig | 19 +++
Annoying "one step forward, two steps back" change.
You literally sent a "remove one pointless Kconfig option" patch
within ten minutes of sending out this "add two pointless Kconfig
options" patch.
Because as long as it's a build-time thing, please just fix the
problems it points out, and it should have no real cost to being
enabled.
We don't want to ask people questions that don't matter.
Of course, because you *used* to check this at run-time, you put the
new "__tracepoint_check" table in a section that is actually loaded
into memory, and it appears to be still there.
Just put it in the discard section, the same way we have (for example)
the export_symbols table that we also check at build-time without
actually ever making it part of the kernel.
Linus
* Re: [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time
2025-07-23 20:31 ` Linus Torvalds
@ 2025-07-23 22:27 ` Steven Rostedt
2025-07-23 22:33 ` Linus Torvalds
2025-07-23 22:33 ` Steven Rostedt
0 siblings, 2 replies; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 22:27 UTC (permalink / raw)
To: Linus Torvalds
Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, linux-kbuild,
llvm, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
Andrew Morton, Arnd Bergmann, Masahiro Yamada, Nathan Chancellor,
Nicolas Schier, Nick Desaulniers, Catalin Marinas
On Wed, 23 Jul 2025 13:31:52 -0700
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> On Wed, 23 Jul 2025 at 12:42, Steven Rostedt <rostedt@kernel.org> wrote:
> >
> > kernel/trace/Kconfig | 19 +++
>
> Annoying "one step forward, two steps back" change.
>
> You literally sent a "remove one pointless Kconfig option" patch
> within ten minutes of sending out this "add two pointless Kconfig
> options" patch.
You mean to also get rid of the TRACEPOINT_VERIFY_USED config?
>
> Because as long as it's a build-time thing, please just fix the
> problems it points out, and it should have no real cost to being
> enabled.
>
> We don't want to ask people questions that don't matter.
>
> Of course, because you *used* to check this at run-time, you put the
> new "__tracepoint_check" table in a section that is actually loaded
> into memory, and it appears to be still there.
>
> Just put it in the discard section, the same way we have (for example)
> the export_symbols table that we also check at build-time without
> actually ever making it part of the kernel.
Ah, I wasn't sure how to do that. So you are saying to do:
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 600d8b51e315..365c92e6d0ce 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -1048,6 +1047,7 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
*(.modinfo) \
/* ld.bfd warns about .gnu.version* even when not emitted */ \
*(.gnu.version*) \
+ *(.__tracepoint_check)
#define DISCARDS \
/DISCARD/ : { \
But this will require that I change how it's done, as it doesn't look like
it's available when sorttable.c is used.
Looks like it will require a separate application to search the vmlinux.o
instead of the vmlinux (which sorttable does).
I could probably take some of the sorttable.c elf parsing and move that
into a header that would share the code.
-- Steve
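To illustrate the discard idea from the quoted diff, here is a minimal standalone sketch in GNU ld linker-script terms. This is an illustrative fragment only, not the kernel's actual vmlinux.lds.h, which assembles its /DISCARD/ list through the DISCARDS/COMMON_DISCARDS macros shown above:

```
SECTIONS
{
	/*
	 * Input sections matched under /DISCARD/ exist in the object
	 * files, where a build-time tool can still read them, but are
	 * dropped from the final image, so the check table has zero
	 * run-time cost.
	 */
	/DISCARD/ : {
		*(__tracepoint_check)
	}
}
```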
* Re: [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time
2025-07-23 22:27 ` Steven Rostedt
@ 2025-07-23 22:33 ` Linus Torvalds
2025-07-23 22:47 ` Steven Rostedt
2025-07-23 22:33 ` Steven Rostedt
1 sibling, 1 reply; 10+ messages in thread
From: Linus Torvalds @ 2025-07-23 22:33 UTC (permalink / raw)
To: Steven Rostedt
Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, linux-kbuild,
llvm, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
Andrew Morton, Arnd Bergmann, Masahiro Yamada, Nathan Chancellor,
Nicolas Schier, Nick Desaulniers, Catalin Marinas
On Wed, 23 Jul 2025 at 15:27, Steven Rostedt <rostedt@goodmis.org> wrote:
>
> You mean to also get rid of the TRACEPOINT_VERIFY_USED config?
Both of them, I think.
> But this will require that I change how it's done, as it doesn't look like
> it's available when sorttable.c is used.
>
> Looks like it will require a separate application to search the vmlinux.o
> instead of the vmlinux (which sorttable does).
>
> I could probably take some of the sorttable.c elf parsing and move that
> into a header that would share the code.
Hmm. Why not just make sorttable then use vmlinux.o?
No need to do it twice. Can't you just work on the original object
file before re-linking?
Linus
* Re: [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time
2025-07-23 22:27 ` Steven Rostedt
2025-07-23 22:33 ` Linus Torvalds
@ 2025-07-23 22:33 ` Steven Rostedt
1 sibling, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 22:33 UTC (permalink / raw)
To: Linus Torvalds
Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, linux-kbuild,
llvm, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
Andrew Morton, Arnd Bergmann, Masahiro Yamada, Nathan Chancellor,
Nicolas Schier, Nick Desaulniers, Catalin Marinas
On Wed, 23 Jul 2025 18:27:01 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:
> Looks like it will require a separate application to search the vmlinux.o
> instead of the vmlinux (which sorttable does).
>
> I could probably take some of the sorttable.c elf parsing and move that
> into a header that would share the code.
Anyway, I'll just drop these patches for now as I don't have time to
rewrite it. I'll at least put together the unused events that I've already
removed and send that out for the next merge window.
-- Steve
* Re: [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time
2025-07-23 22:33 ` Linus Torvalds
@ 2025-07-23 22:47 ` Steven Rostedt
0 siblings, 0 replies; 10+ messages in thread
From: Steven Rostedt @ 2025-07-23 22:47 UTC (permalink / raw)
To: Linus Torvalds
Cc: Steven Rostedt, linux-kernel, linux-trace-kernel, linux-kbuild,
llvm, Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers,
Andrew Morton, Arnd Bergmann, Masahiro Yamada, Nathan Chancellor,
Nicolas Schier, Nick Desaulniers, Catalin Marinas
On Wed, 23 Jul 2025 15:33:16 -0700
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > I could probably take some of the sorttable.c elf parsing and move that
> > into a header that would share the code.
>
> Hmm. Why not just make sorttable then use vmlinux.o?
>
> No need to do it twice. Can't you just work on the original object
> file before re-linking?
I believe the sorttable code needs vmlinux to be fully linked;
otherwise it would need to handle all the relocations in vmlinux.o:
$ readelf -r vmlinux
There are no relocations in this file.
Compared to:
$ readelf -r vmlinux.o| wc -l
22449337
But we are looking at moving the fix-ups of the event format files from
boot time to build time. That is, currently at boot, the enum strings
are converted to their numbers, and some expanded tags from PAHOLE_TAG
are removed from the format files so they don't confuse the parsers.
sorttable.c should be just that: code to sort Linux tables. I think
adding the tracepoint check to it was a bit of a hack. Perhaps the
tracepoint work should have its own program that focuses on these
changes. It could then find unused tracepoints and fix up the format
strings. I'm not sure how badly unresolved relocations would affect
that, though.
-- Steve
end of thread [~2025-07-23 22:47 UTC]
Thread overview: 10+ messages
2025-07-23 19:41 [PATCH v4 0/4] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 1/4] tracing: sorttable: Add a tracepoint verification check at build time Steven Rostedt
2025-07-23 20:31 ` Linus Torvalds
2025-07-23 22:27 ` Steven Rostedt
2025-07-23 22:33 ` Linus Torvalds
2025-07-23 22:47 ` Steven Rostedt
2025-07-23 22:33 ` Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 2/4] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 3/4] tracepoint: Do not warn for unused event that is exported Steven Rostedt
2025-07-23 19:41 ` [PATCH v4 4/4] tracing: Call trace_ftrace_test_filter() for the event Steven Rostedt