linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events
@ 2025-06-12 23:58 Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used Steven Rostedt
                   ` (7 more replies)
  0 siblings, 8 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-12 23:58 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton


Every trace event can take up to 5K of memory in text and meta data regardless
of whether it is used or not. Trace events should not be created if they are
never used. Currently there are over a hundred events in the kernel that are
defined but unused, either because their callers were removed without removing
the trace event with them, or because a config hides the trace event caller but
not the trace event itself. And in some cases, trace events were simply added
but were never called for whatever reason. The number of unused trace events
continues to grow.

This patch series aims to fix this.

The first patch creates a new section called __tracepoint_check, where every
caller of a tracepoint creates a variable that is placed in this section,
holding a pointer to the tracepoint it uses. Then on boot up, this section is
iterated and each referenced tracepoint's "funcs" field is set to a value of 1
(all tracepoints' "funcs" fields are initialized to NULL and are only set when
the tracepoint is registered). This takes place before any tracepoint can be
registered.

Then each tracepoint is iterated over, and if any tracepoint does not have its
"funcs" field set to 1, a warning is triggered and every tracepoint that
doesn't have that field set is printed. The "funcs" field is then reset back
to NULL.

The second patch modifies scripts/sorttable.c to read the __tracepoint_check
section. It sorts it, and then reads the __tracepoints_ptrs section that holds
all compiled-in tracepoints. It makes sure that every tracepoint is found in
the check section and prints a warning message for any that is not. This lists
the missing tracepoints at build time.

The third patch updates sorttable to work for arm64 when compiled with gcc,
as gcc's arm64 build doesn't place the addresses in the section itself but
saves them in the RELA sections. This mostly reuses the work that was done to
handle the mcount sorting on arm64.

The fourth patch adds EXPORT_TRACEPOINT() entries to the __tracepoint_check
section as well. There were several locations that add tracepoints in the
kernel proper that are only used in modules. It was getting quite complex
trying to move things around, so I just decided to consider any tracepoint in
an EXPORT_TRACEPOINT() "used". I'm using the analogy of static and global
functions: an unused static function gets a warning but an unused global one
does not.

The last patch updates the trace_ftrace_test_filter boot up self test. That
selftest creates a trace event to run a bunch of filter tests on without
actually calling the tracepoint. To quiet the warning, the selftest tracepoint
is now called within an if (!trace_<event>_enabled()) block, where it will
neither be optimized out nor actually called.

This is v2 from: https://lore.kernel.org/linux-trace-kernel/20250529130138.544ffec4@gandalf.local.home/
which was simply the first patch. This version adds the other patches.

Steven Rostedt (5):
      tracepoints: Add verifier that makes sure all defined tracepoints are used
      tracing: sorttable: Add a tracepoint verification check at build time
      tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
      tracepoint: Do not warn for unused event that is exported
      tracing: Call trace_ftrace_test_filter() for the event

----
 include/asm-generic/vmlinux.lds.h  |   1 +
 include/linux/tracepoint.h         |  13 ++
 kernel/trace/Kconfig               |  31 +++
 kernel/trace/trace_events_filter.c |   4 +
 kernel/tracepoint.c                |  26 +++
 scripts/Makefile                   |   4 +
 scripts/sorttable.c                | 444 ++++++++++++++++++++++++++++++-------
 7 files changed, 437 insertions(+), 86 deletions(-)

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
@ 2025-06-12 23:58 ` Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 2/5] tracing: sorttable: Add a tracepoint verification check at build time Steven Rostedt
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-12 23:58 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

If a tracepoint is defined via DECLARE_TRACE() or TRACE_EVENT() but never
called (via the trace_<tracepoint>() function), its meta data is still
around in memory and not discarded.

When created via TRACE_EVENT() the situation is worse, because TRACE_EVENT()
creates meta data that can be around 5k per trace event. Each unused trace
event therefore wastes several thousand bytes.

Add a verifier where every function that calls a tracepoint injects a pointer
to the tracepoint structure into a section called __tracepoint_check.
Then on boot up, iterate over this section and for every tracepoint
descriptor that is pointed to, update its ".funcs" field to (void *)1; the
.funcs field is only set when a tracepoint is registered, and at this
time no tracepoints should be registered yet.

Then iterate over all tracepoints and, if any tracepoint doesn't have its
.funcs field set to (void *)1, trigger a warning and list all tracepoints
that were not found.

Enabling this currently with a given config produces:

 Tracepoint x86_fpu_before_restore unused
 Tracepoint x86_fpu_after_restore unused
 Tracepoint x86_fpu_init_state unused
 Tracepoint pelt_hw_tp unused
 Tracepoint pelt_irq_tp unused
 Tracepoint ipi_raise unused
 Tracepoint ipi_entry unused
 Tracepoint ipi_exit unused
 Tracepoint irq_matrix_alloc_reserved unused
 Tracepoint psci_domain_idle_enter unused
 Tracepoint psci_domain_idle_exit unused
 Tracepoint powernv_throttle unused
 Tracepoint clock_enable unused
 Tracepoint clock_disable unused
 Tracepoint clock_set_rate unused
 Tracepoint power_domain_target unused
 Tracepoint xdp_bulk_tx unused
 Tracepoint xdp_redirect_map unused
 Tracepoint xdp_redirect_map_err unused
 Tracepoint mem_return_failed unused
 Tracepoint vma_mas_szero unused
 Tracepoint vma_store unused
 Tracepoint hugepage_set_pmd unused
 Tracepoint hugepage_set_pud unused
 Tracepoint hugepage_update_pmd unused
 Tracepoint hugepage_update_pud unused
 Tracepoint dax_pmd_insert_mapping unused
 Tracepoint dax_insert_mapping unused
 Tracepoint block_rq_remap unused
 Tracepoint xhci_dbc_handle_event unused
 Tracepoint xhci_dbc_handle_transfer unused
 Tracepoint xhci_dbc_gadget_ep_queue unused
 Tracepoint xhci_dbc_alloc_request unused
 Tracepoint xhci_dbc_free_request unused
 Tracepoint xhci_dbc_queue_request unused
 Tracepoint xhci_dbc_giveback_request unused
 Tracepoint tcp_ao_wrong_maclen unused
 Tracepoint tcp_ao_mismatch unused
 Tracepoint tcp_ao_key_not_found unused
 Tracepoint tcp_ao_rnext_request unused
 Tracepoint tcp_ao_synack_no_key unused
 Tracepoint tcp_ao_snd_sne_update unused
 Tracepoint tcp_ao_rcv_sne_update unused

Some of the above are totally unused, while others are unused only because
their "trace_" calls are inside configs; in those cases, the defined
tracepoints should also be inside the same configs. Others are
architecture specific but defined in generic code, where they should
either be moved to the architecture or be surrounded by #ifdef for the
architectures they are for.

Note, currently this only handles tracepoints that are builtin. This can
easily be extended to verify tracepoints used by modules, but it requires a
slightly different approach as it needs updates to the module code.

Link: https://lore.kernel.org/all/20250528114549.4d8a5e03@gandalf.local.home/

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
Changes since v1: https://lore.kernel.org/20250529130138.544ffec4@gandalf.local.home

- Separate the config that does the runtime warning from the
  sections added to the calls to tracepoints so that it can
  be used for build time warnings.

 include/asm-generic/vmlinux.lds.h |  1 +
 include/linux/tracepoint.h        | 10 ++++++++++
 kernel/trace/Kconfig              | 19 +++++++++++++++++++
 kernel/tracepoint.c               | 26 ++++++++++++++++++++++++++
 4 files changed, 56 insertions(+)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index fa5f19b8d53a..600d8b51e315 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -708,6 +708,7 @@ defined(CONFIG_AUTOFDO_CLANG) || defined(CONFIG_PROPELLER_CLANG)
 	MCOUNT_REC()							\
 	*(.init.rodata .init.rodata.*)					\
 	FTRACE_EVENTS()							\
+	BOUNDED_SECTION_BY(__tracepoint_check, ___tracepoint_check)	\
 	TRACE_SYSCALLS()						\
 	KPROBE_BLACKLIST()						\
 	ERROR_INJECT_WHITELIST()					\
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 826ce3f8e1f8..2b96c7e94c52 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -221,6 +221,14 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 		__do_trace_##name(args);				\
 	}
 
+#ifdef CONFIG_TRACEPOINT_VERIFY_USED
+# define TRACEPOINT_CHECK(name)						\
+	static void __used __section("__tracepoint_check") *__trace_check = \
+		&__tracepoint_##name;
+#else
+# define TRACEPOINT_CHECK(name)
+#endif
+
 /*
  * Make sure the alignment of the structure in the __tracepoints section will
  * not add unwanted padding between the beginning of the section and the
@@ -270,6 +278,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
 	static inline void __do_trace_##name(proto)			\
 	{								\
+		TRACEPOINT_CHECK(name)					\
 		if (cond) {						\
 			guard(preempt_notrace)();			\
 			__DO_TRACE_CALL(name, TP_ARGS(args));		\
@@ -289,6 +298,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
 	static inline void __do_trace_##name(proto)			\
 	{								\
+		TRACEPOINT_CHECK(name)					\
 		guard(rcu_tasks_trace)();				\
 		__DO_TRACE_CALL(name, TP_ARGS(args));			\
 	}								\
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index a3f35c7d83b6..e676b802b721 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -1044,6 +1044,25 @@ config GCOV_PROFILE_FTRACE
 	  Note that on a kernel compiled with this config, ftrace will
 	  run significantly slower.
 
+config TRACEPOINT_VERIFY_USED
+	bool
+	help
+	  This option creates a section when tracepoints are used
+	  that holds a pointer to the tracepoint that is used.
+	  This can be used to test whether a defined tracepoint is
+	  used or not.
+
+config TRACEPOINT_WARN_ON_UNUSED
+	bool "Warn if any tracepoint is defined but not used"
+	depends on TRACEPOINTS
+	select TRACEPOINT_VERIFY_USED
+	help
+	  This option checks if every builtin defined tracepoint is
+	  used in the code. If a tracepoint is defined but not used,
+	  it will waste memory as its meta data is still created.
+	  A warning will be triggered at boot up if any tracepoint
+	  is found to be unused.
+
 config FTRACE_SELFTEST
 	bool
 
diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index 62719d2941c9..7701a6fed310 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -677,10 +677,36 @@ static struct notifier_block tracepoint_module_nb = {
 	.priority = 0,
 };
 
+#ifdef CONFIG_TRACEPOINT_WARN_ON_UNUSED
+extern void * __start___tracepoint_check[];
+extern void * __stop___tracepoint_check[];
+
+#define VERIFIED_TRACEPOINT	((void *)1)
+
+static void check_tracepoint(struct tracepoint *tp, void *priv)
+{
+	if (WARN_ONCE(tp->funcs != VERIFIED_TRACEPOINT, "Unused tracepoints found"))
+		pr_warn("Tracepoint %s unused\n", tp->name);
+
+	tp->funcs = NULL;
+}
+#endif
+
 static __init int init_tracepoints(void)
 {
 	int ret;
 
+#ifdef CONFIG_TRACEPOINT_WARN_ON_UNUSED
+	for (void **ptr = __start___tracepoint_check;
+	     ptr < __stop___tracepoint_check; ptr++) {
+		struct tracepoint *tp = *ptr;
+
+		tp->funcs = VERIFIED_TRACEPOINT;
+	}
+
+	for_each_kernel_tracepoint(check_tracepoint, NULL);
+#endif
+
 	ret = register_module_notifier(&tracepoint_module_nb);
 	if (ret)
 		pr_warn("Failed to register tracepoint module enter notifier\n");
-- 
2.47.2




* [PATCH v2 2/5] tracing: sorttable: Add a tracepoint verification check at build time
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used Steven Rostedt
@ 2025-06-12 23:58 ` Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 3/5] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address Steven Rostedt
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-12 23:58 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

Update the sorttable code to check the __tracepoint_check and
__tracepoints_ptrs sections to see which trace events have been created but
are never used. Trace events can take up approximately 5K of memory each
regardless of whether they are called or not.

List the tracepoints that are not used at build time. Note, this currently
only handles tracepoints that are builtin and not in modules.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/Kconfig |  12 ++
 scripts/Makefile     |   4 +
 scripts/sorttable.c  | 268 +++++++++++++++++++++++++++++++++++++++----
 3 files changed, 261 insertions(+), 23 deletions(-)

diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index e676b802b721..6c28b06c9231 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -1063,6 +1063,18 @@ config TRACEPOINT_WARN_ON_UNUSED
 	  A warning will be triggered if a tracepoint is found and
 	  not used at bootup.
 
+config TRACEPOINT_WARN_ON_UNUSED_BUILD
+	bool "Warn on build if a tracepoint is defined but not used"
+	depends on TRACEPOINTS
+	select TRACEPOINT_VERIFY_USED
+	default y
+	help
+	  This option checks if every builtin defined tracepoint is
+	  used in the code. If a tracepoint is defined but not used,
+	  it will waste memory as its meta data is still created.
+	  This will cause a warning at build time if the architecture
+	  supports it.
+
 config FTRACE_SELFTEST
 	bool
 
diff --git a/scripts/Makefile b/scripts/Makefile
index 46f860529df5..f81947ec9486 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -42,6 +42,10 @@ HOSTCFLAGS_sorttable.o += -I$(srctree)/tools/arch/$(SRCARCH)/include
 HOSTCFLAGS_sorttable.o += -DUNWINDER_ORC_ENABLED
 endif
 
+ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
+HOSTCFLAGS_sorttable.o += -DPREL32_RELOCATIONS
+endif
+
 ifdef CONFIG_BUILDTIME_MCOUNT_SORT
 HOSTCFLAGS_sorttable.o += -DMCOUNT_SORT_ENABLED
 endif
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index deed676bfe38..ddcbec22ca96 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -92,6 +92,12 @@ static void (*w)(uint32_t, uint32_t *);
 static void (*w8)(uint64_t, uint64_t *);
 typedef void (*table_sort_t)(char *, int);
 
+static Elf_Shdr *init_data_sec;
+static Elf_Shdr *ro_data_sec;
+static Elf_Shdr *data_data_sec;
+
+static void *file_map_end;
+
 static struct elf_funcs {
 	int (*compare_extable)(const void *a, const void *b);
 	uint64_t (*ehdr_shoff)(Elf_Ehdr *ehdr);
@@ -550,8 +556,6 @@ static void *sort_orctable(void *arg)
 }
 #endif
 
-#ifdef MCOUNT_SORT_ENABLED
-
 static int compare_values_64(const void *a, const void *b)
 {
 	uint64_t av = *(uint64_t *)a;
@@ -574,6 +578,22 @@ static int compare_values_32(const void *a, const void *b)
 
 static int (*compare_values)(const void *a, const void *b);
 
+static int fill_addrs(void *ptr, uint64_t size, void *addrs)
+{
+	void *end = ptr + size;
+	int count = 0;
+
+	for (; ptr < end; ptr += long_size, addrs += long_size, count++) {
+		if (long_size == 4)
+			*(uint32_t *)ptr = r(addrs);
+		else
+			*(uint64_t *)ptr = r8(addrs);
+	}
+	return count;
+}
+
+#ifdef MCOUNT_SORT_ENABLED
+
 /* Only used for sorting mcount table */
 static void rela_write_addend(Elf_Rela *rela, uint64_t val)
 {
@@ -684,7 +704,6 @@ static char m_err[ERRSTR_MAXSZ];
 
 struct elf_mcount_loc {
 	Elf_Ehdr *ehdr;
-	Elf_Shdr *init_data_sec;
 	uint64_t start_mcount_loc;
 	uint64_t stop_mcount_loc;
 };
@@ -785,20 +804,6 @@ static void replace_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t st
 	}
 }
 
-static int fill_addrs(void *ptr, uint64_t size, void *addrs)
-{
-	void *end = ptr + size;
-	int count = 0;
-
-	for (; ptr < end; ptr += long_size, addrs += long_size, count++) {
-		if (long_size == 4)
-			*(uint32_t *)ptr = r(addrs);
-		else
-			*(uint64_t *)ptr = r8(addrs);
-	}
-	return count;
-}
-
 static void replace_addrs(void *ptr, uint64_t size, void *addrs)
 {
 	void *end = ptr + size;
@@ -815,8 +820,8 @@ static void replace_addrs(void *ptr, uint64_t size, void *addrs)
 static void *sort_mcount_loc(void *arg)
 {
 	struct elf_mcount_loc *emloc = (struct elf_mcount_loc *)arg;
-	uint64_t offset = emloc->start_mcount_loc - shdr_addr(emloc->init_data_sec)
-					+ shdr_offset(emloc->init_data_sec);
+	uint64_t offset = emloc->start_mcount_loc - shdr_addr(init_data_sec)
+					+ shdr_offset(init_data_sec);
 	uint64_t size = emloc->stop_mcount_loc - emloc->start_mcount_loc;
 	unsigned char *start_loc = (void *)emloc->ehdr + offset;
 	Elf_Ehdr *ehdr = emloc->ehdr;
@@ -920,6 +925,211 @@ static void get_mcount_loc(struct elf_mcount_loc *emloc, Elf_Shdr *symtab_sec,
 static inline int parse_symbols(const char *fname) { return 0; }
 #endif
 
+struct elf_tracepoint {
+	Elf_Ehdr *ehdr;
+	uint64_t start_tracepoint_check;
+	uint64_t stop_tracepoint_check;
+	uint64_t start_tracepoint;
+	uint64_t stop_tracepoint;
+	uint64_t *array;
+	int count;
+};
+
+static void make_trace_array(struct elf_tracepoint *etrace)
+{
+	uint64_t offset = etrace->start_tracepoint_check - shdr_addr(init_data_sec)
+					+ shdr_offset(init_data_sec);
+	uint64_t size = etrace->stop_tracepoint_check - etrace->start_tracepoint_check;
+	Elf_Ehdr *ehdr = etrace->ehdr;
+	void *start = (void *)ehdr + offset;
+	int count = 0;
+	void *vals;
+
+	etrace->array = NULL;
+
+	/* If CONFIG_TRACEPOINT_VERIFY_USED is not set, there's nothing to do */
+	if (!size)
+		return;
+
+	vals = malloc(long_size * size);
+	if (!vals) {
+		fprintf(stderr, "Failed to allocate tracepoint check array");
+		return;
+	}
+
+	count = fill_addrs(vals, size, start);
+
+	compare_values = long_size == 4 ? compare_values_32 : compare_values_64;
+	qsort(vals, count, long_size, compare_values);
+
+	etrace->array = vals;
+	etrace->count = count;
+}
+
+static int cmp_addr_64(const void *K, const void *A)
+{
+	uint64_t key = *(const uint64_t *)K;
+	const uint64_t *a = A;
+
+	if (key < *a)
+		return -1;
+	return key > *a;
+}
+
+static int cmp_addr_32(const void *K, const void *A)
+{
+	uint32_t key = *(const uint32_t *)K;
+	const uint32_t *a = A;
+
+	if (key < *a)
+		return -1;
+	return key > *a;
+}
+
+static int find_event(void *array, size_t size, uint64_t key)
+{
+	uint32_t val_32;
+	uint64_t val_64;
+	void *val;
+	int (*cmp_func)(const void *A, const void *B);
+
+	if (long_size == 4) {
+		val_32 = key;
+		val = &val_32;
+		cmp_func = cmp_addr_32;
+	} else {
+		val_64 = key;
+		val = &val_64;
+		cmp_func = cmp_addr_64;
+	}
+	return bsearch(val, array, size, long_size, cmp_func) != NULL;
+}
+
+static int failed_event(struct elf_tracepoint *etrace, uint64_t addr)
+{
+	uint64_t sec_addr = shdr_addr(data_data_sec);
+	uint64_t sec_offset = shdr_offset(data_data_sec);
+	uint64_t offset = addr - sec_addr + sec_offset;
+	Elf_Ehdr *ehdr = etrace->ehdr;
+	void *name_ptr = (void *)ehdr + offset;
+	char *name;
+
+	if (name_ptr > file_map_end)
+		goto bad_addr;
+
+	if (long_size == 4)
+		addr = r(name_ptr);
+	else
+		addr = r8(name_ptr);
+
+	sec_addr = shdr_addr(ro_data_sec);
+	sec_offset = shdr_offset(ro_data_sec);
+	offset = addr - sec_addr + sec_offset;
+	name = (char *)ehdr + offset;
+	if ((void *)name > file_map_end)
+		goto bad_addr;
+
+	fprintf(stderr, "warning: tracepoint '%s' is unused.\n", name);
+	return 0;
+bad_addr:
+	fprintf(stderr, "warning: Failed to verify unused trace events.\n");
+	return -1;
+}
+
+static void check_tracepoints(struct elf_tracepoint *etrace)
+{
+	uint64_t sec_addr = shdr_addr(ro_data_sec);
+	uint64_t sec_offset = shdr_offset(ro_data_sec);
+	uint64_t offset = etrace->start_tracepoint - sec_addr + sec_offset;
+	uint64_t size = etrace->stop_tracepoint - etrace->start_tracepoint;
+	Elf_Ehdr *ehdr = etrace->ehdr;
+	void *start = (void *)ehdr + offset;
+	void *end = start + size;
+	void *addrs;
+	int inc = long_size;
+
+	if (!etrace->array)
+		return;
+
+	if (!size)
+		return;
+
+#ifdef PREL32_RELOCATIONS
+	inc = 4;
+#endif
+
+	sec_offset = sec_offset + (uint64_t)ehdr;
+	for (addrs = start; addrs < end; addrs += inc) {
+		uint64_t val;
+
+#ifdef PREL32_RELOCATIONS
+		val = r(addrs);
+		val += sec_addr + ((uint64_t)addrs - sec_offset);
+#else
+		val = long_size == 4 ? r(addrs) : r8(addrs);
+#endif
+		if (!find_event(etrace->array, etrace->count, val)) {
+			if (failed_event(etrace, val))
+				return;
+		}
+	}
+	free(etrace->array);
+}
+
+static void *tracepoint_check(struct elf_tracepoint *etrace, Elf_Shdr *symtab_sec,
+			      const char *strtab)
+{
+	Elf_Sym *sym, *end_sym;
+	int symentsize = shdr_entsize(symtab_sec);
+	int found = 0;
+
+	sym = (void *)etrace->ehdr + shdr_offset(symtab_sec);
+	end_sym = (void *)sym + shdr_size(symtab_sec);
+
+	while (sym < end_sym) {
+		if (!strcmp(strtab + sym_name(sym), "__start___tracepoint_check")) {
+			etrace->start_tracepoint_check = sym_value(sym);
+			if (++found == 4)
+				break;
+		} else if (!strcmp(strtab + sym_name(sym), "__stop___tracepoint_check")) {
+			etrace->stop_tracepoint_check = sym_value(sym);
+			if (++found == 4)
+				break;
+		} else if (!strcmp(strtab + sym_name(sym), "__start___tracepoints_ptrs")) {
+			etrace->start_tracepoint = sym_value(sym);
+			if (++found == 4)
+				break;
+		} else if (!strcmp(strtab + sym_name(sym), "__stop___tracepoints_ptrs")) {
+			etrace->stop_tracepoint = sym_value(sym);
+			if (++found == 4)
+				break;
+		}
+		sym = (void *)sym + symentsize;
+	}
+
+	if (!etrace->start_tracepoint_check) {
+		fprintf(stderr, "warning: get start_tracepoint_check error!\n");
+		return NULL;
+	}
+	if (!etrace->stop_tracepoint_check) {
+		fprintf(stderr, "warning: get stop_tracepoint_check error!\n");
+		return NULL;
+	}
+	if (!etrace->start_tracepoint) {
+		fprintf(stderr, "warning: get start_tracepoint error!\n");
+		return NULL;
+	}
+	if (!etrace->stop_tracepoint) {
+		fprintf(stderr, "warning: get stop_tracepoint error!\n");
+		return NULL;
+	}
+
+	make_trace_array(etrace);
+	check_tracepoints(etrace);
+
+	return NULL;
+}
+
 static int do_sort(Elf_Ehdr *ehdr,
 		   char const *const fname,
 		   table_sort_t custom_sort)
@@ -948,6 +1158,7 @@ static int do_sort(Elf_Ehdr *ehdr,
 	int i;
 	unsigned int shnum;
 	unsigned int shstrndx;
+	struct elf_tracepoint tstruct = {0};
 #ifdef MCOUNT_SORT_ENABLED
 	struct elf_mcount_loc mstruct = {0};
 #endif
@@ -985,11 +1196,17 @@ static int do_sort(Elf_Ehdr *ehdr,
 			symtab_shndx = (Elf32_Word *)((const char *)ehdr +
 						      shdr_offset(shdr));
 
-#ifdef MCOUNT_SORT_ENABLED
 		/* locate the .init.data section in vmlinux */
 		if (!strcmp(secstrings + idx, ".init.data"))
-			mstruct.init_data_sec = shdr;
-#endif
+			init_data_sec = shdr;
+
+		/* locate the .rodata section in vmlinux */
+		if (!strcmp(secstrings + idx, ".rodata"))
+			ro_data_sec = shdr;
+
+		/* locate the .data section in vmlinux */
+		if (!strcmp(secstrings + idx, ".data"))
+			data_data_sec = shdr;
 
 #ifdef UNWINDER_ORC_ENABLED
 		/* locate the ORC unwind tables */
@@ -1055,7 +1272,7 @@ static int do_sort(Elf_Ehdr *ehdr,
 	mstruct.ehdr = ehdr;
 	get_mcount_loc(&mstruct, symtab_sec, strtab);
 
-	if (!mstruct.init_data_sec || !mstruct.start_mcount_loc || !mstruct.stop_mcount_loc) {
+	if (!init_data_sec || !mstruct.start_mcount_loc || !mstruct.stop_mcount_loc) {
 		fprintf(stderr,
 			"incomplete mcount's sort in file: %s\n",
 			fname);
@@ -1071,6 +1288,9 @@ static int do_sort(Elf_Ehdr *ehdr,
 	}
 #endif
 
+	tstruct.ehdr = ehdr;
+	tracepoint_check(&tstruct, symtab_sec, strtab);
+
 	if (custom_sort) {
 		custom_sort(extab_image, shdr_size(extab_sec));
 	} else {
@@ -1404,6 +1624,8 @@ int main(int argc, char *argv[])
 			continue;
 		}
 
+		file_map_end = addr + size;
+
 		if (do_file(argv[i], addr))
 			++n_error;
 
-- 
2.47.2




* [PATCH v2 3/5] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 1/5] tracepoints: Add verifier that makes sure all defined tracepoints are used Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 2/5] tracing: sorttable: Add a tracepoint verification check at build time Steven Rostedt
@ 2025-06-12 23:58 ` Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 4/5] tracepoint: Do not warn for unused event that is exported Steven Rostedt
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-12 23:58 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The addresses in the arm64 ELF file are stored in the RELA sections, similar
to the mcount location table. Add support for reading the addresses from the
RELA sections, so that the actual addresses can be recovered both for the
tracepoints being checked and for the check variables that show whether all
tracepoints are used.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 scripts/sorttable.c | 186 ++++++++++++++++++++++++++++----------------
 1 file changed, 118 insertions(+), 68 deletions(-)

diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index ddcbec22ca96..edca5b06d8ce 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -578,6 +578,106 @@ static int compare_values_32(const void *a, const void *b)
 
 static int (*compare_values)(const void *a, const void *b);
 
+static char m_err[ERRSTR_MAXSZ];
+static long rela_type;
+static bool sort_reloc;
+static int reloc_shnum;
+
+/* Fill the array with the content of the relocs */
+static int fill_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
+{
+	Elf_Shdr *shdr_start;
+	Elf_Rela *rel;
+	unsigned int shnum;
+	unsigned int count = 0;
+	int shentsize;
+	void *array_end = ptr + size;
+
+	shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
+	shentsize = ehdr_shentsize(ehdr);
+
+	shnum = ehdr_shnum(ehdr);
+	if (shnum == SHN_UNDEF)
+		shnum = shdr_size(shdr_start);
+
+	for (int i = 0; i < shnum; i++) {
+		Elf_Shdr *shdr = get_index(shdr_start, shentsize, i);
+		void *end;
+
+		if (shdr_type(shdr) != SHT_RELA)
+			continue;
+
+		reloc_shnum = i;
+
+		rel = (void *)ehdr + shdr_offset(shdr);
+		end = (void *)rel + shdr_size(shdr);
+
+		for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
+			uint64_t offset = rela_offset(rel);
+
+			if (offset >= start_loc && offset < start_loc + size) {
+				if (ptr + long_size > array_end) {
+					snprintf(m_err, ERRSTR_MAXSZ,
+						 "Too many relocations");
+					return -1;
+				}
+
+				/* Make sure this has the correct type */
+				if (rela_info(rel) != rela_type) {
+					snprintf(m_err, ERRSTR_MAXSZ,
+						"rela has type %lx but expected %lx\n",
+						(long)rela_info(rel), rela_type);
+					return -1;
+				}
+
+				if (long_size == 4)
+					*(uint32_t *)ptr = rela_addend(rel);
+				else
+					*(uint64_t *)ptr = rela_addend(rel);
+				ptr += long_size;
+				count++;
+			}
+		}
+	}
+	return count;
+}
+
+static uint64_t get_addr_reloc(Elf_Ehdr *ehdr, uint64_t addr)
+{
+	Elf_Shdr *shdr_start;
+	Elf_Shdr *shdr;
+	Elf_Rela *rel;
+	unsigned int shnum;
+	int shentsize;
+	void *end;
+
+	shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
+	shentsize = ehdr_shentsize(ehdr);
+
+	shnum = ehdr_shnum(ehdr);
+	if (shnum == SHN_UNDEF)
+		shnum = shdr_size(shdr_start);
+
+	shdr = get_index(shdr_start, shentsize, reloc_shnum);
+	if (shdr_type(shdr) != SHT_RELA)
+		return 0;
+
+	rel = (void *)ehdr + shdr_offset(shdr);
+	end = (void *)rel + shdr_size(shdr);
+
+	for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
+		uint64_t offset = rela_offset(rel);
+
+		if (offset == addr) {
+			if (long_size == 4)
+				return rela_addend(rel);
+			else
+				return rela_addend(rel);
+		}
+	}
+	return 0;
+}
+
 static int fill_addrs(void *ptr, uint64_t size, void *addrs)
 {
 	void *end = ptr + size;
@@ -696,11 +796,6 @@ static int parse_symbols(const char *fname)
 }
 
 static pthread_t mcount_sort_thread;
-static bool sort_reloc;
-
-static long rela_type;
-
-static char m_err[ERRSTR_MAXSZ];
 
 struct elf_mcount_loc {
 	Elf_Ehdr *ehdr;
@@ -708,63 +803,6 @@ struct elf_mcount_loc {
 	uint64_t stop_mcount_loc;
 };
 
-/* Fill the array with the content of the relocs */
-static int fill_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
-{
-	Elf_Shdr *shdr_start;
-	Elf_Rela *rel;
-	unsigned int shnum;
-	unsigned int count = 0;
-	int shentsize;
-	void *array_end = ptr + size;
-
-	shdr_start = (Elf_Shdr *)((char *)ehdr + ehdr_shoff(ehdr));
-	shentsize = ehdr_shentsize(ehdr);
-
-	shnum = ehdr_shnum(ehdr);
-	if (shnum == SHN_UNDEF)
-		shnum = shdr_size(shdr_start);
-
-	for (int i = 0; i < shnum; i++) {
-		Elf_Shdr *shdr = get_index(shdr_start, shentsize, i);
-		void *end;
-
-		if (shdr_type(shdr) != SHT_RELA)
-			continue;
-
-		rel = (void *)ehdr + shdr_offset(shdr);
-		end = (void *)rel + shdr_size(shdr);
-
-		for (; (void *)rel < end; rel = (void *)rel + shdr_entsize(shdr)) {
-			uint64_t offset = rela_offset(rel);
-
-			if (offset >= start_loc && offset < start_loc + size) {
-				if (ptr + long_size > array_end) {
-					snprintf(m_err, ERRSTR_MAXSZ,
-						 "Too many relocations");
-					return -1;
-				}
-
-				/* Make sure this has the correct type */
-				if (rela_info(rel) != rela_type) {
-					snprintf(m_err, ERRSTR_MAXSZ,
-						"rela has type %lx but expected %lx\n",
-						(long)rela_info(rel), rela_type);
-					return -1;
-				}
-
-				if (long_size == 4)
-					*(uint32_t *)ptr = rela_addend(rel);
-				else
-					*(uint64_t *)ptr = rela_addend(rel);
-				ptr += long_size;
-				count++;
-			}
-		}
-	}
-	return count;
-}
-
 /* Put the sorted vals back into the relocation elements */
 static void replace_relocs(void *ptr, uint64_t size, Elf_Ehdr *ehdr, uint64_t start_loc)
 {
@@ -957,7 +995,15 @@ static void make_trace_array(struct elf_tracepoint *etrace)
 		return;
 	}
 
-	count = fill_addrs(vals, size, start);
+	if (sort_reloc) {
+		count = fill_relocs(vals, size, ehdr, etrace->start_tracepoint_check);
+		/* gcc may use relocs to save the addresses, but clang does not. */
+		if (!count) {
+			count = fill_addrs(vals, size, start);
+			sort_reloc = 0;
+		}
+	} else
+		count = fill_addrs(vals, size, start);
 
 	compare_values = long_size == 4 ? compare_values_32 : compare_values_64;
 	qsort(vals, count, long_size, compare_values);
@@ -1017,10 +1063,14 @@ static int failed_event(struct elf_tracepoint *etrace, uint64_t addr)
 	if (name_ptr > file_map_end)
 		goto bad_addr;
 
-	if (long_size == 4)
-		addr = r(name_ptr);
-	else
-		addr = r8(name_ptr);
+	if (sort_reloc) {
+		addr = get_addr_reloc(ehdr, addr);
+	} else {
+		if (long_size == 4)
+			addr = r(name_ptr);
+		else
+			addr = r8(name_ptr);
+	}
 
 	sec_addr = shdr_addr(ro_data_sec);
 	sec_offset = shdr_offset(ro_data_sec);
@@ -1473,9 +1523,9 @@ static int do_file(char const *const fname, void *addr)
 
 	switch (r2(&ehdr->e32.e_machine)) {
 	case EM_AARCH64:
-#ifdef MCOUNT_SORT_ENABLED
 		sort_reloc = true;
 		rela_type = 0x403;
+#ifdef MCOUNT_SORT_ENABLED
 		/* arm64 uses patchable function entry placing before function */
 		before_func = 8;
 #endif
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 4/5] tracepoint: Do not warn for unused event that is exported
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
                   ` (2 preceding siblings ...)
  2025-06-12 23:58 ` [PATCH v2 3/5] tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address Steven Rostedt
@ 2025-06-12 23:58 ` Steven Rostedt
  2025-06-12 23:58 ` [PATCH v2 5/5] tracing: Call trace_ftrace_test_filter() for the event Steven Rostedt
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-12 23:58 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

There are a few generic events that may only be used by modules. They are
defined and then exported with EXPORT_TRACEPOINT*(). Mark events that are
exported as being used, even though they still waste memory in the kernel
proper.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 include/linux/tracepoint.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 2b96c7e94c52..8026a0659580 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -223,7 +223,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 
 #ifdef CONFIG_TRACEPOINT_VERIFY_USED
 # define TRACEPOINT_CHECK(name)						\
-	static void __used __section("__tracepoint_check") *__trace_check = \
+	static void __used __section("__tracepoint_check") *		\
+	__trace_check_##name =						\
 		&__tracepoint_##name;
 #else
 # define TRACEPOINT_CHECK(name)
@@ -381,10 +382,12 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 	__DEFINE_TRACE_EXT(_name, NULL, PARAMS(_proto), PARAMS(_args));
 
 #define EXPORT_TRACEPOINT_SYMBOL_GPL(name)				\
+	TRACEPOINT_CHECK(name)						\
 	EXPORT_SYMBOL_GPL(__tracepoint_##name);				\
 	EXPORT_SYMBOL_GPL(__traceiter_##name);				\
 	EXPORT_STATIC_CALL_GPL(tp_func_##name)
 #define EXPORT_TRACEPOINT_SYMBOL(name)					\
+	TRACEPOINT_CHECK(name)						\
 	EXPORT_SYMBOL(__tracepoint_##name);				\
 	EXPORT_SYMBOL(__traceiter_##name);				\
 	EXPORT_STATIC_CALL(tp_func_##name)
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v2 5/5] tracing: Call trace_ftrace_test_filter() for the event
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
                   ` (3 preceding siblings ...)
  2025-06-12 23:58 ` [PATCH v2 4/5] tracepoint: Do not warn for unused event that is exported Steven Rostedt
@ 2025-06-12 23:58 ` Steven Rostedt
  2025-06-13  0:05 ` [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-12 23:58 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

From: Steven Rostedt <rostedt@goodmis.org>

The trace event filter bootup self test tests a bunch of filter logic
against the ftrace_test_filter event, but does not actually call the
event. Work is being done to cause a warning if an event is defined but
not used. To quiet the warning, call the trace event under an if statement
where it is disabled, so it doesn't get optimized out but is also never
executed.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/trace_events_filter.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index ea8b364b6818..79621f0fab05 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -2902,6 +2902,10 @@ static __init int ftrace_test_event_filter(void)
 	if (i == DATA_CNT)
 		printk(KERN_CONT "OK\n");
 
+	/* Need to call ftrace_test_filter to prevent a warning */
+	if (!trace_ftrace_test_filter_enabled())
+		trace_ftrace_test_filter(1, 2, 3, 4, 5, 6, 7, 8);
+
 	return 0;
 }
 
-- 
2.47.2



^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
                   ` (4 preceding siblings ...)
  2025-06-12 23:58 ` [PATCH v2 5/5] tracing: Call trace_ftrace_test_filter() for the event Steven Rostedt
@ 2025-06-13  0:05 ` Steven Rostedt
  2025-06-13  3:20 ` Randy Dunlap
  2025-06-13 14:28 ` Steven Rostedt
  7 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-13  0:05 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

On Thu, 12 Jun 2025 19:58:27 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> The third patch updates sorttable to work for arm64 when compiled with gcc. As
> gcc's arm64 build doesn't put addresses in their section but saves them off in
> the RELA sections. This mostly takes the work done that was needed to do the
> mcount sorting at boot up on arm64.

FYI, I built this on every architecture with its default config, and
every architecture that supports tracing reported unused events. Mostly
they were the same, but the output varied depending on the defconfig.

-- Steve

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
                   ` (5 preceding siblings ...)
  2025-06-13  0:05 ` [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
@ 2025-06-13  3:20 ` Randy Dunlap
  2025-06-13 12:07   ` Steven Rostedt
  2025-06-13 14:28 ` Steven Rostedt
  7 siblings, 1 reply; 11+ messages in thread
From: Randy Dunlap @ 2025-06-13  3:20 UTC (permalink / raw)
  To: Steven Rostedt, linux-arch, linux-kernel, linux-trace-kernel,
	linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

Hi,

On 6/12/25 4:58 PM, Steven Rostedt wrote:
> Every trace event can take up to 5K of memory in text and meta data regardless

s/meta data/metadata/ unless you are referring to meta's data.

s/meta data/metadata/ in patches 1 and 2 also.

> if they are used or not. Trace events should not be created if they are not
> used.  Currently there's over a hundred events in the kernel that are defined
> but unused, either because their callers were removed without removing the
> trace event with it, or a config hides the trace event caller but not the
> trace event itself. And in some cases, trace events were simply added but were
> never called for whatever reason. The number of unused trace events continues
> to grow.
> 
> This patch series aims to fix this.
> 
> The first patch creates a new section called __tracepoint_check, where all
> callers of a tracepoint creates a variable that is placed in this section with
> a pointer to the tracepoint they use. Then on boot up, it iterates this
> section and will modify the tracepoint's "func" field to a value of 1 (all
> tracepoints "func" fields are initialized to NULL and is only set when they
> are registered). This takes place before any tracepoint can be registered.
> 
> Then each tracepoint is iterated on and if any tracepoint does not have its
> "func" field set to 1 a warning is triggerd and every tracepoint that doesn't

triggered

> have that field set is printed. The "func" field is then reset back to NULL.
> 
> The second patch modifies scripts/sorttable.c to read the __tracepoint_check
> section. It sorts it, and then reads the __tracepoint_ptr section that has all
> compiled in tracepoints. It makes sure that every tracepoint is found in the
> check section and if not, it prints a warning message about it. This lists the
> missing tracepoints at build time.
> 
> The third patch updates sorttable to work for arm64 when compiled with gcc. As
> gcc's arm64 build doesn't put addresses in their section but saves them off in
> the RELA sections. This mostly takes the work done that was needed to do the
> mcount sorting at boot up on arm64.
> 
> The forth patch adds EXPORT_TRACEPOINT() to the __tracepoint_check section as

fourth (or are you coding in forth?)

> well. There was several locations that adds tracepoints in the kernel proper
> that are only used in modules. It was getting quite complex trying to move
> things around that I just decided to make any tracepoint in a
> EXPORT_TRACEPOINT "used". I'm using the analogy of static and global
> functions. An unused static function gets a warning but an unused global one
> does not.
> 
> The last patch updates the trace_ftrace_test_filter boot up self test. That
> selftest creates a trace event to run a bunch of filter tests on it without
> actually calling the tracepoint. To quiet the warning, the selftest tracepoint
> is called within a if (!trace_<event>_enabled()) section, where it will not be
> optimized out, nor will it be called.
> 
> This is v2 from: https://lore.kernel.org/linux-trace-kernel/20250529130138.544ffec4@gandalf.local.home/
> which was simply the first patch. This version adds the other patches.
> 
> Steven Rostedt (5):
>       tracepoints: Add verifier that makes sure all defined tracepoints are used
>       tracing: sorttable: Add a tracepoint verification check at build time
>       tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
>       tracepoint: Do not warn for unused event that is exported
>       tracing: Call trace_ftrace_test_filter() for the event
> 
> ----
>  include/asm-generic/vmlinux.lds.h  |   1 +
>  include/linux/tracepoint.h         |  13 ++
>  kernel/trace/Kconfig               |  31 +++
>  kernel/trace/trace_events_filter.c |   4 +
>  kernel/tracepoint.c                |  26 +++
>  scripts/Makefile                   |   4 +
>  scripts/sorttable.c                | 444 ++++++++++++++++++++++++++++++-------
>  7 files changed, 437 insertions(+), 86 deletions(-)
> 

thanks.
-- 
~Randy


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events
  2025-06-13  3:20 ` Randy Dunlap
@ 2025-06-13 12:07   ` Steven Rostedt
  0 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-13 12:07 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm,
	Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

On Thu, 12 Jun 2025 20:20:48 -0700
Randy Dunlap <rdunlap@infradead.org> wrote:

> Hi,
> 
> On 6/12/25 4:58 PM, Steven Rostedt wrote:
> > Every trace event can take up to 5K of memory in text and meta data regardless  
> 
> s/meta data/metadata/ unless you are referring to meta's data.

Oh so I need to give Meta royalties?

> 
> s/meta data/metadata/ in patches 1 and 2 also.

I'll update when I pull them in. Note, the cover letter isn't something
that I put into git. But I'll try to remember to update if I send a v3.

> 
> > if they are used or not. Trace events should not be created if they are not
> > used.  Currently there's over a hundred events in the kernel that are defined
> > but unused, either because their callers were removed without removing the
> > trace event with it, or a config hides the trace event caller but not the
> > trace event itself. And in some cases, trace events were simply added but were
> > never called for whatever reason. The number of unused trace events continues
> > to grow.
> > 
> > This patch series aims to fix this.
> > 
> > The first patch creates a new section called __tracepoint_check, where all
> > callers of a tracepoint creates a variable that is placed in this section with
> > a pointer to the tracepoint they use. Then on boot up, it iterates this
> > section and will modify the tracepoint's "func" field to a value of 1 (all
> > tracepoints "func" fields are initialized to NULL and is only set when they
> > are registered). This takes place before any tracepoint can be registered.
> > 
> > Then each tracepoint is iterated on and if any tracepoint does not have its
> > "func" field set to 1 a warning is triggerd and every tracepoint that doesn't  
> 
> triggered

Yes I am!

> 
> > have that field set is printed. The "func" field is then reset back to NULL.
> > 
> > The second patch modifies scripts/sorttable.c to read the __tracepoint_check
> > section. It sorts it, and then reads the __tracepoint_ptr section that has all
> > compiled in tracepoints. It makes sure that every tracepoint is found in the
> > check section and if not, it prints a warning message about it. This lists the
> > missing tracepoints at build time.
> > 
> > The third patch updates sorttable to work for arm64 when compiled with gcc. As
> > gcc's arm64 build doesn't put addresses in their section but saves them off in
> > the RELA sections. This mostly takes the work done that was needed to do the
> > mcount sorting at boot up on arm64.
> > 
> > The forth patch adds EXPORT_TRACEPOINT() to the __tracepoint_check section as  
> 
> fourth (or are you coding in forth?)

oops

-- Steve

> 
> > well. There was several locations that adds tracepoints in the kernel proper
> > that are only used in modules. It was getting quite complex trying to move
> > things around that I just decided to make any tracepoint in a
> > EXPORT_TRACEPOINT "used". I'm using the analogy of static and global
> > functions. An unused static function gets a warning but an unused global one
> > does not.
> > 
> > The last patch updates the trace_ftrace_test_filter boot up self test. That
> > selftest creates a trace event to run a bunch of filter tests on it without
> > actually calling the tracepoint. To quiet the warning, the selftest tracepoint
> > is called within a if (!trace_<event>_enabled()) section, where it will not be
> > optimized out, nor will it be called.
> > 
> > This is v2 from: https://lore.kernel.org/linux-trace-kernel/20250529130138.544ffec4@gandalf.local.home/
> > which was simply the first patch. This version adds the other patches.
> > 
> > Steven Rostedt (5):
> >       tracepoints: Add verifier that makes sure all defined tracepoints are used
> >       tracing: sorttable: Add a tracepoint verification check at build time
> >       tracing: sorttable: Find unused tracepoints for arm64 that uses reloc for address
> >       tracepoint: Do not warn for unused event that is exported
> >       tracing: Call trace_ftrace_test_filter() for the event
> > 
> > ----
> >  include/asm-generic/vmlinux.lds.h  |   1 +
> >  include/linux/tracepoint.h         |  13 ++
> >  kernel/trace/Kconfig               |  31 +++
> >  kernel/trace/trace_events_filter.c |   4 +
> >  kernel/tracepoint.c                |  26 +++
> >  scripts/Makefile                   |   4 +
> >  scripts/sorttable.c                | 444 ++++++++++++++++++++++++++++++-------
> >  7 files changed, 437 insertions(+), 86 deletions(-)
> >   
> 
> thanks.


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events
  2025-06-12 23:58 [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events Steven Rostedt
                   ` (6 preceding siblings ...)
  2025-06-13  3:20 ` Randy Dunlap
@ 2025-06-13 14:28 ` Steven Rostedt
  2025-06-13 14:42   ` Steven Rostedt
  7 siblings, 1 reply; 11+ messages in thread
From: Steven Rostedt @ 2025-06-13 14:28 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 3070 bytes --]

On Thu, 12 Jun 2025 19:58:27 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> Every trace event can take up to 5K of memory in text and meta data regardless
> if they are used or not. Trace events should not be created if they are not
> used.  Currently there's over a hundred events in the kernel that are defined
> but unused, either because their callers were removed without removing the
> trace event with it, or a config hides the trace event caller but not the
> trace event itself. And in some cases, trace events were simply added but were
> never called for whatever reason. The number of unused trace events continues
> to grow.

Now it's been a while since I looked at the actual sizes, so I decided
to see what they are again.

So I created a trace header with 10 events (attached file) that had this:

TRACE_EVENT(size_event_1,
        TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
        TP_ARGS(A, B),
        TP_STRUCT__entry(
                __field(        unsigned long,  Aa)
                __field(        unsigned long,  Ab)
                __field(        unsigned long,  Ba)
                __field(        unsigned long,  Bb)
        ),
        TP_fast_assign(
                __entry->Aa = A->a;
                __entry->Ab = A->b;
                __entry->Ba = B->a;
                __entry->Bb = B->b;
        ),
        TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
                __entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);

And I created 9 more by just renaming the event name (size_event_2, etc).

I also looked at how well DEFINE_EVENT() works (note: a TRACE_EVENT()
macro is just a DECLARE_EVENT_CLASS() followed by a DEFINE_EVENT() with
the same name as the class, so I could use the first TRACE_EVENT() as
both the class and the first event).

DEFINE_EVENT(size_event_1, size_event_2,
        TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
        TP_ARGS(A, B));

The module is simply:

echo '#include <linux/module.h>

#define CREATE_TRACE_POINTS
#include "size_events.h"

static __init int size_init(void)
{
        return 0;
}

static __exit void size_exit(void)
{
}

module_init(size_init);
module_exit(size_exit);

MODULE_AUTHOR("Steven Rostedt");
MODULE_DESCRIPTION("Test the size of trace event");
MODULE_LICENSE("GPL");' > event-mod.c

The results are (with each module renamed after the variant it was built with):

   text    data     bss     dec     hex filename
    629    1440       0    2069     815 no-events.ko
  44837   15424       0   60261    eb65 trace-events.ko
  11495    8064       0   19559    4c67 define-events.ko

With no events, the size is 2069 bytes.
With full trace events, it jumped to 60261.
With one DECLARE_EVENT_CLASS() and 9 DEFINE_EVENT()s, it changed to 19559.

That means each TRACE_EVENT() is approximately 5819 bytes.
  (60261 - 2069) / 10

And each DEFINE_EVENT() is approximately 1296 bytes.
  ((19559 - 2069) - 5819) / 9

Now, I do have a few debugging options enabled, which could bloat this
even more. But yeah, trace events do take up a bit of memory.
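Spelled out, the per-event arithmetic above looks like this (a quick
sketch; the byte counts are the "dec" totals from the `size` output
earlier in this message):

```python
# Per-event size estimates, using the "dec" totals (bytes) from the
# `size` output above. The per-event figures are simple averages.
no_events = 2069       # no-events.ko: baseline module with no events
trace_events = 60261   # trace-events.ko: 10 full TRACE_EVENT()s
define_events = 19559  # define-events.ko: 1 class + 9 DEFINE_EVENT()s

# Each TRACE_EVENT() adds roughly this much over the baseline.
per_trace_event = (trace_events - no_events) // 10

# The class costs about one TRACE_EVENT(); the remaining 9 events
# are the cheaper DEFINE_EVENT()s.
per_define_event = (define_events - no_events - per_trace_event) // 9

print(per_trace_event)   # 5819
print(per_define_event)  # 1296
```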

-- Steve

[-- Attachment #2: size_events.h --]
[-- Type: text/x-chdr, Size: 6597 bytes --]


/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM event-sizes 

#undef TRACE_SYSTEM_VAR
#define TRACE_SYSTEM_VAR event_sizes

#if !defined(_SIZE_EVENT_H) || defined(TRACE_HEADER_MULTI_READ)
#define _SIZE_EVENT_H

#include <linux/tracepoint.h>

#ifndef SIZE_EVENT_DEFINED
#define SIZE_EVENT_DEFINED
struct size_event_struct {
	unsigned long a;
	unsigned long b;
};
#endif

#define DEFINE_EVENT_SIZES 1
#define DEFINE_FULL_EVENTS 0

#if DEFINE_EVENT_SIZES

TRACE_EVENT(size_event_1,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);

#if DEFINE_FULL_EVENTS
TRACE_EVENT(size_event_2,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_3,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_4,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_5,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_6,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_7,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_8,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_9,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_10,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
#else /* !DEFINE_FULL_EVENTS */
DEFINE_EVENT(size_event_1, size_event_2,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_3,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_4,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_5,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_6,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_7,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_8,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_9,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_10,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

#endif /* !DEFINE_FULL_EVENTS */
#endif /* DEFINE_EVENT_SIZES */

#endif

/***** NOTICE! The #if protection ends here. *****/


#undef TRACE_INCLUDE_PATH
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE size_events 
#include <trace/define_trace.h>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 0/5] tracepoints: Add warnings for unused tracepoints and trace events
  2025-06-13 14:28 ` Steven Rostedt
@ 2025-06-13 14:42   ` Steven Rostedt
  0 siblings, 0 replies; 11+ messages in thread
From: Steven Rostedt @ 2025-06-13 14:42 UTC (permalink / raw)
  To: linux-arch, linux-kernel, linux-trace-kernel, linux-kbuild, llvm
  Cc: Mathieu Desnoyers, Masami Hiramatsu, Arnd Bergmann,
	Masahiro Yamada, Nathan Chancellor, Nicolas Schier,
	Nick Desaulniers, Catalin Marinas, Andrew Morton

[-- Attachment #1: Type: text/plain, Size: 1028 bytes --]

On Fri, 13 Jun 2025 10:28:34 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> And each DEFINE_EVENT() is approximately 1296 bytes.
>   ((19559 - 2069) - 5819) / 9

Interestingly enough, just raw tracepoints are not much better. I updated
the header file (attached) to have 10 of these, and compiled that.

DECLARE_TRACE(size_event_1,
        TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
        TP_ARGS(A, B));

The result is:

   text    data     bss     dec     hex filename
    629    1440       0    2069     815 no-events.ko
  44837   15424       0   60261    eb65 trace-events.ko
  11495    8064       0   19559    4c67 define-events.ko
  10865    4408       0   15273    3ba9 declare-trace.ko

Where each DECLARE_TRACE() ends up being 1320 bytes.
  (15273 - 2069) / 10

This is slightly bigger than a DEFINE_EVENT(), but that's also because
the DEFINE_EVENT() shares some of the tracepoint creation in the
DECLARE_EVENT_CLASS(), whereas that work is done fully in the
DECLARE_TRACE().

-- Steve

[-- Attachment #2: size_events.h --]
[-- Type: text/x-chdr, Size: 7840 bytes --]


/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM event-sizes 

#undef TRACE_SYSTEM_VAR
#define TRACE_SYSTEM_VAR event_sizes

#if !defined(_SIZE_EVENT_H) || defined(TRACE_HEADER_MULTI_READ)
#define _SIZE_EVENT_H

#include <linux/tracepoint.h>

#ifndef SIZE_EVENT_DEFINED
#define SIZE_EVENT_DEFINED
struct size_event_struct {
	unsigned long a;
	unsigned long b;
};
#endif

#define DEFINE_EVENT_SIZES 0
#define DEFINE_FULL_EVENTS 0
#define DEFINE_JUST_TRACEPOINTS 1

#if DEFINE_EVENT_SIZES

TRACE_EVENT(size_event_1,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);

#if DEFINE_FULL_EVENTS
TRACE_EVENT(size_event_2,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_3,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_4,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_5,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_6,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_7,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_8,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_9,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
TRACE_EVENT(size_event_10,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B),
	TP_STRUCT__entry(
		__field(	unsigned long,	Aa)
		__field(	unsigned long,	Ab)
		__field(	unsigned long,	Ba)
		__field(	unsigned long,	Bb)
	),
	TP_fast_assign(
		__entry->Aa = A->a;
		__entry->Ab = A->b;
		__entry->Ba = B->a;
		__entry->Bb = B->b;
	),
	TP_printk("Aa=%ld Ab=%ld Ba=%ld Bb=%ld",
		__entry->Aa, __entry->Ab, __entry->Ba, __entry->Bb)
);
#else /* !DEFINE_FULL_EVENTS */
DEFINE_EVENT(size_event_1, size_event_2,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_3,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_4,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_5,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_6,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_7,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_8,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_9,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DEFINE_EVENT(size_event_1, size_event_10,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

#endif /* !DEFINE_FULL_EVENTS */

#elif DEFINE_JUST_TRACEPOINTS

DECLARE_TRACE(size_event_1,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_2,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_3,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_4,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_5,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_6,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_7,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_8,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_9,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));

DECLARE_TRACE(size_event_10,
	TP_PROTO(struct size_event_struct *A, struct size_event_struct *B),
	TP_ARGS(A, B));
#endif /* DEFINE_EVENT_SIZES / DEFINE_JUST_TRACEPOINTS */

#endif

/***** NOTICE! The #if protection ends here. *****/


#undef TRACE_INCLUDE_PATH
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE size_events
#include <trace/define_trace.h>
