* [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64
@ 2025-02-25 18:20 Steven Rostedt
2025-02-25 18:20 ` [PATCH 1/4] ftrace: Test mcount_loc addr before calling ftrace_call_addr() Steven Rostedt
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: Steven Rostedt @ 2025-02-25 18:20 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-arm-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Masahiro Yamada, Catalin Marinas, Will Deacon, Nathan Chancellor,
Arnd Bergmann, Mark Brown
A few bugs with ARM 64 have been reported after the removal of the unused
weak functions code.
One was that kaslr_offset() may not be defined by all architectures, and
its reference would cause the build to fail. This was fixed by no longer
using kaslr_offset() to check for valid mcount_loc addresses and using
is_kernel_text() instead.
Another was that clang doesn't do the trick of storing the mcount_loc
addresses in the Elf_Rela sections like gcc does; clang handles it the
way other architectures do. To handle this, the Elf_Rela sections are
tried first, and if no functions are found there, the code falls back to
the same path that all the other architectures use.
When reading the mcount_loc section and creating the ftrace descriptors,
the architecture specific function ftrace_call_addr() is called on each
address from mcount_loc. But because the unused weak functions were
zeroed out and KASLR can still modify them, the address can become
invalid. The ftrace_call_addr() from ARM 64 will crash if the address
passed in is invalid. Perform the validity tests before calling that
function.
One bug that was found while debugging this, but was not reported, was
that the test against the nm output to determine whether a function is an
unused weak function was triggering false positives for all functions.
That's because the address in mcount_loc for ARM 64 is just before
the function entry. The check against nm tested whether the address was
within the function text, but 8 bytes before the entry is not in the
function text, so all functions were considered unused weak functions
and there were no functions left to trace.
Steven Rostedt (4):
ftrace: Test mcount_loc addr before calling ftrace_call_addr()
ftrace: Check against is_kernel_text() instead of kaslr_offset()
scripts/sorttable: Use normal sort if there's no relocs in the mcount section
scripts/sorttable: Allow matches to functions before function entry
----
kernel/trace/ftrace.c | 23 +++++++++++++++++------
scripts/sorttable.c | 16 +++++++++++++---
2 files changed, 30 insertions(+), 9 deletions(-)
* [PATCH 1/4] ftrace: Test mcount_loc addr before calling ftrace_call_addr()
2025-02-25 18:20 [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64 Steven Rostedt
@ 2025-02-25 18:20 ` Steven Rostedt
2025-02-25 18:20 ` [PATCH 2/4] ftrace: Check against is_kernel_text() instead of kaslr_offset() Steven Rostedt
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2025-02-25 18:20 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-arm-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Masahiro Yamada, Catalin Marinas, Will Deacon, Nathan Chancellor,
Arnd Bergmann, Mark Brown
From: Steven Rostedt <rostedt@goodmis.org>
The addresses in mcount_loc can be zeroed and then moved by KASLR,
making them invalid. ftrace_call_addr() for ARM 64 expects a valid
kernel text address. If the addr read from the mcount_loc section is
invalid, ftrace_call_addr() must not be called with it. Move the addr
check before the call to ftrace_call_addr() in ftrace_process_locs().
Fixes: ef378c3b8233 ("scripts/sorttable: Zero out weak functions in mcount_loc table")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Reported-by: "Arnd Bergmann" <arnd@arndb.de>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Closes: https://lore.kernel.org/all/20250225025631.GA271248@ax162/
Closes: https://lore.kernel.org/all/91523154-072b-437b-bbdc-0b70e9783fd0@app.fastmail.com/
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/ftrace.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 27c8def2139d..183f72cf15ed 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7063,7 +7063,9 @@ static int ftrace_process_locs(struct module *mod,
pg = start_pg;
while (p < end) {
unsigned long end_offset;
- addr = ftrace_call_adjust(*p++);
+
+ addr = *p++;
+
/*
* Some architecture linkers will pad between
* the different mcount_loc sections of different
@@ -7075,6 +7077,8 @@ static int ftrace_process_locs(struct module *mod,
continue;
}
+ addr = ftrace_call_adjust(addr);
+
end_offset = (pg->index+1) * sizeof(pg->records[0]);
if (end_offset > PAGE_SIZE << pg->order) {
/* We should have allocated enough */
--
2.47.2
* [PATCH 2/4] ftrace: Check against is_kernel_text() instead of kaslr_offset()
2025-02-25 18:20 [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64 Steven Rostedt
2025-02-25 18:20 ` [PATCH 1/4] ftrace: Test mcount_loc addr before calling ftrace_call_addr() Steven Rostedt
@ 2025-02-25 18:20 ` Steven Rostedt
2025-02-25 18:20 ` [PATCH 3/4] scripts/sorttable: Use normal sort if there's no relocs in the mcount section Steven Rostedt
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2025-02-25 18:20 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-arm-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Masahiro Yamada, Catalin Marinas, Will Deacon, Nathan Chancellor,
Arnd Bergmann, Mark Brown
From: Steven Rostedt <rostedt@goodmis.org>
As kaslr_offset() is architecture dependent and may not be defined by
all architectures, when zeroing out unused weak functions, do not check
against kaslr_offset(); instead, check whether the address is within the
kernel text sections. Even if KASLR added a shift to a zeroed-out
function, the result would still not be located in the kernel text. This
is a more robust way to test whether the address is valid.
Fixes: ef378c3b8233 ("scripts/sorttable: Zero out weak functions in mcount_loc table")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Reported-by: Mark Brown <broonie@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Closes: https://lore.kernel.org/all/20250224180805.GA1536711@ax162/
Closes: https://lore.kernel.org/all/5225b07b-a9b2-4558-9d5f-aa60b19f6317@sirena.org.uk/
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
kernel/trace/ftrace.c | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 183f72cf15ed..bec7b5dbdb3b 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7004,7 +7004,6 @@ static int ftrace_process_locs(struct module *mod,
unsigned long count;
unsigned long *p;
unsigned long addr;
- unsigned long kaslr;
unsigned long flags = 0; /* Shut up gcc */
unsigned long pages;
int ret = -ENOMEM;
@@ -7056,9 +7055,6 @@ static int ftrace_process_locs(struct module *mod,
ftrace_pages->next = start_pg;
}
- /* For zeroed locations that were shifted for core kernel */
- kaslr = !mod ? kaslr_offset() : 0;
-
p = start;
pg = start_pg;
while (p < end) {
@@ -7072,7 +7068,18 @@ static int ftrace_process_locs(struct module *mod,
* object files to satisfy alignments.
* Skip any NULL pointers.
*/
- if (!addr || addr == kaslr) {
+ if (!addr) {
+ skipped++;
+ continue;
+ }
+
+ /*
+ * If this is core kernel, make sure the address is in core
+ * or inittext, as weak functions get zeroed and KASLR can
+ * move them to something other than zero. It just will not
+ * move it to an area where kernel text is.
+ */
+ if (!mod && !(is_kernel_text(addr) || is_kernel_inittext(addr))) {
skipped++;
continue;
}
--
2.47.2
* [PATCH 3/4] scripts/sorttable: Use normal sort if there's no relocs in the mcount section
2025-02-25 18:20 [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64 Steven Rostedt
2025-02-25 18:20 ` [PATCH 1/4] ftrace: Test mcount_loc addr before calling ftrace_call_addr() Steven Rostedt
2025-02-25 18:20 ` [PATCH 2/4] ftrace: Check against is_kernel_text() instead of kaslr_offset() Steven Rostedt
@ 2025-02-25 18:20 ` Steven Rostedt
2025-02-25 18:20 ` [PATCH 4/4] scripts/sorttable: Allow matches to functions before function entry Steven Rostedt
2025-02-25 18:35 ` [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64 Steven Rostedt
4 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2025-02-25 18:20 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-arm-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Masahiro Yamada, Catalin Marinas, Will Deacon, Nathan Chancellor,
Arnd Bergmann, Mark Brown
From: Steven Rostedt <rostedt@goodmis.org>
When ARM 64 is compiled with gcc, the mcount_loc section is filled
with zeros and the addresses are located in the Elf_Rela sections. To
sort the mcount_loc section, the addresses from the Elf_Rela sections
need to be placed into an array, and that array is sorted.
But when ARM 64 is compiled with clang, it does it the same way as other
architectures and leaves the addresses as-is in the mcount_loc section.
To handle both cases, ARM 64 will first try to sort via the Elf_Rela
sections, and if it doesn't find any functions there, it will fall back
to sorting the addresses in the mcount_loc section itself.
Fixes: b3d09d06e052 ("arm64: scripts/sorttable: Implement sorting mcount_loc at boot for arm64")
Reported-by: "Arnd Bergmann" <arnd@arndb.de>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Closes: https://lore.kernel.org/all/893cd8f1-8585-4d25-bf0f-4197bf872465@app.fastmail.com/
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
scripts/sorttable.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index 23c7e0e6c024..07ad8116bc8d 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -827,9 +827,14 @@ static void *sort_mcount_loc(void *arg)
pthread_exit(m_err);
}
- if (sort_reloc)
+ if (sort_reloc) {
count = fill_relocs(vals, size, ehdr, emloc->start_mcount_loc);
- else
+ /* gcc may use relocs to save the addresses, but clang does not. */
+ if (!count) {
+ count = fill_addrs(vals, size, start_loc);
+ sort_reloc = 0;
+ }
+ } else
count = fill_addrs(vals, size, start_loc);
if (count < 0) {
--
2.47.2
* [PATCH 4/4] scripts/sorttable: Allow matches to functions before function entry
2025-02-25 18:20 [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64 Steven Rostedt
` (2 preceding siblings ...)
2025-02-25 18:20 ` [PATCH 3/4] scripts/sorttable: Use normal sort if there's no relocs in the mcount section Steven Rostedt
@ 2025-02-25 18:20 ` Steven Rostedt
2025-02-25 18:35 ` [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64 Steven Rostedt
4 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2025-02-25 18:20 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-arm-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Masahiro Yamada, Catalin Marinas, Will Deacon, Nathan Chancellor,
Arnd Bergmann, Mark Brown
From: Steven Rostedt <rostedt@goodmis.org>
ARM 64 uses -fpatchable-function-entry=4,2, which adds padding before the
function, and the addresses in mcount_loc point there instead of to the
function entry reported by nm. In order to look up a function from nm
to make sure it's not an unused weak function, the entries in the
mcount_loc section need to match the entries from nm. Since an
mcount_loc address can be before the entry, add a before_func variable
that ARM 64 can set to 8; if the mcount_loc entry is within 8 bytes
before the nm function entry, it will be considered a match.
Fixes: ef378c3b82338 ("scripts/sorttable: Zero out weak functions in mcount_loc table")
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
scripts/sorttable.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/scripts/sorttable.c b/scripts/sorttable.c
index 07ad8116bc8d..7b4b3714b1af 100644
--- a/scripts/sorttable.c
+++ b/scripts/sorttable.c
@@ -611,13 +611,16 @@ static int add_field(uint64_t addr, uint64_t size)
return 0;
}
+/* Used for when mcount/fentry is before the function entry */
+static int before_func;
+
/* Only return match if the address lies inside the function size */
static int cmp_func_addr(const void *K, const void *A)
{
uint64_t key = *(const uint64_t *)K;
const struct func_info *a = A;
- if (key < a->addr)
+ if (key + before_func < a->addr)
return -1;
return key >= a->addr + a->size;
}
@@ -1253,6 +1256,8 @@ static int do_file(char const *const fname, void *addr)
#ifdef MCOUNT_SORT_ENABLED
sort_reloc = true;
rela_type = 0x403;
+ /* arm64 uses patchable function entry placing before function */
+ before_func = 8;
#endif
/* fallthrough */
case EM_386:
--
2.47.2
* Re: [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64
2025-02-25 18:20 [PATCH 0/4] scripts/sorttable: ftrace: Fix some bugs with sorttable and ARM 64 Steven Rostedt
` (3 preceding siblings ...)
2025-02-25 18:20 ` [PATCH 4/4] scripts/sorttable: Allow matches to functions before function entry Steven Rostedt
@ 2025-02-25 18:35 ` Steven Rostedt
4 siblings, 0 replies; 6+ messages in thread
From: Steven Rostedt @ 2025-02-25 18:35 UTC (permalink / raw)
To: linux-kernel, linux-trace-kernel, linux-arm-kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
Masahiro Yamada, Catalin Marinas, Will Deacon, Nathan Chancellor,
Arnd Bergmann, Mark Brown
On Tue, 25 Feb 2025 13:20:04 -0500
Steven Rostedt <rostedt@goodmis.org> wrote:
> Steven Rostedt (4):
> ftrace: Test mcount_loc addr before calling ftrace_call_addr()
> ftrace: Check against is_kernel_text() instead of kaslr_offset()
> scripts/sorttable: Use normal sort if there's no relocs in the mcount section
> scripts/sorttable: Allow matches to functions before function entry
I just kicked off my test suite to test these patches. If they all pass,
I'll push them to linux-next tonight, so hopefully this doesn't cause
issues for others testing linux-next.
-- Steve