* [PATCH v3 01/21] klp-build: Reject patches to init/*.c
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 03/21] arm64: Fix EFI linking with -fdata-sections Josh Poimboeuf
` (20 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
init/Makefile hard-codes -fno-function-sections and -fno-data-sections,
overriding the klp-build flags needed for patch generation.
Don't allow any changes to those files; being init code, they aren't
really patchable anyway.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
scripts/livepatch/klp-build | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/livepatch/klp-build b/scripts/livepatch/klp-build
index c4a7acf8edc3f..911ada05673c2 100755
--- a/scripts/livepatch/klp-build
+++ b/scripts/livepatch/klp-build
@@ -362,7 +362,7 @@ check_unsupported_patches() {
for file in "${files[@]}"; do
case "$file" in
- lib/*|*/vdso/*|*/realmode/rm/*|*.S)
+ lib/*|*/vdso/*|*/realmode/rm/*|init/*|*.S)
die "${patch}: unsupported patch to $file"
;;
esac
--
2.53.0
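The rejected-path filter can be exercised standalone. Below is a minimal sketch of the case glob above with the verdict printed instead of aborting; the real klp-build calls die() and stops patch generation (the check() helper and the sample paths are illustrative, not part of the script):

```shell
# Minimal sketch of klp-build's path filter with init/* added.
# The real script calls die() and aborts; here we just print the verdict.
check() {
    for file in "$@"; do
        case "$file" in
        lib/*|*/vdso/*|*/realmode/rm/*|init/*|*.S)
            echo "unsupported patch to $file" ;;
        *)
            echo "ok: $file" ;;
        esac
    done
}

check init/main.c kernel/fork.c arch/x86/entry/entry_64.S
# → unsupported patch to init/main.c
# → ok: kernel/fork.c
# → unsupported patch to arch/x86/entry/entry_64.S
```

Note that the `lib/*` glob anchors at the path start, so `arch/x86/lib/*.c` files are still allowed while top-level `lib/` and all `init/` files are rejected.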
* [PATCH v3 03/21] arm64: Fix EFI linking with -fdata-sections
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 01/21] klp-build: Reject patches to init/*.c Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 05/21] arm64: vdso: Discard .discard.* sections Josh Poimboeuf
` (19 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
When building with -fdata-sections, the .init.bss section gets split up
into a bunch of .init.bss.<var> sections. Make sure they get linked
into .init.data.
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/arm64/kernel/vmlinux.lds.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index e1ac876200a3d..1ad7e3dba460a 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -281,7 +281,7 @@ SECTIONS
INIT_CALLS
CON_INITCALL
INIT_RAM_FS
- *(.init.altinstructions .init.bss) /* from the EFI stub */
+ *(.init.altinstructions .init.bss .init.bss.*) /* from the EFI stub */
}
.exit.data : {
EXIT_DATA
--
2.53.0
* [PATCH v3 05/21] arm64: vdso: Discard .discard.* sections
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 01/21] klp-build: Reject patches to init/*.c Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 03/21] arm64: Fix EFI linking with -fdata-sections Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 06/21] arm64: Annotate special section entries Josh Poimboeuf
` (18 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
In preparation for enabling objtool on arm64, add .discard.* to the
vDSO's /DISCARD/ section so objtool annotations don't cause orphan
section warnings or leak into the final vDSO binary.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/arm64/kernel/vdso/vdso.lds.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kernel/vdso/vdso.lds.S b/arch/arm64/kernel/vdso/vdso.lds.S
index 52314be291912..d5f96fa17e605 100644
--- a/arch/arm64/kernel/vdso/vdso.lds.S
+++ b/arch/arm64/kernel/vdso/vdso.lds.S
@@ -39,6 +39,7 @@ SECTIONS
/DISCARD/ : {
*(.note.GNU-stack .note.gnu.property)
*(.ARM.attributes)
+ *(.discard.*)
}
.note : { *(.note.*) } :text :note
--
2.53.0
* [PATCH v3 06/21] arm64: Annotate special section entries
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (2 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 05/21] arm64: vdso: Discard .discard.* sections Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 07/21] crypto: arm64: Move data to .rodata Josh Poimboeuf
` (17 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
In preparation for adding arm64 support for "objtool klp checksum/diff"
to enable livepatch module generation, annotate special section entries.
This will allow objtool to determine the size and location of the
entries and to extract them when needed.
A new ANNOTATE_DATA_SPECIAL_END annotation is added to mark the end of
special data blocks, which is needed because arm64's replacement
instructions are emitted in .text rather than .altinstr_replacement, so
there's otherwise no way to determine where the last replacement block
ends.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/arm64/include/asm/alternative-macros.h | 27 ++++++++++++++++-----
arch/arm64/include/asm/asm-bug.h | 2 ++
arch/arm64/include/asm/asm-extable.h | 21 ++++++++++------
arch/arm64/include/asm/jump_label.h | 2 ++
arch/arm64/kernel/asm-offsets.c | 5 ++++
include/linux/annotate.h | 14 ++++++++++-
include/linux/objtool_types.h | 1 +
tools/include/linux/objtool_types.h | 1 +
tools/objtool/klp-diff.c | 5 +++-
9 files changed, 62 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
index 8624166248528..ba86d655af1d7 100644
--- a/arch/arm64/include/asm/alternative-macros.h
+++ b/arch/arm64/include/asm/alternative-macros.h
@@ -3,11 +3,16 @@
#define __ASM_ALTERNATIVE_MACROS_H
#include <linux/const.h>
+#include <linux/annotate.h>
#include <vdso/bits.h>
#include <asm/cpucaps.h>
#include <asm/insn-def.h>
+#ifndef COMPILE_OFFSETS
+#include <asm/asm-offsets.h>
+#endif
+
/*
* Binutils 2.27.0 can't handle a 'UL' suffix on constants, so for the assembly
* macros below we must use we must use `(1 << ARM64_CB_SHIFT)`.
@@ -58,15 +63,18 @@
"661:\n\t" \
oldinstr "\n" \
"662:\n" \
- ".pushsection .altinstructions,\"a\"\n" \
+ ".pushsection .altinstructions,\"aM\", @progbits, " \
+ __stringify(ALT_INSTR_SIZE) "\n" \
ALTINSTR_ENTRY(cpucap) \
".popsection\n" \
".subsection 1\n" \
+ ANNOTATE_DATA_SPECIAL "\n" \
"663:\n\t" \
newinstr "\n" \
"664:\n\t" \
".org . - (664b-663b) + (662b-661b)\n\t" \
".org . - (662b-661b) + (664b-663b)\n\t" \
+ ANNOTATE_DATA_SPECIAL_END "\n\t" \
".previous\n" \
".endif\n"
@@ -75,7 +83,8 @@
"661:\n\t" \
oldinstr "\n" \
"662:\n" \
- ".pushsection .altinstructions,\"a\"\n" \
+ ".pushsection .altinstructions,\"aM\", @progbits, " \
+ __stringify(ALT_INSTR_SIZE) "\n" \
ALTINSTR_ENTRY_CB(cpucap, cb) \
".popsection\n" \
"663:\n\t" \
@@ -102,13 +111,15 @@
.macro alternative_insn insn1, insn2, cap, enable = 1
.if \enable
661: \insn1
-662: .pushsection .altinstructions, "a"
+662: .pushsection .altinstructions, "aM", @progbits, ALT_INSTR_SIZE
altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
.popsection
.subsection 1
+ ANNOTATE_DATA_SPECIAL
663: \insn2
664: .org . - (664b-663b) + (662b-661b)
.org . - (662b-661b) + (664b-663b)
+ ANNOTATE_DATA_SPECIAL_END
.previous
.endif
.endm
@@ -137,7 +148,7 @@
*/
.macro alternative_if_not cap
.set .Lasm_alt_mode, 0
- .pushsection .altinstructions, "a"
+ .pushsection .altinstructions, "aM", @progbits, ALT_INSTR_SIZE
altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f
.popsection
661:
@@ -145,17 +156,18 @@
.macro alternative_if cap
.set .Lasm_alt_mode, 1
- .pushsection .altinstructions, "a"
+ .pushsection .altinstructions, "aM", @progbits, ALT_INSTR_SIZE
altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f
.popsection
.subsection 1
.align 2 /* So GAS knows label 661 is suitably aligned */
+ ANNOTATE_DATA_SPECIAL
661:
.endm
.macro alternative_cb cap, cb
.set .Lasm_alt_mode, 0
- .pushsection .altinstructions, "a"
+ .pushsection .altinstructions, "aM", @progbits, ALT_INSTR_SIZE
altinstruction_entry 661f, \cb, (1 << ARM64_CB_SHIFT) | \cap, 662f-661f, 0
.popsection
661:
@@ -168,7 +180,9 @@
662:
.if .Lasm_alt_mode==0
.subsection 1
+ ANNOTATE_DATA_SPECIAL
.else
+ ANNOTATE_DATA_SPECIAL_END
.previous
.endif
663:
@@ -182,6 +196,7 @@
.org . - (664b-663b) + (662b-661b)
.org . - (662b-661b) + (664b-663b)
.if .Lasm_alt_mode==0
+ ANNOTATE_DATA_SPECIAL_END
.previous
.endif
.endm
diff --git a/arch/arm64/include/asm/asm-bug.h b/arch/arm64/include/asm/asm-bug.h
index a5f13801b7840..22e1a9df9851d 100644
--- a/arch/arm64/include/asm/asm-bug.h
+++ b/arch/arm64/include/asm/asm-bug.h
@@ -5,6 +5,7 @@
*/
#define __ASM_ASM_BUG_H
+#include <linux/annotate.h>
#include <asm/brk-imm.h>
#ifdef CONFIG_DEBUG_BUGVERBOSE
@@ -24,6 +25,7 @@
#define __BUG_ENTRY_START \
.pushsection __bug_table,"aw"; \
.align 2; \
+ __ANNOTATE_DATA_SPECIAL; \
14470: .long 14471f - .; \
#define __BUG_ENTRY_END \
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index d67e2fdd1aee5..e81700edbb936 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,6 +5,10 @@
#include <linux/bits.h>
#include <asm/gpr-num.h>
+#ifndef COMPILE_OFFSETS
+#include <asm/asm-offsets.h>
+#endif
+
#define EX_TYPE_NONE 0
#define EX_TYPE_BPF 1
#define EX_TYPE_UACCESS_ERR_ZERO 2
@@ -29,13 +33,13 @@
#ifdef __ASSEMBLER__
-#define __ASM_EXTABLE_RAW(insn, fixup, type, data) \
- .pushsection __ex_table, "a"; \
- .align 2; \
- .long ((insn) - .); \
- .long ((fixup) - .); \
- .short (type); \
- .short (data); \
+#define __ASM_EXTABLE_RAW(insn, fixup, type, data) \
+ .pushsection __ex_table, "aM", @progbits, EXTABLE_SIZE; \
+ .align 2; \
+ .long ((insn) - .); \
+ .long ((fixup) - .); \
+ .short (type); \
+ .short (data); \
.popsection;
#define EX_DATA_REG(reg, gpr) \
@@ -82,7 +86,8 @@
#include <linux/stringify.h>
#define __ASM_EXTABLE_RAW(insn, fixup, type, data) \
- ".pushsection __ex_table, \"a\"\n" \
+ ".pushsection __ex_table, \"aM\", @progbits, "\
+ __stringify(EXTABLE_SIZE) "\n" \
".align 2\n" \
".long ((" insn ") - .)\n" \
".long ((" fixup ") - .)\n" \
diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
index 0cb211d3607d3..4dacb28641d72 100644
--- a/arch/arm64/include/asm/jump_label.h
+++ b/arch/arm64/include/asm/jump_label.h
@@ -11,6 +11,7 @@
#ifndef __ASSEMBLER__
#include <linux/types.h>
+#include <linux/annotate.h>
#include <asm/insn.h>
#define HAVE_JUMP_LABEL_BATCH
@@ -19,6 +20,7 @@
#define JUMP_TABLE_ENTRY(key, label) \
".pushsection __jump_table, \"aw\"\n\t" \
".align 3\n\t" \
+ ANNOTATE_DATA_SPECIAL "\n\t" \
".long 1b - ., " label " - .\n\t" \
".quad " key " - .\n\t" \
".popsection\n\t"
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 44b92f582c127..76251586e31c7 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -23,6 +23,8 @@
#include <asm/suspend.h>
#include <linux/kbuild.h>
#include <linux/arm-smccc.h>
+#include <asm/alternative.h>
+#include <asm/extable.h>
int main(void)
{
@@ -185,5 +187,8 @@ int main(void)
#endif
DEFINE(PIE_E0_ASM, PIE_E0);
DEFINE(PIE_E1_ASM, PIE_E1);
+ BLANK();
+ DEFINE(ALT_INSTR_SIZE, sizeof(struct alt_instr));
+ DEFINE(EXTABLE_SIZE, sizeof(struct exception_table_entry));
return 0;
}
diff --git a/include/linux/annotate.h b/include/linux/annotate.h
index 2f1599c9e5732..7f5aa15f353d6 100644
--- a/include/linux/annotate.h
+++ b/include/linux/annotate.h
@@ -3,6 +3,7 @@
#define _LINUX_ANNOTATE_H
#include <linux/objtool_types.h>
+#include <linux/stringify.h>
#ifdef CONFIG_OBJTOOL
@@ -11,6 +12,10 @@
.long label - ., type; \
.popsection
+#define __ASM_ANNOTATE_DATA(type) \
+912: \
+ __ASM_ANNOTATE(.discard.annotate_data, 912b, type)
+
#ifndef __ASSEMBLY__
#define ASM_ANNOTATE_LABEL(label, type) \
@@ -39,6 +44,9 @@
#endif /* __ASSEMBLY__ */
#else /* !CONFIG_OBJTOOL */
+
+#define __ASM_ANNOTATE_DATA(type)
+
#ifndef __ASSEMBLY__
#define ASM_ANNOTATE_LABEL(label, type) ""
#define ASM_ANNOTATE(type)
@@ -106,10 +114,12 @@
#define ANNOTATE_NOCFI_SYM(sym) asm(ASM_ANNOTATE_LABEL(sym, ANNOTYPE_NOCFI))
/*
- * Annotate a special section entry. This emables livepatch module generation
+ * Annotate a special section entry. This enables livepatch module generation
* to find and extract individual special section entries as needed.
*/
#define ANNOTATE_DATA_SPECIAL ASM_ANNOTATE_DATA(ANNOTYPE_DATA_SPECIAL)
+#define __ANNOTATE_DATA_SPECIAL __ASM_ANNOTATE_DATA(ANNOTYPE_DATA_SPECIAL)
+#define ANNOTATE_DATA_SPECIAL_END ASM_ANNOTATE_DATA(ANNOTYPE_DATA_SPECIAL_END)
#else /* __ASSEMBLY__ */
#define ANNOTATE_NOENDBR ANNOTATE type=ANNOTYPE_NOENDBR
@@ -122,6 +132,8 @@
#define ANNOTATE_REACHABLE ANNOTATE type=ANNOTYPE_REACHABLE
#define ANNOTATE_NOCFI_SYM ANNOTATE type=ANNOTYPE_NOCFI
#define ANNOTATE_DATA_SPECIAL ANNOTATE_DATA type=ANNOTYPE_DATA_SPECIAL
+#define __ANNOTATE_DATA_SPECIAL __ASM_ANNOTATE_DATA(ANNOTYPE_DATA_SPECIAL)
+#define ANNOTATE_DATA_SPECIAL_END ANNOTATE_DATA type=ANNOTYPE_DATA_SPECIAL_END
#endif /* __ASSEMBLY__ */
#endif /* _LINUX_ANNOTATE_H */
diff --git a/include/linux/objtool_types.h b/include/linux/objtool_types.h
index c6def4049b1ae..744118ffd025f 100644
--- a/include/linux/objtool_types.h
+++ b/include/linux/objtool_types.h
@@ -68,5 +68,6 @@ struct unwind_hint {
#define ANNOTYPE_NOCFI 9
#define ANNOTYPE_DATA_SPECIAL 1
+#define ANNOTYPE_DATA_SPECIAL_END 2
#endif /* _LINUX_OBJTOOL_TYPES_H */
diff --git a/tools/include/linux/objtool_types.h b/tools/include/linux/objtool_types.h
index c6def4049b1ae..744118ffd025f 100644
--- a/tools/include/linux/objtool_types.h
+++ b/tools/include/linux/objtool_types.h
@@ -68,5 +68,6 @@ struct unwind_hint {
#define ANNOTYPE_NOCFI 9
#define ANNOTYPE_DATA_SPECIAL 1
+#define ANNOTYPE_DATA_SPECIAL_END 2
#endif /* _LINUX_OBJTOOL_TYPES_H */
diff --git a/tools/objtool/klp-diff.c b/tools/objtool/klp-diff.c
index f8787d7d14547..6a1cec57dc6a3 100644
--- a/tools/objtool/klp-diff.c
+++ b/tools/objtool/klp-diff.c
@@ -1667,7 +1667,10 @@ static int create_fake_symbols(struct elf *elf)
size = 0;
next_reloc = reloc;
for_each_reloc_continue(sec->rsec, next_reloc) {
- if (annotype(elf, sec, next_reloc) != ANNOTYPE_DATA_SPECIAL ||
+ unsigned int next_type = annotype(elf, sec, next_reloc);
+
+ if ((next_type != ANNOTYPE_DATA_SPECIAL &&
+ next_type != ANNOTYPE_DATA_SPECIAL_END) ||
next_reloc->sym->sec != reloc->sym->sec)
continue;
--
2.53.0
* [PATCH v3 07/21] crypto: arm64: Move data to .rodata
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (3 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 06/21] arm64: Annotate special section entries Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 08/21] objtool: Allow setting --mnop without --mcount Josh Poimboeuf
` (16 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Data embedded in .text pollutes i-cache and confuses objtool and other
tools that try to disassemble it. Move it to .rodata.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
lib/crypto/arm64/sha2-armv8.pl | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/lib/crypto/arm64/sha2-armv8.pl b/lib/crypto/arm64/sha2-armv8.pl
index 35ec9ae99fe16..e0ee2d5367e72 100644
--- a/lib/crypto/arm64/sha2-armv8.pl
+++ b/lib/crypto/arm64/sha2-armv8.pl
@@ -207,12 +207,13 @@ $func:
___
$code.=<<___ if ($SZ==4);
#ifndef __KERNEL__
+ adrp x17,.LOPENSSL_armcap_P
+ add x17,x17,:lo12:.LOPENSSL_armcap_P
# ifdef __ILP32__
- ldrsw x16,.LOPENSSL_armcap_P
+ ldrsw x16,[x17]
# else
- ldr x16,.LOPENSSL_armcap_P
+ ldr x16,[x17]
# endif
- adr x17,.LOPENSSL_armcap_P
add x16,x16,x17
ldr w16,[x16]
tst w16,#ARMV8_SHA256
@@ -237,7 +238,8 @@ $code.=<<___;
ldp $E,$F,[$ctx,#4*$SZ]
add $num,$inp,$num,lsl#`log(16*$SZ)/log(2)` // end of input
ldp $G,$H,[$ctx,#6*$SZ]
- adr $Ktbl,.LK$BITS
+ adrp $Ktbl,.LK$BITS
+ add $Ktbl,$Ktbl,:lo12:.LK$BITS
stp $ctx,$num,[x29,#96]
.Loop:
@@ -286,6 +288,7 @@ $code.=<<___;
ret
.size $func,.-$func
+.pushsection .rodata
.align 6
.type .LK$BITS,%object
.LK$BITS:
@@ -365,6 +368,7 @@ $code.=<<___;
#endif
.asciz "SHA$BITS block transform for ARMv8, CRYPTOGAMS by <appro\@openssl.org>"
.align 2
+.popsection
___
if ($SZ==4) {
@@ -385,7 +389,8 @@ sha256_block_armv8:
add x29,sp,#0
ld1.32 {$ABCD,$EFGH},[$ctx]
- adr $Ktbl,.LK256
+ adrp $Ktbl,.LK256
+ add $Ktbl,$Ktbl,:lo12:.LK256
.Loop_hw:
ld1 {@MSG[0]-@MSG[3]},[$inp],#64
@@ -648,7 +653,8 @@ sha256_block_neon:
mov x29, sp
sub sp,sp,#16*4
- adr $Ktbl,.LK256
+ adrp $Ktbl,.LK256
+ add $Ktbl,$Ktbl,:lo12:.LK256
add $num,$inp,$num,lsl#6 // len to point at the end of inp
ld1.8 {@X[0]},[$inp], #16
--
2.53.0
^ permalink raw reply related	[flat|nested] 46+ messages in thread
* [PATCH v3 08/21] objtool: Allow setting --mnop without --mcount
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (4 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 07/21] crypto: arm64: Move data to .rodata Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 09/21] kbuild: Only run objtool if there is at least one command Josh Poimboeuf
` (15 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Instead of returning an error when --mnop is set without --mcount, just
silently ignore --mnop. This will help simplify kbuild's handling of
objtool args.
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/builtin-check.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
index 118c3de2f293e..bd84f5b7c9ee9 100644
--- a/tools/objtool/builtin-check.c
+++ b/tools/objtool/builtin-check.c
@@ -144,11 +144,6 @@ int cmd_parse_options(int argc, const char **argv, const char * const usage[])
static bool opts_valid(void)
{
- if (opts.mnop && !opts.mcount) {
- ERROR("--mnop requires --mcount");
- return false;
- }
-
if (opts.noinstr && !opts.link) {
ERROR("--noinstr requires --link");
return false;
--
2.53.0
^ permalink raw reply related	[flat|nested] 46+ messages in thread
* [PATCH v3 09/21] kbuild: Only run objtool if there is at least one command
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (5 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 08/21] objtool: Allow setting --mnop without --mcount Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 12/21] objtool: Refactor elf_add_data() to use a growable data buffer Josh Poimboeuf
` (14 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek, Nathan Chancellor,
Nicolas Schier
Split the objtool args into commands and options, so that objtool
doesn't run at all when no commands are enabled.
This is in preparation for enabling objtool and klp-build for arm64.
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Nicolas Schier <nsc@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/x86/boot/startup/Makefile | 2 +-
scripts/Makefile.build | 4 +--
scripts/Makefile.lib | 52 ++++++++++++++++++----------------
scripts/Makefile.vmlinux_o | 15 ++++------
4 files changed, 36 insertions(+), 37 deletions(-)
diff --git a/arch/x86/boot/startup/Makefile b/arch/x86/boot/startup/Makefile
index 5e499cfb29b5c..a08297829fc63 100644
--- a/arch/x86/boot/startup/Makefile
+++ b/arch/x86/boot/startup/Makefile
@@ -36,7 +36,7 @@ $(patsubst %.o,$(obj)/%.o,$(lib-y)): OBJECT_FILES_NON_STANDARD := y
# relocations, even if other objtool actions are being deferred.
#
$(pi-objs): objtool-enabled = 1
-$(pi-objs): objtool-args = $(if $(delay-objtool),--dry-run,$(objtool-args-y)) --noabs
+$(pi-objs): objtool-args = $(if $(delay-objtool),--dry-run,$(objtool-cmds-y) $(objtool-opts-y)) --noabs
#
# Confine the startup code by prefixing all symbols with __pi_ (for position
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 3498d25b15e85..c4accfcd177d4 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -277,7 +277,7 @@ endif # CONFIG_FTRACE_MCOUNT_USE_RECORDMCOUNT
is-standard-object = $(if $(filter-out y%, $(OBJECT_FILES_NON_STANDARD_$(target-stem).o)$(OBJECT_FILES_NON_STANDARD)n),$(is-kernel-object))
ifdef CONFIG_OBJTOOL
-$(obj)/%.o: private objtool-enabled = $(if $(is-standard-object),$(if $(delay-objtool),$(is-single-obj-m),y))
+$(obj)/%.o: private objtool-enabled = $(and $(is-standard-object),$(objtool-cmds-y),$(if $(delay-objtool),$(is-single-obj-m),y))
endif
ifneq ($(findstring 1, $(KBUILD_EXTRA_WARN)),)
@@ -501,7 +501,7 @@ define rule_ld_multi_m
$(call cmd,gen_objtooldep)
endef
-$(multi-obj-m): private objtool-enabled := $(delay-objtool)
+$(multi-obj-m): private objtool-enabled := $(if $(objtool-cmds-y),$(delay-objtool))
$(multi-obj-m): private part-of-module := y
$(multi-obj-m): %.o: %.mod FORCE
$(call if_changed_rule,ld_multi_m)
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 7e216d82e9887..7f803796d20cf 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -183,30 +183,34 @@ ifdef CONFIG_OBJTOOL
objtool := $(objtree)/tools/objtool/objtool
-objtool-args-$(CONFIG_HAVE_JUMP_LABEL_HACK) += --hacks=jump_label
-objtool-args-$(CONFIG_HAVE_NOINSTR_HACK) += --hacks=noinstr
-objtool-args-$(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) += --hacks=skylake
-objtool-args-$(CONFIG_X86_KERNEL_IBT) += --ibt
-objtool-args-$(CONFIG_CALL_PADDING) += --prefix=$(CONFIG_FUNCTION_PADDING_BYTES)
-ifdef CONFIG_CALL_PADDING
-objtool-args-$(CONFIG_CFI) += --cfi
-objtool-args-$(CONFIG_FINEIBT) += --fineibt
-endif
-objtool-args-$(CONFIG_FTRACE_MCOUNT_USE_OBJTOOL) += --mcount
-ifdef CONFIG_FTRACE_MCOUNT_USE_OBJTOOL
-objtool-args-$(CONFIG_HAVE_OBJTOOL_NOP_MCOUNT) += --mnop
-endif
-objtool-args-$(CONFIG_UNWINDER_ORC) += --orc
-objtool-args-$(CONFIG_MITIGATION_RETPOLINE) += --retpoline
-objtool-args-$(CONFIG_MITIGATION_RETHUNK) += --rethunk
-objtool-args-$(CONFIG_MITIGATION_SLS) += --sls
-objtool-args-$(CONFIG_STACK_VALIDATION) += --stackval
-objtool-args-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
-objtool-args-$(CONFIG_HAVE_UACCESS_VALIDATION) += --uaccess
-objtool-args-$(or $(CONFIG_GCOV_KERNEL),$(CONFIG_KCOV)) += --no-unreachable
-objtool-args-$(CONFIG_OBJTOOL_WERROR) += --werror
+# objtool commands
+objtool-cmds-$(CONFIG_HAVE_JUMP_LABEL_HACK) += --hacks=jump_label
+objtool-cmds-$(CONFIG_HAVE_NOINSTR_HACK) += --hacks=noinstr
+objtool-cmds-$(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) += --hacks=skylake
+objtool-cmds-$(CONFIG_X86_KERNEL_IBT) += --ibt
+objtool-cmds-$(CONFIG_CALL_PADDING) += --prefix=$(CONFIG_FUNCTION_PADDING_BYTES)
+objtool-cmds-$(CONFIG_FTRACE_MCOUNT_USE_OBJTOOL) += --mcount
+objtool-cmds-$(CONFIG_UNWINDER_ORC) += --orc
+objtool-cmds-$(CONFIG_MITIGATION_RETPOLINE) += --retpoline
+objtool-cmds-$(CONFIG_MITIGATION_RETHUNK) += --rethunk
+objtool-cmds-$(CONFIG_MITIGATION_SLS) += --sls
+objtool-cmds-$(CONFIG_STACK_VALIDATION) += --stackval
+objtool-cmds-$(CONFIG_HAVE_STATIC_CALL_INLINE) += --static-call
+objtool-cmds-$(CONFIG_HAVE_UACCESS_VALIDATION) += --uaccess
+objtool-cmds-y += $(OBJTOOL_ARGS)
-objtool-args = $(objtool-args-y) \
+# objtool options
+ifdef CONFIG_CALL_PADDING
+objtool-opts-$(CONFIG_CFI) += --cfi
+objtool-opts-$(CONFIG_FINEIBT) += --fineibt
+endif
+ifdef CONFIG_FTRACE_MCOUNT_USE_OBJTOOL
+objtool-opts-$(CONFIG_HAVE_OBJTOOL_NOP_MCOUNT) += --mnop
+endif
+objtool-opts-$(or $(CONFIG_GCOV_KERNEL),$(CONFIG_KCOV)) += --no-unreachable
+objtool-opts-$(CONFIG_OBJTOOL_WERROR) += --werror
+
+objtool-args = $(objtool-cmds-y) $(objtool-opts-y) \
$(if $(delay-objtool), --link) \
$(if $(part-of-module), --module)
@@ -215,7 +219,7 @@ delay-objtool := $(or $(CONFIG_LTO_CLANG),$(CONFIG_X86_KERNEL_IBT),$(CONFIG_KLP_
cmd_objtool = $(if $(objtool-enabled), ; $(objtool) $(objtool-args) $@)
cmd_gen_objtooldep = $(if $(objtool-enabled), { echo ; echo '$@: $$(wildcard $(objtool))' ; } >> $(dot-target).cmd)
-objtool-enabled := y
+objtool-enabled = $(if $(objtool-cmds-y),y)
endif # CONFIG_OBJTOOL
diff --git a/scripts/Makefile.vmlinux_o b/scripts/Makefile.vmlinux_o
index 527352c222ff6..09af33203bd8d 100644
--- a/scripts/Makefile.vmlinux_o
+++ b/scripts/Makefile.vmlinux_o
@@ -36,18 +36,13 @@ endif
# For !delay-objtool + CONFIG_NOINSTR_VALIDATION, it runs on both translation
# units and vmlinux.o, with the latter only used for noinstr/unret validation.
-objtool-enabled := $(or $(delay-objtool),$(CONFIG_NOINSTR_VALIDATION))
-
-ifeq ($(delay-objtool),y)
-vmlinux-objtool-args-y += $(objtool-args-y)
-else
-vmlinux-objtool-args-$(CONFIG_OBJTOOL_WERROR) += --werror
+ifneq ($(delay-objtool),y)
+objtool-cmds-y =
+objtool-opts-y += --link
endif
-vmlinux-objtool-args-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
- $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_MITIGATION_SRSO)), --unret)
-
-objtool-args = $(vmlinux-objtool-args-y) --link
+objtool-cmds-$(CONFIG_NOINSTR_VALIDATION) += --noinstr \
+ $(if $(or $(CONFIG_MITIGATION_UNRET_ENTRY),$(CONFIG_MITIGATION_SRSO)), --unret)
# Link of vmlinux.o used for section mismatch analysis
# ---------------------------------------------------------------------------
--
2.53.0
^ permalink raw reply related	[flat|nested] 46+ messages in thread
* [PATCH v3 12/21] objtool: Refactor elf_add_data() to use a growable data buffer
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (6 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 09/21] kbuild: Only run objtool if there is at least one command Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 13/21] objtool: Reuse string references Josh Poimboeuf
` (13 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Instead of calling elf_newdata() for each new piece of data with its own
separate buffer, keep it all in the same growable buffer so the
section's entire data can be accessed if needed.
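The patch grows the buffer by rounding the allocation up to
max(64, next power of two >= needed), so repeated appends only
reallocate O(log n) times. A minimal standalone sketch of that growth
policy (the `buf_add`/`alloc_size` names are invented for illustration;
this is not the actual objtool elf_add_data() API, which also tracks
libelf buffer ownership):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Capacity policy: max(64, next power of two >= needed). */
static size_t alloc_size(size_t needed)
{
	size_t n = 64;

	while (n < needed)
		n <<= 1;
	return n;
}

struct buf {
	void *data;
	size_t size;	/* bytes in use */
};

/* Append len bytes, growing the single backing buffer as needed. */
static void *buf_add(struct buf *b, const void *src, size_t len)
{
	size_t old_cap = b->size ? alloc_size(b->size) : 0;
	size_t new_cap = alloc_size(b->size + len);
	void *dst;

	if (new_cap > old_cap) {
		b->data = realloc(b->data, new_cap);
		if (!b->data)
			return NULL;
	}

	dst = (char *)b->data + b->size;
	memcpy(dst, src, len);
	b->size += len;
	return dst;
}
```

Because old and new capacities are derived from the in-use size, the
caller only needs a flag (data_overallocated in the patch) to know
whether the current buffer was sized by this policy or handed over
as-is by libelf.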
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/elf.c | 123 ++++++++++++++--------------
tools/objtool/include/objtool/elf.h | 13 ++-
2 files changed, 71 insertions(+), 65 deletions(-)
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index 33c95a74a51bd..e09bb0a63be35 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -1134,9 +1134,6 @@ static int read_relocs(struct elf *elf)
rsec->base->rsec = rsec;
- /* nr_alloc_relocs=0: libelf owns d_buf */
- rsec->nr_alloc_relocs = 0;
-
rsec->relocs = calloc(sec_num_entries(rsec), sizeof(*reloc));
if (!rsec->relocs) {
ERROR_GLIBC("calloc");
@@ -1395,7 +1392,7 @@ unsigned int elf_add_string(struct elf *elf, struct section *strtab, const char
void *elf_add_data(struct elf *elf, struct section *sec, const void *data, size_t size)
{
- unsigned long offset;
+ unsigned long offset, size_old, size_new, alloc_size_old, alloc_size_new;
Elf_Scn *s;
if (!sec->sh.sh_addralign) {
@@ -1409,30 +1406,55 @@ void *elf_add_data(struct elf *elf, struct section *sec, const void *data, size_
return NULL;
}
- sec->data = elf_newdata(s);
if (!sec->data) {
- ERROR_ELF("elf_newdata");
- return NULL;
+ sec->data = elf_newdata(s);
+ if (!sec->data) {
+ ERROR_ELF("elf_newdata");
+ return NULL;
+ }
+
+ sec->data->d_align = sec->sh.sh_addralign;
}
- sec->data->d_buf = calloc(1, size);
- if (!sec->data->d_buf) {
- ERROR_GLIBC("calloc");
- return NULL;
+ size_old = sec->data->d_size;
+ offset = ALIGN(size_old, sec->sh.sh_addralign);
+ size_new = offset + size;
+
+ if (!sec->data_overallocated)
+ alloc_size_old = size_old;
+ else
+ alloc_size_old = max(64UL, roundup_pow_of_two(size_old ? : 1));
+
+ alloc_size_new = max(64UL, roundup_pow_of_two(size_new ? : 1));
+
+ if (alloc_size_new > alloc_size_old) {
+ void *orig_buf = sec->data->d_buf;
+
+ sec->data->d_buf = calloc(1, alloc_size_new);
+ if (!sec->data->d_buf) {
+ ERROR_GLIBC("calloc");
+ return NULL;
+ }
+
+ if (size_old)
+ memcpy(sec->data->d_buf, orig_buf, size_old);
+
+ if (orig_buf && sec->data_owned)
+ free(orig_buf);
+
+ sec->data_owned = 1;
+ sec->data_overallocated = 1;
}
if (data)
- memcpy(sec->data->d_buf, data, size);
-
- sec->data->d_size = size;
- sec->data->d_align = sec->sh.sh_addralign;
-
- offset = ALIGN(sec_size(sec), sec->sh.sh_addralign);
- sec->sh.sh_size = offset + size;
+ memcpy(sec->data->d_buf + offset, data, size);
+ else
+ memset(sec->data->d_buf + offset, 0, size);
+ sec->data->d_size = size_new;
+ sec->sh.sh_size = size_new;
mark_sec_changed(elf, sec, true);
-
- return sec->data->d_buf;
+ return sec->data->d_buf + offset;
}
struct section *elf_create_section(struct elf *elf, const char *name,
@@ -1483,6 +1505,8 @@ struct section *elf_create_section(struct elf *elf, const char *name,
ERROR_GLIBC("calloc");
return NULL;
}
+
+ sec->data_owned = 1;
}
if (!gelf_getshdr(s, &sec->sh)) {
@@ -1533,60 +1557,33 @@ static int elf_alloc_reloc(struct elf *elf, struct section *rsec)
struct reloc *old_relocs, *old_relocs_end, *new_relocs;
unsigned int nr_relocs_old = sec_num_entries(rsec);
unsigned int nr_relocs_new = nr_relocs_old + 1;
- unsigned long nr_alloc;
+ unsigned long nr_alloc_old = 0, nr_alloc_new;
struct symbol *sym;
- if (!rsec->data) {
- rsec->data = elf_newdata(elf_getscn(elf->elf, rsec->idx));
- if (!rsec->data) {
- ERROR_ELF("elf_newdata");
- return -1;
- }
+ if (!elf_add_data(elf, rsec, NULL, elf_rela_size(elf)))
+ return -1;
- rsec->data->d_align = 1;
- rsec->data->d_type = ELF_T_RELA;
- rsec->data->d_buf = NULL;
- }
+ rsec->data->d_type = ELF_T_RELA;
- rsec->data->d_size = nr_relocs_new * elf_rela_size(elf);
- rsec->sh.sh_size = rsec->data->d_size;
+ if (rsec->relocs_overallocated)
+ nr_alloc_old = max(64UL, roundup_pow_of_two(nr_relocs_old ? : 1));
+ else
+ nr_alloc_old = nr_relocs_old;
- nr_alloc = max(64UL, roundup_pow_of_two(nr_relocs_new));
- if (nr_alloc <= rsec->nr_alloc_relocs)
+ nr_alloc_new = max(64UL, roundup_pow_of_two(nr_relocs_new ? : 1));
+
+ if (nr_alloc_old == nr_alloc_new)
return 0;
- if (rsec->data->d_buf && !rsec->nr_alloc_relocs) {
- void *orig_buf = rsec->data->d_buf;
-
- /*
- * The original d_buf is owned by libelf so it can't be
- * realloced.
- */
- rsec->data->d_buf = malloc(nr_alloc * elf_rela_size(elf));
- if (!rsec->data->d_buf) {
- ERROR_GLIBC("malloc");
- return -1;
- }
- memcpy(rsec->data->d_buf, orig_buf,
- nr_relocs_old * elf_rela_size(elf));
- } else {
- rsec->data->d_buf = realloc(rsec->data->d_buf,
- nr_alloc * elf_rela_size(elf));
- if (!rsec->data->d_buf) {
- ERROR_GLIBC("realloc");
- return -1;
- }
- }
-
- rsec->nr_alloc_relocs = nr_alloc;
-
- old_relocs = rsec->relocs;
- new_relocs = calloc(nr_alloc, sizeof(struct reloc));
+ new_relocs = calloc(nr_alloc_new, sizeof(struct reloc));
if (!new_relocs) {
ERROR_GLIBC("calloc");
return -1;
}
+ rsec->relocs_overallocated = 1;
+
+ old_relocs = rsec->relocs;
if (!old_relocs)
goto done;
@@ -1631,6 +1628,7 @@ static int elf_alloc_reloc(struct elf *elf, struct section *rsec)
}
free(old_relocs);
+
done:
rsec->relocs = new_relocs;
return 0;
@@ -1660,7 +1658,6 @@ struct section *elf_create_rela_section(struct elf *elf, struct section *sec,
if (nr_relocs) {
rsec->data->d_type = ELF_T_RELA;
- rsec->nr_alloc_relocs = nr_relocs;
rsec->relocs = calloc(nr_relocs, sizeof(struct reloc));
if (!rsec->relocs) {
ERROR_GLIBC("calloc");
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index d9c44df9cc76a..0801fcad516bb 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -58,9 +58,18 @@ struct section {
Elf_Data *data;
const char *name;
int idx;
- bool _changed, text, rodata, noinstr, init, truncate;
+ u32 _changed : 1,
+ text : 1,
+ rodata : 1,
+ noinstr : 1,
+ init : 1,
+ truncate : 1,
+ data_owned : 1,
+ data_overallocated : 1,
+ relocs_overallocated : 1;
+ /* 23 bit hole */
+
struct reloc *relocs;
- unsigned long nr_alloc_relocs;
struct section *twin;
};
--
2.53.0
^ permalink raw reply related	[flat|nested] 46+ messages in thread
* [PATCH v3 13/21] objtool: Reuse string references
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (7 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 12/21] objtool: Refactor elf_add_data() to use a growable data buffer Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 14/21] objtool: Prevent kCFI hashes from being decoded as instructions Josh Poimboeuf
` (12 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
elf_add_string() blindly appends strings without checking whether an
identical string already exists in the table.
That can be a problem for arm64 which often uses two consecutive
instructions (and corresponding relocations) to put an address into a
register, like:
d8: 90000001 adrp x1, 0 <meminfo_proc_show> d8: R_AARCH64_ADR_PREL_PG_HI21 .rodata.meminfo_proc_show.str1.8
dc: 91000021 add x1, x1, #0x0 dc: R_AARCH64_ADD_ABS_LO12_NC .rodata.meminfo_proc_show.str1.8
Referencing two different addresses in the ADRP+ADD pair would corrupt
the memory access. Avoid that by detecting and reusing duplicates when
cloning string relocs.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/elf.c | 29 +++++++++++++++++++++++------
tools/objtool/include/objtool/elf.h | 3 ++-
tools/objtool/klp-diff.c | 4 +++-
3 files changed, 28 insertions(+), 8 deletions(-)
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index e09bb0a63be35..065ccfeb98288 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -1366,9 +1366,27 @@ struct elf *elf_create_file(GElf_Ehdr *ehdr, const char *name)
return elf;
}
-unsigned int elf_add_string(struct elf *elf, struct section *strtab, const char *str)
+int elf_find_string(struct elf *elf, struct section *strtab, const char *str)
{
- unsigned int offset;
+ char *d_buf;
+ int i;
+
+ if (!strtab->data)
+ return -1;
+
+ d_buf = strtab->data->d_buf;
+
+ for (i = 0; i < strtab->data->d_size; i += strlen(d_buf + i) + 1) {
+ if (!strcmp(d_buf + i, str))
+ return i;
+ }
+
+ return -1;
+}
+
+int elf_add_string(struct elf *elf, struct section *strtab, const char *str)
+{
+ void *data;
if (!strtab)
strtab = find_section_by_name(elf, ".strtab");
@@ -1382,12 +1400,11 @@ unsigned int elf_add_string(struct elf *elf, struct section *strtab, const char
return -1;
}
- offset = ALIGN(sec_size(strtab), strtab->sh.sh_addralign);
-
- if (!elf_add_data(elf, strtab, str, strlen(str) + 1))
+ data = elf_add_data(elf, strtab, str, strlen(str) + 1);
+ if (!data)
return -1;
- return offset;
+ return data - strtab->data->d_buf;
}
void *elf_add_data(struct elf *elf, struct section *sec, const void *data, size_t size)
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index 0801fcad516bb..d895023674673 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -187,7 +187,8 @@ struct symbol *elf_create_section_symbol(struct elf *elf, struct section *sec);
void *elf_add_data(struct elf *elf, struct section *sec, const void *data,
size_t size);
-unsigned int elf_add_string(struct elf *elf, struct section *strtab, const char *str);
+int elf_find_string(struct elf *elf, struct section *strtab, const char *str);
+int elf_add_string(struct elf *elf, struct section *strtab, const char *str);
struct reloc *elf_create_reloc(struct elf *elf, struct section *sec,
unsigned long offset, struct symbol *sym,
diff --git a/tools/objtool/klp-diff.c b/tools/objtool/klp-diff.c
index 6a1cec57dc6a3..6957292e455e4 100644
--- a/tools/objtool/klp-diff.c
+++ b/tools/objtool/klp-diff.c
@@ -1509,7 +1509,9 @@ static int clone_reloc(struct elfs *e, struct reloc *patched_reloc,
__dbg_clone("\"%s\"", escape_str(str));
- addend = elf_add_string(e->out, out_sym->sec, str);
+ addend = elf_find_string(e->out, out_sym->sec, str);
+ if (addend == -1)
+ addend = elf_add_string(e->out, out_sym->sec, str);
if (addend == -1)
return -1;
}
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 14/21] objtool: Prevent kCFI hashes from being decoded as instructions
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (8 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 13/21] objtool: Reuse string references Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 15/21] objtool/klp: Add arm64 support for prefix/PFE detection Josh Poimboeuf
` (11 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
On arm64 with CONFIG_CFI=y, Clang places a 4-byte kCFI type hash
immediately before each address-taken function entry. Since these
hashes are in the text section, objtool tries to decode them, leading to
unpredictable results (e.g., "unannotated intra-function call").
arm64 uses mapping symbols to annotate where code ends and data begins
(and vice versa). Use those to mark such "instructions" as NOP so
objtool will ignore them.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/check.c | 15 +++++++++++++++
tools/objtool/include/objtool/elf.h | 3 +++
2 files changed, 18 insertions(+)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index e05dc7a93dc1e..2b03a2d6fc952 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -25,6 +25,7 @@
#include <linux/kernel.h>
#include <linux/static_call_types.h>
#include <linux/string.h>
+#include <linux/kconfig.h>
static unsigned long nr_cfi, nr_cfi_reused, nr_cfi_cache;
@@ -428,6 +429,8 @@ static int decode_instructions(struct objtool_file *file)
for_each_sec(file->elf, sec) {
struct instruction *insns = NULL;
+ struct symbol *map_sym;
+ bool is_data = false;
u8 prev_len = 0;
u8 idx = 0;
@@ -454,6 +457,8 @@ static int decode_instructions(struct objtool_file *file)
if (!strcmp(sec->name, ".init.text") && !opts.module)
sec->init = true;
+ map_sym = list_first_entry(&sec->symbol_list, struct symbol, list);
+
for (offset = 0; offset < sec_size(sec); offset += insn->len) {
if (!insns || idx == INSN_CHUNK_MAX) {
insns = calloc(INSN_CHUNK_SIZE, sizeof(*insn));
@@ -478,6 +483,16 @@ static int decode_instructions(struct objtool_file *file)
prev_len = insn->len;
+ /* Use mapping symbols to skip data in text sections */
+ sec_for_each_sym_from(sec, map_sym) {
+ if (map_sym->offset > offset)
+ break;
+ if (is_mapping_sym(map_sym))
+ is_data = is_data_mapping_sym(map_sym);
+ }
+ if (is_data)
+ insn->type = INSN_NOP;
+
/*
* By default, "ud2" is a dead end unless otherwise
* annotated, because GCC 7 inserts it for certain
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index d895023674673..9d36b14f420e2 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -507,6 +507,9 @@ static inline void set_sym_next_reloc(struct reloc *reloc, struct reloc *next)
#define sec_for_each_sym(sec, sym) \
list_for_each_entry(sym, &sec->symbol_list, list)
+#define sec_for_each_sym_from(sec, sym) \
+ list_for_each_entry_from(sym, &sec->symbol_list, list)
+
#define sec_prev_sym(sym) \
sym->sec && sym->list.prev != &sym->sec->symbol_list ? \
list_prev_entry(sym, list) : NULL
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 15/21] objtool/klp: Add arm64 support for prefix/PFE detection
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (9 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 14/21] objtool: Prevent kCFI hashes from being decoded as instructions Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 16/21] objtool/klp: Filter arm64 mapping symbols in find_symbol_by_offset() Josh Poimboeuf
` (10 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Add arm64 support for detecting prefixed areas before functions (for
kCFI or ftrace with call ops), and __patchable_function_entries (for
ftrace with call ops or args).
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/arch/x86/include/arch/elf.h | 2 +
tools/objtool/elf.c | 13 ++
tools/objtool/include/objtool/elf.h | 22 +++
tools/objtool/klp-diff.c | 166 ++++++++++++++++++++--
4 files changed, 192 insertions(+), 11 deletions(-)
diff --git a/tools/objtool/arch/x86/include/arch/elf.h b/tools/objtool/arch/x86/include/arch/elf.h
index 7131f7f51a4e8..5ee0ccda7db18 100644
--- a/tools/objtool/arch/x86/include/arch/elf.h
+++ b/tools/objtool/arch/x86/include/arch/elf.h
@@ -10,4 +10,6 @@
#define R_TEXT32 R_X86_64_PC32
#define R_TEXT64 R_X86_64_PC32
+#define ARCH_HAS_PREFIX_SYMBOLS 1
+
#endif /* _OBJTOOL_ARCH_ELF */
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index 065ccfeb98288..9d5a926934dc2 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -274,6 +274,19 @@ struct symbol *find_func_containing(struct section *sec, unsigned long offset)
return NULL;
}
+struct symbol *find_data_mapping_sym(struct section *sec, unsigned long offset)
+{
+ struct rb_root_cached *tree = (struct rb_root_cached *)&sec->symbol_tree;
+ struct symbol *sym;
+
+ __sym_for_each(sym, tree, offset, offset) {
+ if (sym->offset == offset && is_data_mapping_sym(sym))
+ return sym;
+ }
+
+ return NULL;
+}
+
struct symbol *find_symbol_by_name(const struct elf *elf, const char *name)
{
struct symbol *sym;
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index 9d36b14f420e2..ab1d53ed23189 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -130,6 +130,7 @@ struct elf {
struct list_head sections;
struct list_head symbols;
unsigned long num_relocs;
+ int pfe_offset;
int symbol_bits;
int symbol_name_bits;
@@ -229,6 +230,7 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns
struct reloc *find_reloc_by_dest_range(const struct elf *elf, struct section *sec,
unsigned long offset, unsigned int len);
struct symbol *find_func_containing(struct section *sec, unsigned long offset);
+struct symbol *find_data_mapping_sym(struct section *sec, unsigned long offset);
/*
* Try to see if it's a whole archive (vmlinux.o or module).
@@ -295,6 +297,26 @@ static inline bool is_notype_sym(struct symbol *sym)
return sym->type == STT_NOTYPE;
}
+/*
+ * ARM64 mapping symbols ($d, $x, $a, __pi_$d, etc) which mark transitions
+ * between code and data.
+ */
+static inline bool is_mapping_sym(struct symbol *sym)
+{
+ return is_notype_sym(sym) && strchr(sym->name, '$');
+}
+
+static inline bool is_data_mapping_sym(struct symbol *sym)
+{
+ const char *dollar;
+
+ if (!is_mapping_sym(sym))
+ return false;
+
+ dollar = strchr(sym->name, '$');
+ return dollar && dollar[1] == 'd';
+}
+
static inline bool is_global_sym(struct symbol *sym)
{
return sym->bind == STB_GLOBAL;
diff --git a/tools/objtool/klp-diff.c b/tools/objtool/klp-diff.c
index 6957292e455e4..eb21f3bf3120b 100644
--- a/tools/objtool/klp-diff.c
+++ b/tools/objtool/klp-diff.c
@@ -20,6 +20,7 @@
#include <linux/stringify.h>
#include <linux/string.h>
#include <linux/jhash.h>
+#include <linux/kconfig.h>
#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))
@@ -213,6 +214,98 @@ static int read_sym_checksums(struct elf *elf)
return 0;
}
+/*
+ * For non-x86, detect the offset from the function entry point to its
+ * __patchable_function_entries (PFE) relocation target. x86 doesn't need this,
+ * it clones the __cfi/__pfx symbol instead.
+ *
+ * offset < 0 (before function entry):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS (arm64)
+ *
+ * offset == 0 (at function entry):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_ARGS without BTI (arm64)
+ *
+ * offset > 0 (after function entry):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_ARGS with BTI (arm64)
+ */
+static int read_pfe_offset(struct elf *elf)
+{
+ bool has_pfe = false;
+ struct section *sec;
+
+ if (__is_defined(ARCH_HAS_PREFIX_SYMBOLS))
+ return 0;
+
+ for_each_sec(elf, sec) {
+ struct reloc *reloc;
+
+ if (strcmp(sec->name, "__patchable_function_entries"))
+ continue;
+ if (!sec->rsec)
+ continue;
+
+ has_pfe = true;
+
+ for_each_reloc(sec->rsec, reloc) {
+ unsigned long target = reloc->sym->offset + reloc_addend(reloc);
+ struct symbol *func;
+
+ /* arm64 func */
+ func = find_func_containing(reloc->sym->sec, target);
+ if (func) {
+ elf->pfe_offset = target - func->offset;
+ return 0;
+ }
+
+ /* arm64 CALL_OPS */
+ func = find_func_by_offset(reloc->sym->sec, target + 8);
+ if (func) {
+ elf->pfe_offset = -8;
+ return 0;
+ }
+ }
+ }
+
+ if (has_pfe) {
+ ERROR("can't find __patchable_function_entries offset");
+ return -1;
+ }
+
+ return 0;
+}
+
+/*
+ * Detect the size of the area before a function's entry point. This prefix
+ * area is used for CFI type hashes or ftrace call ops.
+ *
+ * $d mapping symbol (arm64):
+ *
+ * CONFIG_CFI
+ *
+ * PFE before function entry, no symbol (arm64):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
+ */
+static unsigned long func_pfx_size(struct elf *elf, struct symbol *func)
+{
+ if (__is_defined(ARCH_HAS_PREFIX_SYMBOLS))
+ return 0;
+
+ /* arm64 kCFI $d data mapping symbol */
+ if (func->offset >= 4 &&
+ find_data_mapping_sym(func->sec, func->offset - 4))
+ return 4;
+
+ /* arm64 CALL_OPS (mutually exclusive with kCFI) */
+ if (elf->pfe_offset < 0 && func->offset >= -elf->pfe_offset)
+ return -elf->pfe_offset;
+
+ return 0;
+}
+
static struct symbol *first_file_symbol(struct elf *elf)
{
struct symbol *sym;
@@ -302,6 +395,9 @@ static bool is_special_section(struct section *sec)
"__ex_table",
"__jump_table",
"__mcount_loc",
+#ifndef ARCH_HAS_PREFIX_SYMBOLS
+ "__patchable_function_entries",
+#endif
/*
* Extract .static_call_sites here to inherit non-module
@@ -931,7 +1027,7 @@ static struct symbol *__clone_symbol(struct elf *elf, struct symbol *patched_sym
bool data_too)
{
struct section *out_sec = NULL;
- unsigned long offset = 0;
+ unsigned long offset = 0, pfx_size = 0;
struct symbol *out_sym;
if (data_too && !is_undef_sym(patched_sym)) {
@@ -963,17 +1059,22 @@ static struct symbol *__clone_symbol(struct elf *elf, struct symbol *patched_sym
void *data = NULL;
size_t size;
+ /* Clone (non-x86) function prefix area */
+ pfx_size = is_func_sym(patched_sym) ? func_pfx_size(elf, patched_sym) : 0;
+
/* bss doesn't have data */
if (patched_sym->sec->data && patched_sym->sec->data->d_buf)
- data = patched_sym->sec->data->d_buf + patched_sym->offset;
+ data = patched_sym->sec->data->d_buf + patched_sym->offset - pfx_size;
if (is_sec_sym(patched_sym))
size = sec_size(patched_sym->sec);
else
- size = patched_sym->len;
+ size = patched_sym->len + pfx_size;
if (!elf_add_data(elf, out_sec, data, size))
return NULL;
+
+ offset += pfx_size;
}
}
@@ -1251,21 +1352,41 @@ static int convert_reloc_sym_to_secsym(struct elf *elf, struct reloc *reloc)
return 0;
}
+/*
+ * __patchable_function_entries relocs point to the patchable entry NOPs,
+ * which are at 'pfe_offset' bytes from the function symbol.
+ *
+ * Some entries (e.g., removed weak functions, syscall -ENOSYS stubs) don't
+ * have a corresponding function symbol. Skip those with a return value of 1.
+ */
+static int convert_pfe_reloc(struct elf *elf, struct reloc *reloc)
+{
+ struct symbol *func;
+
+ func = find_func_by_offset(reloc->sym->sec,
+ reloc->sym->offset +
+ reloc_addend(reloc) - elf->pfe_offset);
+ if (!func)
+ return 1;
+
+ reloc->sym = func;
+ set_reloc_sym(elf, reloc, func->idx);
+ set_reloc_addend(elf, reloc, elf->pfe_offset);
+ return 0;
+}
+
/* Return -1 error, 0 success, 1 skip */
static int convert_reloc_secsym_to_sym(struct elf *elf, struct reloc *reloc)
{
struct symbol *sym = reloc->sym;
struct section *sec = sym->sec;
+ if (!strcmp(reloc->sec->name, ".rela__patchable_function_entries"))
+ return convert_pfe_reloc(elf, reloc);
+
if (!is_sec_sym(sym))
return 0;
- /* If the symbol has a dedicated section, it's easy to find */
- sym = find_symbol_by_offset(sec, 0);
- if (sym && sym->len == sec_size(sec))
- goto found_sym;
-
- /* No dedicated section; find the symbol manually */
sym = find_symbol_containing_inclusive(sec, arch_adjusted_addend(reloc));
if (!sym) {
/*
@@ -1293,7 +1414,6 @@ static int convert_reloc_secsym_to_sym(struct elf *elf, struct reloc *reloc)
return -1;
}
-found_sym:
reloc->sym = sym;
set_reloc_sym(elf, reloc, sym->idx);
set_reloc_addend(elf, reloc, reloc_addend(reloc) - sym->offset);
@@ -1856,6 +1976,9 @@ static int validate_special_section_klp_reloc(struct elfs *e, struct symbol *sym
static int clone_special_section(struct elfs *e, struct section *patched_sec)
{
+ bool is_pfe = !strcmp(patched_sec->name, "__patchable_function_entries");
+ struct section *out_sec = NULL;
+ struct reloc *patched_reloc;
struct symbol *patched_sym;
/*
@@ -1863,6 +1986,7 @@ static int clone_special_section(struct elfs *e, struct section *patched_sec)
* reference included functions.
*/
sec_for_each_sym(patched_sec, patched_sym) {
+ struct symbol *out_sym;
int ret;
if (!is_object_sym(patched_sym))
@@ -1877,8 +2001,23 @@ static int clone_special_section(struct elfs *e, struct section *patched_sec)
if (ret > 0)
continue;
- if (!clone_symbol(e, patched_sym, true))
+ out_sym = clone_symbol(e, patched_sym, true);
+ if (!out_sym)
return -1;
+
+ if (!is_pfe || (out_sec && out_sec->sh.sh_link))
+ continue;
+
+ /*
+ * For reasons, the patched object has multiple PFE sections,
+ * but we only need to create one combined section for the
+ * output. Link the single PFE output section to a random text
+ * section to satisfy the linker for SHF_LINK_ORDER.
+ */
+ out_sec = out_sym->sec;
+ patched_reloc = find_reloc_by_dest(e->patched, patched_sec,
+ patched_sym->offset);
+ out_sec->sh.sh_link = patched_reloc->sym->clone->sec->idx;
}
return 0;
@@ -2175,6 +2314,9 @@ int cmd_klp_diff(int argc, const char **argv)
if (read_sym_checksums(e.patched))
return -1;
+ if (read_pfe_offset(e.patched))
+ return -1;
+
if (correlate_symbols(&e))
return -1;
@@ -2188,6 +2330,8 @@ int cmd_klp_diff(int argc, const char **argv)
if (!e.out)
return -1;
+ e.out->pfe_offset = e.patched->pfe_offset;
+
/*
* Special section fake symbols are needed so that individual special
* section entries can be extracted by clone_special_sections().
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread* [PATCH v3 15/21] objtool/klp: Add arm64 support for prefix/PFE detection
2026-05-13 3:33 ` [PATCH v3 15/21] objtool/klp: Add arm64 support for prefix/PFE detection Josh Poimboeuf
@ 2026-05-13 3:34 ` Josh Poimboeuf
0 siblings, 0 replies; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:34 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Add arm64 support for detecting prefixed areas before functions (for
kCFI or ftrace with call ops), and __patchable_function_entries (for
ftrace with call ops or args).
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/arch/x86/include/arch/elf.h | 2 +
tools/objtool/elf.c | 13 ++
tools/objtool/include/objtool/elf.h | 22 +++
tools/objtool/klp-diff.c | 166 ++++++++++++++++++++--
4 files changed, 192 insertions(+), 11 deletions(-)
diff --git a/tools/objtool/arch/x86/include/arch/elf.h b/tools/objtool/arch/x86/include/arch/elf.h
index 7131f7f51a4e8..5ee0ccda7db18 100644
--- a/tools/objtool/arch/x86/include/arch/elf.h
+++ b/tools/objtool/arch/x86/include/arch/elf.h
@@ -10,4 +10,6 @@
#define R_TEXT32 R_X86_64_PC32
#define R_TEXT64 R_X86_64_PC32
+#define ARCH_HAS_PREFIX_SYMBOLS 1
+
#endif /* _OBJTOOL_ARCH_ELF */
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index 065ccfeb98288..9d5a926934dc2 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -274,6 +274,19 @@ struct symbol *find_func_containing(struct section *sec, unsigned long offset)
return NULL;
}
+struct symbol *find_data_mapping_sym(struct section *sec, unsigned long offset)
+{
+ struct rb_root_cached *tree = (struct rb_root_cached *)&sec->symbol_tree;
+ struct symbol *sym;
+
+ __sym_for_each(sym, tree, offset, offset) {
+ if (sym->offset == offset && is_data_mapping_sym(sym))
+ return sym;
+ }
+
+ return NULL;
+}
+
struct symbol *find_symbol_by_name(const struct elf *elf, const char *name)
{
struct symbol *sym;
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index 9d36b14f420e2..ab1d53ed23189 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -130,6 +130,7 @@ struct elf {
struct list_head sections;
struct list_head symbols;
unsigned long num_relocs;
+ int pfe_offset;
int symbol_bits;
int symbol_name_bits;
@@ -229,6 +230,7 @@ struct reloc *find_reloc_by_dest(const struct elf *elf, struct section *sec, uns
struct reloc *find_reloc_by_dest_range(const struct elf *elf, struct section *sec,
unsigned long offset, unsigned int len);
struct symbol *find_func_containing(struct section *sec, unsigned long offset);
+struct symbol *find_data_mapping_sym(struct section *sec, unsigned long offset);
/*
* Try to see if it's a whole archive (vmlinux.o or module).
@@ -295,6 +297,26 @@ static inline bool is_notype_sym(struct symbol *sym)
return sym->type == STT_NOTYPE;
}
+/*
+ * ARM64 mapping symbols ($d, $x, $a, __pi_$d, etc) which mark transitions
+ * between code and data.
+ */
+static inline bool is_mapping_sym(struct symbol *sym)
+{
+ return is_notype_sym(sym) && strchr(sym->name, '$');
+}
+
+static inline bool is_data_mapping_sym(struct symbol *sym)
+{
+ const char *dollar;
+
+ if (!is_mapping_sym(sym))
+ return false;
+
+ dollar = strchr(sym->name, '$');
+ return dollar && dollar[1] == 'd';
+}
+
static inline bool is_global_sym(struct symbol *sym)
{
return sym->bind == STB_GLOBAL;
diff --git a/tools/objtool/klp-diff.c b/tools/objtool/klp-diff.c
index 6957292e455e4..eb21f3bf3120b 100644
--- a/tools/objtool/klp-diff.c
+++ b/tools/objtool/klp-diff.c
@@ -20,6 +20,7 @@
#include <linux/stringify.h>
#include <linux/string.h>
#include <linux/jhash.h>
+#include <linux/kconfig.h>
#define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))
@@ -213,6 +214,98 @@ static int read_sym_checksums(struct elf *elf)
return 0;
}
+/*
+ * For non-x86, detect the offset from the function entry point to its
+ * __patchable_function_entries (PFE) relocation target. x86 doesn't need this;
+ * it clones the __cfi/__pfx symbol instead.
+ *
+ * offset < 0 (before function entry):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS (arm64)
+ *
+ * offset == 0 (at function entry):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_ARGS without BTI (arm64)
+ *
+ * offset > 0 (after function entry):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_ARGS with BTI (arm64)
+ */
+static int read_pfe_offset(struct elf *elf)
+{
+ bool has_pfe = false;
+ struct section *sec;
+
+ if (__is_defined(ARCH_HAS_PREFIX_SYMBOLS))
+ return 0;
+
+ for_each_sec(elf, sec) {
+ struct reloc *reloc;
+
+ if (strcmp(sec->name, "__patchable_function_entries"))
+ continue;
+ if (!sec->rsec)
+ continue;
+
+ has_pfe = true;
+
+ for_each_reloc(sec->rsec, reloc) {
+ unsigned long target = reloc->sym->offset + reloc_addend(reloc);
+ struct symbol *func;
+
+ /* arm64 func */
+ func = find_func_containing(reloc->sym->sec, target);
+ if (func) {
+ elf->pfe_offset = target - func->offset;
+ return 0;
+ }
+
+ /* arm64 CALL_OPS */
+ func = find_func_by_offset(reloc->sym->sec, target + 8);
+ if (func) {
+ elf->pfe_offset = -8;
+ return 0;
+ }
+ }
+ }
+
+ if (has_pfe) {
+ ERROR("can't find __patchable_function_entries offset");
+ return -1;
+ }
+
+ return 0;
+}
+
+/*
+ * Detect the size of the area before a function's entry point. This prefix
+ * area is used for CFI type hashes or ftrace call ops.
+ *
+ * $d mapping symbol (arm64):
+ *
+ * CONFIG_CFI
+ *
+ * PFE before function entry, no symbol (arm64):
+ *
+ * CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
+ */
+static unsigned long func_pfx_size(struct elf *elf, struct symbol *func)
+{
+ if (__is_defined(ARCH_HAS_PREFIX_SYMBOLS))
+ return 0;
+
+ /* arm64 kCFI $d data mapping symbol */
+ if (func->offset >= 4 &&
+ find_data_mapping_sym(func->sec, func->offset - 4))
+ return 4;
+
+ /* arm64 CALL_OPS (mutually exclusive with kCFI) */
+ if (elf->pfe_offset < 0 && func->offset >= -elf->pfe_offset)
+ return -elf->pfe_offset;
+
+ return 0;
+}
+
static struct symbol *first_file_symbol(struct elf *elf)
{
struct symbol *sym;
@@ -302,6 +395,9 @@ static bool is_special_section(struct section *sec)
"__ex_table",
"__jump_table",
"__mcount_loc",
+#ifndef ARCH_HAS_PREFIX_SYMBOLS
+ "__patchable_function_entries",
+#endif
/*
* Extract .static_call_sites here to inherit non-module
@@ -931,7 +1027,7 @@ static struct symbol *__clone_symbol(struct elf *elf, struct symbol *patched_sym
bool data_too)
{
struct section *out_sec = NULL;
- unsigned long offset = 0;
+ unsigned long offset = 0, pfx_size = 0;
struct symbol *out_sym;
if (data_too && !is_undef_sym(patched_sym)) {
@@ -963,17 +1059,22 @@ static struct symbol *__clone_symbol(struct elf *elf, struct symbol *patched_sym
void *data = NULL;
size_t size;
+ /* Clone (non-x86) function prefix area */
+ pfx_size = is_func_sym(patched_sym) ? func_pfx_size(elf, patched_sym) : 0;
+
/* bss doesn't have data */
if (patched_sym->sec->data && patched_sym->sec->data->d_buf)
- data = patched_sym->sec->data->d_buf + patched_sym->offset;
+ data = patched_sym->sec->data->d_buf + patched_sym->offset - pfx_size;
if (is_sec_sym(patched_sym))
size = sec_size(patched_sym->sec);
else
- size = patched_sym->len;
+ size = patched_sym->len + pfx_size;
if (!elf_add_data(elf, out_sec, data, size))
return NULL;
+
+ offset += pfx_size;
}
}
@@ -1251,21 +1352,41 @@ static int convert_reloc_sym_to_secsym(struct elf *elf, struct reloc *reloc)
return 0;
}
+/*
+ * __patchable_function_entries relocs point to the patchable entry NOPs,
+ * which are at 'pfe_offset' bytes from the function symbol.
+ *
+ * Some entries (e.g., removed weak functions, syscall -ENOSYS stubs) don't
+ * have a corresponding function symbol. Skip those with a return value of 1.
+ */
+static int convert_pfe_reloc(struct elf *elf, struct reloc *reloc)
+{
+ struct symbol *func;
+
+ func = find_func_by_offset(reloc->sym->sec,
+ reloc->sym->offset +
+ reloc_addend(reloc) - elf->pfe_offset);
+ if (!func)
+ return 1;
+
+ reloc->sym = func;
+ set_reloc_sym(elf, reloc, func->idx);
+ set_reloc_addend(elf, reloc, elf->pfe_offset);
+ return 0;
+}
+
/* Return -1 error, 0 success, 1 skip */
static int convert_reloc_secsym_to_sym(struct elf *elf, struct reloc *reloc)
{
struct symbol *sym = reloc->sym;
struct section *sec = sym->sec;
+ if (!strcmp(reloc->sec->name, ".rela__patchable_function_entries"))
+ return convert_pfe_reloc(elf, reloc);
+
if (!is_sec_sym(sym))
return 0;
- /* If the symbol has a dedicated section, it's easy to find */
- sym = find_symbol_by_offset(sec, 0);
- if (sym && sym->len == sec_size(sec))
- goto found_sym;
-
- /* No dedicated section; find the symbol manually */
sym = find_symbol_containing_inclusive(sec, arch_adjusted_addend(reloc));
if (!sym) {
/*
@@ -1293,7 +1414,6 @@ static int convert_reloc_secsym_to_sym(struct elf *elf, struct reloc *reloc)
return -1;
}
-found_sym:
reloc->sym = sym;
set_reloc_sym(elf, reloc, sym->idx);
set_reloc_addend(elf, reloc, reloc_addend(reloc) - sym->offset);
@@ -1856,6 +1976,9 @@ static int validate_special_section_klp_reloc(struct elfs *e, struct symbol *sym
static int clone_special_section(struct elfs *e, struct section *patched_sec)
{
+ bool is_pfe = !strcmp(patched_sec->name, "__patchable_function_entries");
+ struct section *out_sec = NULL;
+ struct reloc *patched_reloc;
struct symbol *patched_sym;
/*
@@ -1863,6 +1986,7 @@ static int clone_special_section(struct elfs *e, struct section *patched_sec)
* reference included functions.
*/
sec_for_each_sym(patched_sec, patched_sym) {
+ struct symbol *out_sym;
int ret;
if (!is_object_sym(patched_sym))
@@ -1877,8 +2001,23 @@ static int clone_special_section(struct elfs *e, struct section *patched_sec)
if (ret > 0)
continue;
- if (!clone_symbol(e, patched_sym, true))
+ out_sym = clone_symbol(e, patched_sym, true);
+ if (!out_sym)
return -1;
+
+ if (!is_pfe || (out_sec && out_sec->sh.sh_link))
+ continue;
+
+ /*
+ * For reasons, the patched object has multiple PFE sections,
+ * but we only need to create one combined section for the
+ * output. Link the single PFE output section to a random text
+ * section to satisfy the linker for SHF_LINK_ORDER.
+ */
+ out_sec = out_sym->sec;
+ patched_reloc = find_reloc_by_dest(e->patched, patched_sec,
+ patched_sym->offset);
+ out_sec->sh.sh_link = patched_reloc->sym->clone->sec->idx;
}
return 0;
@@ -2175,6 +2314,9 @@ int cmd_klp_diff(int argc, const char **argv)
if (read_sym_checksums(e.patched))
return -1;
+ if (read_pfe_offset(e.patched))
+ return -1;
+
if (correlate_symbols(&e))
return -1;
@@ -2188,6 +2330,8 @@ int cmd_klp_diff(int argc, const char **argv)
if (!e.out)
return -1;
+ e.out->pfe_offset = e.patched->pfe_offset;
+
/*
* Special section fake symbols are needed so that individual special
* section entries can be extracted by clone_special_sections().
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 16/21] objtool/klp: Filter arm64 mapping symbols in find_symbol_by_offset()
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (10 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 15/21] objtool/klp: Add arm64 support for prefix/PFE detection Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 17/21] objtool/klp: Don't correlate arm64 mapping symbols Josh Poimboeuf
` (9 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
ARM64 ELF objects contain $d/$x mapping symbols (STT_NOTYPE) at offset 0
in data/text sections. These aren't "real" symbols, so filter them from
find_symbol_by_offset(), consistent with the existing section symbol
filter.
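As a rough illustration (a standalone sketch with simplified stand-in types, not the objtool code itself), the mapping-symbol test boils down to a name/type check:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define STT_NOTYPE 0  /* ELF symbol type for untyped symbols */

/* Simplified stand-in for objtool's struct symbol */
struct sym {
	int type;
	const char *name;
};

/*
 * An arm64 mapping symbol is an STT_NOTYPE symbol whose name
 * contains '$' ($d, $x, $a, or a prefixed form like __pi_$d).
 */
static bool is_mapping_sym(const struct sym *sym)
{
	return sym->type == STT_NOTYPE && strchr(sym->name, '$') != NULL;
}
```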
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/elf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index 9d5a926934dc2..a4d9afa3a079c 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -159,7 +159,7 @@ struct symbol *find_symbol_by_offset(struct section *sec, unsigned long offset)
struct symbol *sym;
__sym_for_each(sym, tree, offset, offset) {
- if (sym->offset == offset && !is_sec_sym(sym))
+ if (sym->offset == offset && !is_sec_sym(sym) && !is_mapping_sym(sym))
return sym->alias;
}
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 17/21] objtool/klp: Don't correlate arm64 mapping symbols
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (11 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 16/21] objtool/klp: Filter arm64 mapping symbols in find_symbol_by_offset() Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 18/21] objtool/klp: Clone inline alternative replacements Josh Poimboeuf
` (8 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
ARM64 ELF files contain mapping symbols ($d, $x, $a, etc.) which mark
transitions between code and data. There are thousands of them per
object file, all sharing the same few names.
They aren't "real" symbols, so there's no need to correlate them.
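To see why correlating them would be meaningless, consider a toy name-based lookup (invented structures, not objtool's actual correlation code): with thousands of identically named $d/$x symbols per object, a match by name can never identify a particular one.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy symbol-table entry: just a name and a section offset */
struct sym {
	const char *name;
	unsigned long offset;
};

/* Naive correlation by name: returns the first match, if any */
static const struct sym *find_by_name(const struct sym *tab, size_t n,
				      const char *name)
{
	for (size_t i = 0; i < n; i++)
		if (strcmp(tab[i].name, name) == 0)
			return &tab[i];
	return NULL;
}
```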
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/klp-diff.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/objtool/klp-diff.c b/tools/objtool/klp-diff.c
index eb21f3bf3120b..e1d4d94c9d77c 100644
--- a/tools/objtool/klp-diff.c
+++ b/tools/objtool/klp-diff.c
@@ -501,6 +501,7 @@ static bool dont_correlate(struct symbol *sym)
is_prefix_func(sym) ||
is_uncorrelated_static_local(sym) ||
is_local_label(sym) ||
+ is_mapping_sym(sym) ||
is_string_sec(sym->sec) ||
is_anonymous_rodata(sym) ||
is_initcall_sym(sym) ||
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 18/21] objtool/klp: Clone inline alternative replacements
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (12 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 17/21] objtool/klp: Don't correlate arm64 mapping symbols Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 19/21] objtool/klp: Introduce objtool for arm64 Josh Poimboeuf
` (7 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Unlike x86-64, arm64 places alternative replacement instructions in
.text, immediately after the affected function.
So if the replacement instructions have PC-relative branches without
relocations, their offsets relative to the function have to remain
constant.
Achieve that by cloning the function's alternative replacements
immediately after cloning the function itself.
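The idea can be sketched as follows (a simplified model, not the klp-diff implementation): copying the function and its trailing replacement block as one contiguous unit keeps the function-to-replacement distance constant, so PC-relative branches without relocations remain valid.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Copy a function plus the alternative-replacement block that
 * immediately follows it, as one contiguous unit.  Returns the
 * offset of the replacement block within the destination.
 */
static size_t clone_func_and_alts(const unsigned char *src, size_t func_off,
				  size_t func_len, size_t alt_len,
				  unsigned char *dst, size_t dst_off)
{
	memcpy(dst + dst_off, src + func_off, func_len + alt_len);
	return dst_off + func_len;
}
```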
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/elf.c | 9 +++--
tools/objtool/include/objtool/elf.h | 7 +++-
tools/objtool/klp-diff.c | 63 ++++++++++++++++++++++++-----
3 files changed, 65 insertions(+), 14 deletions(-)
diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index a4d9afa3a079c..a5b2929ea0fa9 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -1413,14 +1413,15 @@ int elf_add_string(struct elf *elf, struct section *strtab, const char *str)
return -1;
}
- data = elf_add_data(elf, strtab, str, strlen(str) + 1);
+ data = elf_add_data(elf, strtab, str, strlen(str) + 1, true);
if (!data)
return -1;
return data - strtab->data->d_buf;
}
-void *elf_add_data(struct elf *elf, struct section *sec, const void *data, size_t size)
+void *elf_add_data(struct elf *elf, struct section *sec, const void *data,
+ size_t size, bool align)
{
unsigned long offset, size_old, size_new, alloc_size_old, alloc_size_new;
Elf_Scn *s;
@@ -1447,7 +1448,7 @@ void *elf_add_data(struct elf *elf, struct section *sec, const void *data, size_
}
size_old = sec->data->d_size;
- offset = ALIGN(size_old, sec->sh.sh_addralign);
+ offset = ALIGN(size_old, align ? sec->sh.sh_addralign : 1);
size_new = offset + size;
if (!sec->data_overallocated)
@@ -1590,7 +1591,7 @@ static int elf_alloc_reloc(struct elf *elf, struct section *rsec)
unsigned long nr_alloc_old = 0, nr_alloc_new;
struct symbol *sym;
- if (!elf_add_data(elf, rsec, NULL, elf_rela_size(elf)))
+ if (!elf_add_data(elf, rsec, NULL, elf_rela_size(elf), true))
return -1;
rsec->data->d_type = ELF_T_RELA;
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index ab1d53ed23189..fba0a0e08f8b6 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -106,6 +106,8 @@ struct symbol {
u8 included : 1;
u8 klp : 1;
u8 dont_correlate : 1;
+ u8 fake : 1;
+ u8 unalign : 1;
struct list_head pv_target;
struct reloc *relocs;
struct section *group_sec;
@@ -186,7 +188,7 @@ struct symbol *elf_create_symbol(struct elf *elf, const char *name,
struct symbol *elf_create_section_symbol(struct elf *elf, struct section *sec);
void *elf_add_data(struct elf *elf, struct section *sec, const void *data,
- size_t size);
+ size_t size, bool align);
int elf_find_string(struct elf *elf, struct section *strtab, const char *str);
int elf_add_string(struct elf *elf, struct section *strtab, const char *str);
@@ -532,6 +534,9 @@ static inline void set_sym_next_reloc(struct reloc *reloc, struct reloc *next)
#define sec_for_each_sym_from(sec, sym) \
list_for_each_entry_from(sym, &sec->symbol_list, list)
+#define sec_for_each_sym_continue(sec, sym) \
+ list_for_each_entry_continue(sym, &sec->symbol_list, list)
+
#define sec_prev_sym(sym) \
sym->sec && sym->list.prev != &sym->sec->symbol_list ? \
list_prev_entry(sym, list) : NULL
diff --git a/tools/objtool/klp-diff.c b/tools/objtool/klp-diff.c
index e1d4d94c9d77c..b9624bd9439b9 100644
--- a/tools/objtool/klp-diff.c
+++ b/tools/objtool/klp-diff.c
@@ -1027,8 +1027,9 @@ static int clone_sym_relocs(struct elfs *e, struct symbol *patched_sym);
static struct symbol *__clone_symbol(struct elf *elf, struct symbol *patched_sym,
bool data_too)
{
- struct section *out_sec = NULL;
unsigned long offset = 0, pfx_size = 0;
+ bool align = !patched_sym->unalign;
+ struct section *out_sec = NULL;
struct symbol *out_sym;
if (data_too && !is_undef_sym(patched_sym)) {
@@ -1054,7 +1055,7 @@ static struct symbol *__clone_symbol(struct elf *elf, struct symbol *patched_sym
}
if (!is_sec_sym(patched_sym))
- offset = ALIGN(sec_size(out_sec), out_sec->sh.sh_addralign);
+ offset = ALIGN(sec_size(out_sec), align ? out_sec->sh.sh_addralign : 1);
if (patched_sym->len || is_sec_sym(patched_sym)) {
void *data = NULL;
@@ -1072,7 +1073,7 @@ static struct symbol *__clone_symbol(struct elf *elf, struct symbol *patched_sym
else
size = patched_sym->len + pfx_size;
- if (!elf_add_data(elf, out_sec, data, size))
+ if (!elf_add_data(elf, out_sec, data, size, align))
return NULL;
offset += pfx_size;
@@ -1114,6 +1115,37 @@ static const char *sym_bind(struct symbol *sym)
}
}
+static struct symbol *clone_symbol(struct elfs *e, struct symbol *patched_sym,
+ bool data_too);
+
+/*
+ * For arm64 alternatives, the replacement instructions come immediately after
+ * the function. Clone any such blocks of instructions in place to preserve
+ * their offsets relative to the function in case they have hard-coded
+ * PC-relative branches.
+ */
+static int clone_inline_alternatives(struct elfs *e, struct symbol *patched_sym)
+{
+ struct symbol *next;
+
+ if (!__is_defined(ARCH_HAS_INLINE_ALTS) || !is_func_sym(patched_sym))
+ return 0;
+
+ next = patched_sym;
+ sec_for_each_sym_continue(patched_sym->sec, next) {
+ if (next->offset < (patched_sym->offset + patched_sym->len) ||
+ is_mapping_sym(next))
+ continue;
+ if (!next->fake)
+ break;
+ next->unalign = 1;
+ if (!clone_symbol(e, next, true))
+ return -1;
+ }
+
+ return 0;
+}
+
/*
* Copy a symbol to the output object, optionally including its data and
* relocations.
@@ -1138,7 +1170,13 @@ static struct symbol *clone_symbol(struct elfs *e, struct symbol *patched_sym,
if (!__clone_symbol(e->out, patched_sym, data_too))
return NULL;
- if (data_too && clone_sym_relocs(e, patched_sym))
+ if (!data_too || is_undef_sym(patched_sym))
+ return patched_sym->clone;
+
+ if (clone_sym_relocs(e, patched_sym))
+ return NULL;
+
+ if (clone_inline_alternatives(e, patched_sym))
return NULL;
return patched_sym->clone;
@@ -1551,7 +1589,7 @@ static int clone_reloc_klp(struct elfs *e, struct reloc *patched_reloc,
memset(&klp_reloc, 0, sizeof(klp_reloc));
klp_reloc.type = reloc_type(patched_reloc);
- if (!elf_add_data(e->out, klp_relocs, &klp_reloc, sizeof(klp_reloc)))
+ if (!elf_add_data(e->out, klp_relocs, &klp_reloc, sizeof(klp_reloc), true))
return -1;
/* klp_reloc.offset */
@@ -1714,6 +1752,7 @@ static int create_fake_symbol(struct elf *elf, struct section *sec,
unsigned long offset, size_t size)
{
char name[SYM_NAME_LEN];
+ struct symbol *sym;
unsigned int type;
static int ctr;
char *c;
@@ -1730,7 +1769,13 @@ static int create_fake_symbol(struct elf *elf, struct section *sec,
* while still allowing objdump to disassemble it.
*/
type = is_text_sec(sec) ? STT_NOTYPE : STT_OBJECT;
- return elf_create_symbol(elf, name, sec, STB_LOCAL, type, offset, size) ? 0 : -1;
+
+ sym = elf_create_symbol(elf, name, sec, STB_LOCAL, type, offset, size);
+ if (!sym)
+ return -1;
+
+ sym->fake = 1;
+ return 0;
}
/*
@@ -2095,7 +2140,7 @@ static int create_klp_sections(struct elfs *e)
return -1;
/* allocate klp_object_ext */
- obj_data = elf_add_data(e->out, obj_sec, NULL, obj_size);
+ obj_data = elf_add_data(e->out, obj_sec, NULL, obj_size, true);
if (!obj_data)
return -1;
@@ -2130,7 +2175,7 @@ static int create_klp_sections(struct elfs *e)
continue;
/* allocate klp_func_ext */
- func_data = elf_add_data(e->out, funcs_sec, NULL, func_size);
+ func_data = elf_add_data(e->out, funcs_sec, NULL, func_size, true);
if (!func_data)
return -1;
@@ -2276,7 +2321,7 @@ static int copy_import_ns(struct elfs *e)
}
}
- if (!elf_add_data(e->out, out_sec, import_ns, strlen(import_ns) + 1))
+ if (!elf_add_data(e->out, out_sec, import_ns, strlen(import_ns) + 1, true))
return -1;
}
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 19/21] objtool/klp: Introduce objtool for arm64
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (13 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 18/21] objtool/klp: Clone inline alternative replacements Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 20/21] klp-build: Support cross-compilation Josh Poimboeuf
` (6 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Add basic arm64 support to objtool. Only "objtool klp" subcommands
are currently supported.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/arm64/Kconfig | 2 +
tools/objtool/Makefile | 4 +
tools/objtool/arch/arm64/Build | 2 +
tools/objtool/arch/arm64/decode.c | 177 ++++++++++++++++++
.../arch/arm64/include/arch/cfi_regs.h | 11 ++
tools/objtool/arch/arm64/include/arch/elf.h | 15 ++
.../objtool/arch/arm64/include/arch/special.h | 21 +++
tools/objtool/arch/arm64/special.c | 21 +++
8 files changed, 253 insertions(+)
create mode 100644 tools/objtool/arch/arm64/Build
create mode 100644 tools/objtool/arch/arm64/decode.c
create mode 100644 tools/objtool/arch/arm64/include/arch/cfi_regs.h
create mode 100644 tools/objtool/arch/arm64/include/arch/elf.h
create mode 100644 tools/objtool/arch/arm64/include/arch/special.h
create mode 100644 tools/objtool/arch/arm64/special.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fe60738e5943b..101080fd4f99e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -210,9 +210,11 @@ config ARM64
select HAVE_HW_BREAKPOINT if PERF_EVENTS
select HAVE_IOREMAP_PROT
select HAVE_IRQ_TIME_ACCOUNTING
+ select HAVE_KLP_BUILD
select HAVE_LIVEPATCH
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
+ select HAVE_OBJTOOL
select HAVE_PERF_EVENTS
select HAVE_PERF_EVENTS_NMI if ARM64_PSEUDO_NMI
select HAVE_PERF_REGS
diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
index a4484fd22a96d..94aabeee97367 100644
--- a/tools/objtool/Makefile
+++ b/tools/objtool/Makefile
@@ -11,6 +11,10 @@ ifeq ($(SRCARCH),loongarch)
BUILD_ORC := y
endif
+ifeq ($(SRCARCH),arm64)
+ ARCH_HAS_KLP := y
+endif
+
ifeq ($(ARCH_HAS_KLP),y)
HAVE_XXHASH = $(shell printf "$(pound)include <xxhash.h>\nXXH3_state_t *state;int main() {}" | \
$(HOSTCC) $(HOSTCFLAGS) -xc - -o /dev/null -lxxhash 2> /dev/null && echo y || echo n)
diff --git a/tools/objtool/arch/arm64/Build b/tools/objtool/arch/arm64/Build
new file mode 100644
index 0000000000000..d24d5636a5b84
--- /dev/null
+++ b/tools/objtool/arch/arm64/Build
@@ -0,0 +1,2 @@
+objtool-y += decode.o
+objtool-y += special.o
diff --git a/tools/objtool/arch/arm64/decode.c b/tools/objtool/arch/arm64/decode.c
new file mode 100644
index 0000000000000..47658c76e1af0
--- /dev/null
+++ b/tools/objtool/arch/arm64/decode.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <objtool/check.h>
+#include <objtool/disas.h>
+#include <objtool/elf.h>
+#include <objtool/arch.h>
+#include <objtool/warn.h>
+#include <objtool/builtin.h>
+
+const char *arch_reg_name[CFI_NUM_REGS] = {};
+
+int arch_ftrace_match(const char *name)
+{
+ return 0;
+}
+
+s64 arch_insn_adjusted_addend(struct instruction *insn, struct reloc *reloc)
+{
+ return reloc_addend(reloc);
+}
+
+bool arch_callee_saved_reg(unsigned char reg)
+{
+ return false;
+}
+
+int arch_decode_hint_reg(u8 sp_reg, int *base)
+{
+ exit(-1);
+}
+
+const char *arch_nop_insn(int len)
+{
+ exit(-1);
+}
+
+const char *arch_ret_insn(int len)
+{
+ exit(-1);
+}
+
+int arch_decode_instruction(struct objtool_file *file, const struct section *sec,
+ unsigned long offset, unsigned int maxlen,
+ struct instruction *insn)
+{
+ u32 ins;
+
+ if (maxlen < 4) {
+ ERROR_INSN(insn, "can't decode instruction");
+ return -1;
+ }
+
+ /* arm64 instructions are always LE, thus no bswap_if_needed() */
+ ins = le32toh(*(u32 *)(sec->data->d_buf + offset));
+
+ /*
+ * These are the bare minimum needed for static branch detection and
+ * checksum calculations.
+ */
+ if (ins == 0xd503201f) {
+ /* NOP: static branch */
+ insn->type = INSN_NOP;
+ } else if ((ins & 0xfc000000) == 0x14000000) {
+ /* B: static branch, intra-TU sibling call */
+ insn->type = INSN_JUMP_UNCONDITIONAL;
+ insn->immediate = sign_extend64(ins & 0x03ffffff, 25);
+ } else if ((ins & 0xfc000000) == 0x94000000) {
+ /* BL: intra-TU call */
+ insn->type = INSN_CALL;
+ insn->immediate = sign_extend64(ins & 0x03ffffff, 25);
+ } else if ((ins & 0xff000000) == 0x54000000) {
+ /* B.cond: intra-TU sibling call */
+ insn->type = INSN_JUMP_CONDITIONAL;
+ insn->immediate = sign_extend64((ins >> 5) & 0x7ffff, 18);
+ } else if ((ins & 0x7e000000) == 0x34000000) {
+ /* CBZ/CBNZ: intra-TU sibling call */
+ insn->type = INSN_JUMP_CONDITIONAL;
+ insn->immediate = sign_extend64((ins >> 5) & 0x7ffff, 18);
+ } else if ((ins & 0x7e000000) == 0x36000000) {
+ /* TBZ/TBNZ: intra-TU sibling call */
+ insn->type = INSN_JUMP_CONDITIONAL;
+ insn->immediate = sign_extend64((ins >> 5) & 0x3fff, 13);
+ } else {
+ insn->type = INSN_OTHER;
+ }
+
+ insn->len = 4;
+ return 0;
+}
+
+size_t arch_jump_opcode_bytes(struct objtool_file *file, struct instruction *insn,
+ unsigned char *buf)
+{
+ u32 ins;
+
+ ins = le32toh(*(u32 *)(insn->sec->data->d_buf + insn->offset));
+
+ switch (insn->type) {
+ case INSN_JUMP_UNCONDITIONAL:
+ case INSN_CALL:
+ ins &= ~0x03ffffff;
+ break;
+ case INSN_JUMP_CONDITIONAL:
+ if ((ins & 0xff000000) == 0x54000000)
+ ins &= ~0x00ffffe0; /* B.cond */
+ else if ((ins & 0x7e000000) == 0x34000000)
+ ins &= ~0x00ffffe0; /* CBZ/CBNZ */
+ else
+ ins &= ~0x0007ffe0; /* TBZ/TBNZ */
+ break;
+ default:
+ break;
+ }
+
+ ins = htole32(ins);
+ memcpy(buf, &ins, 4);
+ return 4;
+}
+
+u64 arch_adjusted_addend(struct reloc *reloc)
+{
+ return reloc_addend(reloc);
+}
+
+unsigned long arch_jump_destination(struct instruction *insn)
+{
+ return insn->offset + (insn->immediate << 2);
+}
+
+bool arch_pc_relative_reloc(struct reloc *reloc)
+{
+ switch (reloc_type(reloc)) {
+ case R_AARCH64_PREL64:
+ case R_AARCH64_PREL32:
+ case R_AARCH64_PREL16:
+ case R_AARCH64_LD_PREL_LO19:
+ case R_AARCH64_ADR_PREL_LO21:
+ case R_AARCH64_ADR_PREL_PG_HI21:
+ case R_AARCH64_ADR_PREL_PG_HI21_NC:
+ case R_AARCH64_JUMP26:
+ case R_AARCH64_CALL26:
+ case R_AARCH64_CONDBR19:
+ case R_AARCH64_TSTBR14:
+ return true;
+ default:
+ return false;
+ }
+}
+
+void arch_initial_func_cfi_state(struct cfi_init_state *state)
+{
+ state->cfa.base = CFI_UNDEFINED;
+}
+
+unsigned int arch_reloc_size(struct reloc *reloc)
+{
+ switch (reloc_type(reloc)) {
+ case R_AARCH64_ABS64:
+ case R_AARCH64_PREL64:
+ return 8;
+ case R_AARCH64_PREL16:
+ return 2;
+ default:
+ return 4;
+ }
+}
+
+#ifdef DISAS
+int arch_disas_info_init(struct disassemble_info *dinfo)
+{
+ return disas_info_init(dinfo, bfd_arch_aarch64,
+ bfd_mach_arm_unknown, bfd_mach_aarch64,
+ NULL);
+}
+#endif /* DISAS */
diff --git a/tools/objtool/arch/arm64/include/arch/cfi_regs.h b/tools/objtool/arch/arm64/include/arch/cfi_regs.h
new file mode 100644
index 0000000000000..49c81cbb6646d
--- /dev/null
+++ b/tools/objtool/arch/arm64/include/arch/cfi_regs.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _OBJTOOL_ARCH_CFI_REGS_H
+#define _OBJTOOL_ARCH_CFI_REGS_H
+
+/* These aren't actually used for arm64 */
+#define CFI_BP 0
+#define CFI_SP 0
+#define CFI_RA 0
+#define CFI_NUM_REGS 2
+
+#endif /* _OBJTOOL_ARCH_CFI_REGS_H */
diff --git a/tools/objtool/arch/arm64/include/arch/elf.h b/tools/objtool/arch/arm64/include/arch/elf.h
new file mode 100644
index 0000000000000..418b90885c501
--- /dev/null
+++ b/tools/objtool/arch/arm64/include/arch/elf.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _OBJTOOL_ARCH_ELF_H
+#define _OBJTOOL_ARCH_ELF_H
+
+#define R_NONE R_AARCH64_NONE
+#define R_ABS64 R_AARCH64_ABS64
+#define R_ABS32 R_AARCH64_ABS32
+#define R_DATA32 R_AARCH64_PREL32
+#define R_DATA64 R_AARCH64_PREL64
+#define R_TEXT32 R_AARCH64_PREL32
+#define R_TEXT64 R_AARCH64_PREL64
+
+#define ARCH_HAS_INLINE_ALTS 1
+
+#endif /* _OBJTOOL_ARCH_ELF_H */
diff --git a/tools/objtool/arch/arm64/include/arch/special.h b/tools/objtool/arch/arm64/include/arch/special.h
new file mode 100644
index 0000000000000..8ae804a83ea49
--- /dev/null
+++ b/tools/objtool/arch/arm64/include/arch/special.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _OBJTOOL_ARCH_SPECIAL_H
+#define _OBJTOOL_ARCH_SPECIAL_H
+
+#define EX_ENTRY_SIZE 12
+#define EX_ORIG_OFFSET 0
+#define EX_NEW_OFFSET 4
+
+#define JUMP_ENTRY_SIZE 16
+#define JUMP_ORIG_OFFSET 0
+#define JUMP_NEW_OFFSET 4
+#define JUMP_KEY_OFFSET 8
+
+#define ALT_ENTRY_SIZE 12
+#define ALT_ORIG_OFFSET 0
+#define ALT_NEW_OFFSET 4
+#define ALT_FEATURE_OFFSET 8
+#define ALT_ORIG_LEN_OFFSET 10
+#define ALT_NEW_LEN_OFFSET 11
+
+#endif /* _OBJTOOL_ARCH_SPECIAL_H */
diff --git a/tools/objtool/arch/arm64/special.c b/tools/objtool/arch/arm64/special.c
new file mode 100644
index 0000000000000..6ddecd362f3dd
--- /dev/null
+++ b/tools/objtool/arch/arm64/special.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <objtool/special.h>
+
+bool arch_support_alt_relocation(struct special_alt *special_alt,
+ struct instruction *insn,
+ struct reloc *reloc)
+{
+ return true;
+}
+
+struct reloc *arch_find_switch_table(struct objtool_file *file,
+ struct instruction *insn,
+ unsigned long *table_size)
+{
+ return NULL;
+}
+
+const char *arch_cpu_feature_name(int feature_number)
+{
+ return NULL;
+}
--
2.53.0
^ permalink raw reply related	[flat|nested] 46+ messages in thread
* [PATCH v3 20/21] klp-build: Support cross-compilation
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (14 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 19/21] objtool/klp: Introduce objtool for arm64 Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 21/21] klp-build: Add arm64 syscall patching macro Josh Poimboeuf
` (5 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Add support for cross-compilation. The user must export ARCH, and
either CROSS_COMPILE or LLVM as needed.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
scripts/livepatch/klp-build | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/scripts/livepatch/klp-build b/scripts/livepatch/klp-build
index 911ada05673c2..e83973567c878 100755
--- a/scripts/livepatch/klp-build
+++ b/scripts/livepatch/klp-build
@@ -432,6 +432,25 @@ validate_patches() {
revert_patches
}
+cross_compile_init() {
+ if [[ -v LLVM && -n "$LLVM" ]]; then
+ local prefix=""
+ local suffix=""
+
+ if [[ "$LLVM" == */ ]]; then
+ # LLVM=/path/to/bin/
+ prefix="$LLVM"
+ elif [[ "$LLVM" == -* ]]; then
+ # LLVM=-14
+ suffix="$LLVM"
+ fi
+
+ OBJCOPY="${prefix}llvm-objcopy${suffix}"
+ else
+ OBJCOPY="${CROSS_COMPILE:-}objcopy"
+ fi
+}
+
do_init() {
# We're not yet smart enough to handle anything other than in-tree
# builds in pwd.
@@ -462,6 +481,7 @@ do_init() {
validate_config
set_module_name
set_kernelversion
+ cross_compile_init
}
# Refresh the patch hunk headers, specifically the line numbers and counts.
@@ -871,7 +891,7 @@ build_patch_module() {
cp -f "$kmod_file" "$kmod_file.orig"
# Work around issue where slight .config change makes corrupt BTF
- objcopy --remove-section=.BTF "$kmod_file"
+ "$OBJCOPY" --remove-section=.BTF "$kmod_file"
# Fix (and work around) linker wreckage for klp syms / relocs
"$OBJTOOL" klp post-link "$kmod_file" || die "objtool klp post-link failed"
--
2.53.0
^ permalink raw reply related	[flat|nested] 46+ messages in thread
* [PATCH v3 21/21] klp-build: Add arm64 syscall patching macro
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (15 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 20/21] klp-build: Support cross-compilation Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (4 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Add arm64 support for KLP_SYSCALL_DEFINEx(), mirroring the arm64
__SYSCALL_DEFINEx() pattern from arch/arm64/include/asm/syscall_wrapper.h.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
include/linux/livepatch_helpers.h | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/include/linux/livepatch_helpers.h b/include/linux/livepatch_helpers.h
index 99d68d0773fa8..4b647b83865f9 100644
--- a/include/linux/livepatch_helpers.h
+++ b/include/linux/livepatch_helpers.h
@@ -72,6 +72,25 @@
} \
static inline long __klp_do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+#elif defined(CONFIG_ARM64)
+
+#define __KLP_SYSCALL_DEFINEx(x, name, ...) \
+ static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \
+ static inline long __klp_do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__));\
+ asmlinkage long __arm64_sys##name(const struct pt_regs *regs); \
+ asmlinkage long __arm64_sys##name(const struct pt_regs *regs) \
+ { \
+ return __se_sys##name(SC_ARM64_REGS_TO_ARGS(x,__VA_ARGS__));\
+ } \
+ static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)) \
+ { \
+ long ret = __klp_do_sys##name(__MAP(x,__SC_CAST,__VA_ARGS__));\
+ __MAP(x,__SC_TEST,__VA_ARGS__); \
+ __PROTECT(x, ret,__MAP(x,__SC_ARGS,__VA_ARGS__)); \
+ return ret; \
+ } \
+ static inline long __klp_do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+
#endif
#endif /* _LINUX_LIVEPATCH_HELPERS_H */
--
2.53.0
^ permalink raw reply related	[flat|nested] 46+ messages in thread
* [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (16 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 21/21] klp-build: Add arm64 syscall patching macro Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:33 ` [PATCH v3 02/21] arm64: Annotate intra-function calls Josh Poimboeuf
` (3 subsequent siblings)
21 siblings, 0 replies; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Based on tip/objtool/core.
v3:
- Too many changes to list. Did a lot of testing and fixed a bunch of
issues (many of which have already been merged in tip/objtool/core).
v2: https://lore.kernel.org/cover.1773787568.git.jpoimboe@kernel.org
- patches 1-2 were merged, rebase on tip/master
- improve commit message for "objtool: Extricate checksum calculation from validate_branch()"
- add review tags
v1: https://lore.kernel.org/cover.1772681234.git.jpoimboe@kernel.org
Port objtool and the klp-build tooling (for building livepatch modules)
to arm64.
Note this doesn't bring all the objtool bells and whistles to arm64, nor
any of the CFG reverse engineering. This only adds the bare minimum
needed for 'objtool --checksum'.
And note that objtool still doesn't get enabled at all for normal arm64
kernel builds, so this doesn't affect any users other than those running
klp-build directly.
Josh Poimboeuf (21):
klp-build: Reject patches to init/*.c
arm64: Annotate intra-function calls
arm64: Fix EFI linking with -fdata-sections
arm64: Rename TRAMP_VALIAS -> TRAMP_VALIAS_ASM in asm-offsets
arm64: vdso: Discard .discard.* sections
arm64: Annotate special section entries
crypto: arm64: Move data to .rodata
objtool: Allow setting --mnop without --mcount
kbuild: Only run objtool if there is at least one command
objtool: Ignore jumps to the end of the function for checksum runs
objtool: Allow empty alternatives
objtool: Refactor elf_add_data() to use a growable data buffer
objtool: Reuse string references
objtool: Prevent kCFI hashes from being decoded as instructions
objtool/klp: Add arm64 support for prefix/PFE detection
objtool/klp: Filter arm64 mapping symbols in find_symbol_by_offset()
objtool/klp: Don't correlate arm64 mapping symbols
objtool/klp: Clone inline alternative replacements
objtool/klp: Introduce objtool for arm64
klp-build: Support cross-compilation
klp-build: Add arm64 syscall patching macro
arch/arm64/Kconfig | 2 +
arch/arm64/include/asm/alternative-macros.h | 27 +-
arch/arm64/include/asm/asm-bug.h | 2 +
arch/arm64/include/asm/asm-extable.h | 21 +-
arch/arm64/include/asm/jump_label.h | 2 +
arch/arm64/kernel/asm-offsets.c | 7 +-
arch/arm64/kernel/entry.S | 10 +-
arch/arm64/kernel/proton-pack.c | 12 +-
arch/arm64/kernel/vdso/vdso.lds.S | 1 +
arch/arm64/kernel/vmlinux.lds.S | 2 +-
arch/x86/boot/startup/Makefile | 2 +-
include/linux/annotate.h | 14 +-
include/linux/livepatch_helpers.h | 19 ++
include/linux/objtool_types.h | 1 +
lib/crypto/arm64/sha2-armv8.pl | 18 +-
scripts/Makefile.build | 4 +-
scripts/Makefile.lib | 52 ++--
scripts/Makefile.vmlinux_o | 15 +-
scripts/livepatch/klp-build | 24 +-
tools/include/linux/objtool_types.h | 1 +
tools/objtool/Makefile | 4 +
tools/objtool/arch/arm64/Build | 2 +
tools/objtool/arch/arm64/decode.c | 177 +++++++++++++
.../arch/arm64/include/arch/cfi_regs.h | 11 +
tools/objtool/arch/arm64/include/arch/elf.h | 15 ++
.../objtool/arch/arm64/include/arch/special.h | 21 ++
tools/objtool/arch/arm64/special.c | 21 ++
tools/objtool/arch/x86/include/arch/elf.h | 2 +
tools/objtool/builtin-check.c | 5 -
tools/objtool/check.c | 65 +++--
tools/objtool/elf.c | 170 +++++++------
tools/objtool/include/objtool/elf.h | 48 +++-
tools/objtool/klp-diff.c | 237 ++++++++++++++++--
33 files changed, 819 insertions(+), 195 deletions(-)
create mode 100644 tools/objtool/arch/arm64/Build
create mode 100644 tools/objtool/arch/arm64/decode.c
create mode 100644 tools/objtool/arch/arm64/include/arch/cfi_regs.h
create mode 100644 tools/objtool/arch/arm64/include/arch/elf.h
create mode 100644 tools/objtool/arch/arm64/include/arch/special.h
create mode 100644 tools/objtool/arch/arm64/special.c
--
2.53.0
^ permalink raw reply	[flat|nested] 46+ messages in thread
* [PATCH v3 02/21] arm64: Annotate intra-function calls
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (17 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
@ 2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:33 ` Josh Poimboeuf
2026-05-13 3:34 ` [PATCH v3 04/21] arm64: Rename TRAMP_VALIAS -> TRAMP_VALIAS_ASM in asm-offsets Josh Poimboeuf
` (2 subsequent siblings)
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:33 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
In preparation for enabling objtool on arm64, annotate intra-function
calls.
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/arm64/kernel/entry.S | 2 ++
arch/arm64/kernel/proton-pack.c | 12 +++++++-----
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e0db14e9c843a..d4cbdfb23d733 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -10,6 +10,7 @@
#include <linux/arm-smccc.h>
#include <linux/init.h>
#include <linux/linkage.h>
+#include <linux/annotate.h>
#include <asm/alternative.h>
#include <asm/assembler.h>
@@ -705,6 +706,7 @@ alternative_else_nop_endif
* entry onto the return stack and using a RET instruction to
* enter the full-fat kernel vectors.
*/
+ ANNOTATE_INTRA_FUNCTION_CALL
bl 2f
b .
2:
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index b3801f532b10b..b63887a1b8234 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -24,6 +24,7 @@
#include <linux/nospec.h>
#include <linux/prctl.h>
#include <linux/sched/task_stack.h>
+#include <linux/annotate.h>
#include <asm/debug-monitors.h>
#include <asm/insn.h>
@@ -245,11 +246,12 @@ static noinstr void qcom_link_stack_sanitisation(void)
{
u64 tmp;
- asm volatile("mov %0, x30 \n"
- ".rept 16 \n"
- "bl . + 4 \n"
- ".endr \n"
- "mov x30, %0 \n"
+ asm volatile("mov %0, x30 \n"
+ ".rept 16 \n"
+ ANNOTATE_INTRA_FUNCTION_CALL " \n"
+ "bl . + 4 \n"
+ ".endr \n"
+ "mov x30, %0 \n"
: "=&r" (tmp));
}
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 04/21] arm64: Rename TRAMP_VALIAS -> TRAMP_VALIAS_ASM in asm-offsets
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (18 preceding siblings ...)
2026-05-13 3:33 ` [PATCH v3 02/21] arm64: Annotate intra-function calls Josh Poimboeuf
@ 2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 3:34 ` [PATCH v3 10/21] objtool: Ignore jumps to the end of the function for checksum runs Josh Poimboeuf
2026-05-13 3:34 ` [PATCH v3 11/21] objtool: Allow empty alternatives Josh Poimboeuf
21 siblings, 1 reply; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:34 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Rename the asm-offsets TRAMP_VALIAS macro to TRAMP_VALIAS_ASM, following
the naming convention already used by PIE_E0_ASM and PIE_E1_ASM. This
disambiguates the asm-offsets-generated constant from the C macro of the
same name defined in fixmap.h and vectors.h.
This is needed by a later patch which adds new includes to asm-offsets.c
that would otherwise conflict with the C version.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
arch/arm64/kernel/asm-offsets.c | 2 +-
arch/arm64/kernel/entry.S | 8 ++++----
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index b6367ff3a49ca..44b92f582c127 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -153,7 +153,7 @@ int main(void)
DEFINE(ARM64_FTR_SYSVAL, offsetof(struct arm64_ftr_reg, sys_val));
BLANK();
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
- DEFINE(TRAMP_VALIAS, TRAMP_VALIAS);
+ DEFINE(TRAMP_VALIAS_ASM, TRAMP_VALIAS);
#endif
#ifdef CONFIG_ARM_SDE_INTERFACE
DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs));
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d4cbdfb23d733..85f6305c1f568 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -102,7 +102,7 @@
.endm
.macro tramp_alias, dst, sym
- .set .Lalias\@, TRAMP_VALIAS + \sym - .entry.tramp.text
+ .set .Lalias\@, TRAMP_VALIAS_ASM + \sym - .entry.tramp.text
movz \dst, :abs_g2_s:.Lalias\@
movk \dst, :abs_g1_nc:.Lalias\@
movk \dst, :abs_g0_nc:.Lalias\@
@@ -626,10 +626,10 @@ SYM_CODE_END(ret_to_user)
#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
/* ASID already in \tmp[63:48] */
- movk \tmp, #:abs_g2_nc:(TRAMP_VALIAS >> 12)
- movk \tmp, #:abs_g1_nc:(TRAMP_VALIAS >> 12)
+ movk \tmp, #:abs_g2_nc:(TRAMP_VALIAS_ASM >> 12)
+ movk \tmp, #:abs_g1_nc:(TRAMP_VALIAS_ASM >> 12)
/* 2MB boundary containing the vectors, so we nobble the walk cache */
- movk \tmp, #:abs_g0_nc:((TRAMP_VALIAS & ~(SZ_2M - 1)) >> 12)
+ movk \tmp, #:abs_g0_nc:((TRAMP_VALIAS_ASM & ~(SZ_2M - 1)) >> 12)
isb
tlbi vae1, \tmp
dsb nsh
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* [PATCH v3 10/21] objtool: Ignore jumps to the end of the function for checksum runs
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (19 preceding siblings ...)
2026-05-13 3:34 ` [PATCH v3 04/21] arm64: Rename TRAMP_VALIAS -> TRAMP_VALIAS_ASM in asm-offsets Josh Poimboeuf
@ 2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 7:36 ` Peter Zijlstra
2026-05-13 3:34 ` [PATCH v3 11/21] objtool: Allow empty alternatives Josh Poimboeuf
21 siblings, 2 replies; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:34 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
Sometimes Clang arm64 code jumps to the end of the function for UB.
No need to make that an error for checksum runs.
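For illustration only, the relaxed condition added below can be mirrored in plain C (hypothetical names and flattened parameters; objtool's real check operates on its instruction and symbol structures): when none of the validation passes that care about reachability are enabled, a jump with no resolvable destination instruction is tolerated instead of being reported.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical subset of objtool's option flags. */
struct opts {
	bool stackval, orc, uaccess;
};

static bool validate_branch_enabled(const struct opts *o)
{
	return o->stackval || o->orc || o->uaccess;
}

/*
 * Sketch of the new condition: tolerate a missing jump destination
 * either because no branch validation is requested (e.g. a pure
 * checksum run), or because the jump lands exactly at the end of its
 * function in the same section (GCOV/KCOV dead code, or Clang's
 * arm64 codegen for undefined behavior).
 */
static bool tolerate_missing_jump_dest(const struct opts *o,
				       bool ignore_unreachables,
				       bool same_sec,
				       unsigned long func_offset,
				       unsigned long func_len,
				       unsigned long dest_off)
{
	return !validate_branch_enabled(o) ||
	       (ignore_unreachables && same_sec &&
		dest_off == func_offset + func_len);
}
```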
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/check.c | 42 +++++++++++++++++++++++-------------------
1 file changed, 23 insertions(+), 19 deletions(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 10b18cf9c3608..73451aef68029 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -37,6 +37,22 @@ struct disas_context *objtool_disas_ctx;
size_t sym_name_max_len;
+static bool validate_branch_enabled(void)
+{
+ return opts.stackval ||
+ opts.orc ||
+ opts.uaccess;
+}
+
+static bool alts_needed(void)
+{
+ return validate_branch_enabled() ||
+ opts.noinstr ||
+ opts.hack_jump_label ||
+ opts.disas ||
+ opts.checksum;
+}
+
struct instruction *find_insn(struct objtool_file *file,
struct section *sec, unsigned long offset)
{
@@ -1593,10 +1609,14 @@ static int add_jump_destinations(struct objtool_file *file)
/*
* GCOV/KCOV dead code can jump to the end of
* the function/section.
+ *
+ * Clang on arm64 also does this sometimes for
+ * undefined behavior.
*/
- if (file->ignore_unreachables && func &&
- dest_sec == insn->sec &&
- dest_off == func->offset + func->len)
+ if (!validate_branch_enabled() ||
+ (file->ignore_unreachables && func &&
+ dest_sec == insn->sec &&
+ dest_off == func->offset + func->len))
continue;
ERROR_INSN(insn, "can't find jump dest instruction at %s",
@@ -2584,22 +2604,6 @@ static void mark_holes(struct objtool_file *file)
}
}
-static bool validate_branch_enabled(void)
-{
- return opts.stackval ||
- opts.orc ||
- opts.uaccess;
-}
-
-static bool alts_needed(void)
-{
- return validate_branch_enabled() ||
- opts.noinstr ||
- opts.hack_jump_label ||
- opts.disas ||
- opts.checksum;
-}
-
int decode_file(struct objtool_file *file)
{
arch_initial_func_cfi_state(&initial_func_cfi);
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH v3 10/21] objtool: Ignore jumps to the end of the function for checksum runs
2026-05-13 3:34 ` [PATCH v3 10/21] objtool: Ignore jumps to the end of the function for checksum runs Josh Poimboeuf
@ 2026-05-13 7:36 ` Peter Zijlstra
1 sibling, 0 replies; 46+ messages in thread
From: Peter Zijlstra @ 2026-05-13 7:36 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: x86, linux-kernel, live-patching, Joe Lawrence, Song Liu,
Catalin Marinas, Will Deacon, linux-arm-kernel, Mark Rutland,
Miroslav Benes, Petr Mladek
On Tue, May 12, 2026 at 08:33:44PM -0700, Josh Poimboeuf wrote:
> Sometimes Clang arm64 code jumps to the end of the function for UB.
> No need to make that an error for checksum runs.
I'm not sure I understand the rationale. If the compiler trips UB, we
want to be complaining about that no?
Same as when clang just stops code-gen, also something we commonly see
when it hits UB.
^ permalink raw reply [flat|nested] 46+ messages in thread
* [PATCH v3 11/21] objtool: Allow empty alternatives
2026-05-13 3:33 [PATCH v3 00/21] objtool/arm64: Port klp-build to arm64 Josh Poimboeuf
` (20 preceding siblings ...)
2026-05-13 3:34 ` [PATCH v3 10/21] objtool: Ignore jumps to the end of the function for checksum runs Josh Poimboeuf
@ 2026-05-13 3:34 ` Josh Poimboeuf
2026-05-13 7:37 ` Peter Zijlstra
21 siblings, 2 replies; 46+ messages in thread
From: Josh Poimboeuf @ 2026-05-13 3:34 UTC (permalink / raw)
To: x86
Cc: linux-kernel, live-patching, Peter Zijlstra, Joe Lawrence,
Song Liu, Catalin Marinas, Will Deacon, linux-arm-kernel,
Mark Rutland, Miroslav Benes, Petr Mladek
arm64 can have empty alternatives, which are effectively no-ops. Ignore
them. While at it, fix a memory leak.
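The early-skip the diff below introduces can be sketched in C for illustration (the struct here is a hypothetical, cut-down model; the real one is objtool's struct special_alt): a grouped alternative whose original region has zero length is a no-op and is simply dropped rather than flagged as an error.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified model of a special-section alternative. */
struct special_alt {
	bool group;		/* part of an alternatives group */
	unsigned int orig_len;	/* length of the original instruction range */
};

/* Mirror of the new check: empty grouped alternatives are skipped. */
static bool skip_empty_alt(const struct special_alt *alt)
{
	return alt->group && !alt->orig_len;
}

/* Count the entries that would actually be processed. */
static size_t count_processed(const struct special_alt *alts, size_t n)
{
	size_t processed = 0;

	for (size_t i = 0; i < n; i++)
		if (!skip_empty_alt(&alts[i]))
			processed++;
	return processed;
}
```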
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
---
tools/objtool/check.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 73451aef68029..e05dc7a93dc1e 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1953,6 +1953,9 @@ static int add_special_section_alts(struct objtool_file *file)
list_for_each_entry_safe(special_alt, tmp, &special_alts, list) {
+ if (special_alt->group && !special_alt->orig_len)
+ goto next;
+
orig_insn = find_insn(file, special_alt->orig_sec,
special_alt->orig_off);
if (!orig_insn) {
@@ -1973,10 +1976,6 @@ static int add_special_section_alts(struct objtool_file *file)
}
if (special_alt->group) {
- if (!special_alt->orig_len) {
- ERROR_INSN(orig_insn, "empty alternative entry");
- continue;
- }
if (handle_group_alt(file, special_alt, orig_insn, &new_insn))
return -1;
@@ -2014,6 +2013,7 @@ static int add_special_section_alts(struct objtool_file *file)
a->next = alt;
}
+next:
list_del(&special_alt->list);
free(special_alt);
}
--
2.53.0
^ permalink raw reply related [flat|nested] 46+ messages in thread
* Re: [PATCH v3 11/21] objtool: Allow empty alternatives
2026-05-13 3:34 ` [PATCH v3 11/21] objtool: Allow empty alternatives Josh Poimboeuf
@ 2026-05-13 7:37 ` Peter Zijlstra
1 sibling, 0 replies; 46+ messages in thread
From: Peter Zijlstra @ 2026-05-13 7:37 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: x86, linux-kernel, live-patching, Joe Lawrence, Song Liu,
Catalin Marinas, Will Deacon, linux-arm-kernel, Mark Rutland,
Miroslav Benes, Petr Mladek
On Tue, May 12, 2026 at 08:33:45PM -0700, Josh Poimboeuf wrote:
> arm64 can have empty alternatives, which are effectively no-ops. Ignore
> them. While at it, fix a memory leak.
How does this happen?
^ permalink raw reply [flat|nested] 46+ messages in thread