* [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer
@ 2024-01-30 10:59 Borislav Petkov
2024-01-30 10:59 ` [PATCH 1/4] x86/alternatives: Use a temporary buffer when optimizing NOPs Borislav Petkov
` (4 more replies)
0 siblings, 5 replies; 15+ messages in thread
From: Borislav Petkov @ 2024-01-30 10:59 UTC (permalink / raw)
To: X86 ML; +Cc: Paul Gortmaker, LKML
From: "Borislav Petkov (AMD)" <bp@alien8.de>
Hi,
here's a small set which sprang out of my realizing that NOPs
optimization in the alternatives code needs to happen on a temporary
buffer like the other alternatives operations do - not in-place, where
it can cause all kinds of fun.
The result is this set, which makes the alternatives code simpler and
is a net win, size-wise:
1 file changed, 50 insertions(+), 72 deletions(-)
Constructive feedback is always welcome!
Thx.
Borislav Petkov (AMD) (4):
x86/alternatives: Use a temporary buffer when optimizing NOPs
x86/alternatives: Get rid of __optimize_nops()
x86/alternatives: Optimize optimize_nops()
x86/alternatives: Sort local vars in apply_alternatives()
arch/x86/kernel/alternative.c | 122 ++++++++++++++--------------------
1 file changed, 50 insertions(+), 72 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 15+ messages in thread* [PATCH 1/4] x86/alternatives: Use a temporary buffer when optimizing NOPs 2024-01-30 10:59 [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Borislav Petkov @ 2024-01-30 10:59 ` Borislav Petkov 2024-02-13 15:36 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 2024-01-30 10:59 ` [PATCH 2/4] x86/alternatives: Get rid of __optimize_nops() Borislav Petkov ` (3 subsequent siblings) 4 siblings, 2 replies; 15+ messages in thread From: Borislav Petkov @ 2024-01-30 10:59 UTC (permalink / raw) To: X86 ML; +Cc: Paul Gortmaker, LKML, Thomas Gleixner From: "Borislav Petkov (AMD)" <bp@alien8.de> Instead of optimizing NOPs inplace, use a temporary buffer like the usual alternatives patching flow does. This obviates the need to grab locks when patching, see 6778977590da ("x86/alternatives: Disable interrupts and sync when optimizing NOPs in place") While at it, add nomenclature definitions clarifying and simplifying the naming of function-local variables in the alternatives code. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> --- arch/x86/kernel/alternative.c | 78 +++++++++++++++++++---------------- 1 file changed, 42 insertions(+), 36 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index cc130b57542a..d633eb59f2b6 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -124,6 +124,20 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] = #endif }; +/* + * Nomenclature for variable names to simplify and clarify this code and ease + * any potential staring at it: + * + * @instr: source address of the original instructions in the kernel text as + * generated by the compiler. + * + * @buf: temporary buffer on which the patching operates. This buffer is + * eventually text-poked into the kernel image. 
+ * + * @replacement: pointer to the opcodes which are replacing @instr, located in + * the .altinstr_replacement section. + */ + /* * Fill the buffer with a single effective instruction of size @len. * @@ -133,28 +147,28 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] = * each single-byte NOPs). If @len to fill out is > ASM_NOP_MAX, pad with INT3 and * *jump* over instead of executing long and daft NOPs. */ -static void __init_or_module add_nop(u8 *instr, unsigned int len) +static void __init_or_module add_nop(u8 *buf, unsigned int len) { - u8 *target = instr + len; + u8 *target = buf + len; if (!len) return; if (len <= ASM_NOP_MAX) { - memcpy(instr, x86_nops[len], len); + memcpy(buf, x86_nops[len], len); return; } if (len < 128) { - __text_gen_insn(instr, JMP8_INSN_OPCODE, instr, target, JMP8_INSN_SIZE); - instr += JMP8_INSN_SIZE; + __text_gen_insn(buf, JMP8_INSN_OPCODE, buf, target, JMP8_INSN_SIZE); + buf += JMP8_INSN_SIZE; } else { - __text_gen_insn(instr, JMP32_INSN_OPCODE, instr, target, JMP32_INSN_SIZE); - instr += JMP32_INSN_SIZE; + __text_gen_insn(buf, JMP32_INSN_OPCODE, buf, target, JMP32_INSN_SIZE); + buf += JMP32_INSN_SIZE; } - for (;instr < target; instr++) - *instr = INT3_INSN_OPCODE; + for (;buf < target; buf++) + *buf = INT3_INSN_OPCODE; } extern s32 __retpoline_sites[], __retpoline_sites_end[]; @@ -187,12 +201,12 @@ static bool insn_is_nop(struct insn *insn) * Find the offset of the first non-NOP instruction starting at @offset * but no further than @len. */ -static int skip_nops(u8 *instr, int offset, int len) +static int skip_nops(u8 *buf, int offset, int len) { struct insn insn; for (; offset < len; offset += insn.length) { - if (insn_decode_kernel(&insn, &instr[offset])) + if (insn_decode_kernel(&insn, &buf[offset])) break; if (!insn_is_nop(&insn)) @@ -207,7 +221,7 @@ static int skip_nops(u8 *instr, int offset, int len) * to the end of the NOP sequence into a single NOP. 
*/ static bool __init_or_module -__optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, int *target) +__optimize_nops(const u8 * const instr, u8 *buf, size_t len, struct insn *insn, int *next, int *prev, int *target) { int i = *next - insn->length; @@ -222,12 +236,12 @@ __optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, if (insn_is_nop(insn)) { int nop = i; - *next = skip_nops(instr, *next, len); + *next = skip_nops(buf, *next, len); if (*target && *next == *target) nop = *prev; - add_nop(instr + nop, *next - nop); - DUMP_BYTES(ALT, instr, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); + add_nop(buf + nop, *next - nop); + DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); return true; } @@ -239,32 +253,22 @@ __optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ -static void __init_or_module noinline optimize_nops(u8 *instr, size_t len) +static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) { int prev, target = 0; for (int next, i = 0; i < len; i = next) { struct insn insn; - if (insn_decode_kernel(&insn, &instr[i])) + if (insn_decode_kernel(&insn, &buf[i])) return; next = i + insn.length; - __optimize_nops(instr, len, &insn, &next, &prev, &target); + __optimize_nops(instr, buf, len, &insn, &next, &prev, &target); } } -static void __init_or_module noinline optimize_nops_inplace(u8 *instr, size_t len) -{ - unsigned long flags; - - local_irq_save(flags); - optimize_nops(instr, len); - sync_core(); - local_irq_restore(flags); -} - /* * In this context, "source" is where the instructions are placed in the * section .altinstr_replacement, for example during kernel build by the @@ -336,7 +340,7 @@ bool need_reloc(unsigned long offset, u8 *src, size_t src_len) } static void __init_or_module noinline 
-apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) +apply_relocation(const u8 * const instr, u8 *buf, size_t len, u8 *src, size_t src_len) { int prev, target = 0; @@ -348,7 +352,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) next = i + insn.length; - if (__optimize_nops(buf, len, &insn, &next, &prev, &target)) + if (__optimize_nops(instr, buf, len, &insn, &next, &prev, &target)) continue; switch (insn.opcode.bytes[0]) { @@ -365,7 +369,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) if (need_reloc(next + insn.immediate.value, src, src_len)) { apply_reloc(insn.immediate.nbytes, buf + i + insn_offset_immediate(&insn), - src - dest); + src - instr); } /* @@ -373,7 +377,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) */ if (insn.opcode.bytes[0] == JMP32_INSN_OPCODE) { s32 imm = insn.immediate.value; - imm += src - dest; + imm += src - instr; imm += JMP32_INSN_SIZE - JMP8_INSN_SIZE; if ((imm >> 31) == (imm >> 7)) { buf[i+0] = JMP8_INSN_OPCODE; @@ -389,7 +393,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) if (need_reloc(next + insn.displacement.value, src, src_len)) { apply_reloc(insn.displacement.nbytes, buf + i + insn_offset_displacement(&insn), - src - dest); + src - instr); } } } @@ -505,7 +509,9 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * patch if feature is *NOT* present. 
*/ if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) { - optimize_nops_inplace(instr, a->instrlen); + memcpy(insn_buff, instr, a->instrlen); + optimize_nops(instr, insn_buff, a->instrlen); + text_poke_early(instr, insn_buff, a->instrlen); continue; } @@ -527,7 +533,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, for (; insn_buff_sz < a->instrlen; insn_buff_sz++) insn_buff[insn_buff_sz] = 0x90; - apply_relocation(insn_buff, a->instrlen, instr, replacement, a->replacementlen); + apply_relocation(instr, insn_buff, a->instrlen, replacement, a->replacementlen); DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); @@ -762,7 +768,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) len = patch_retpoline(addr, &insn, bytes); if (len == insn.length) { - optimize_nops(bytes, len); + optimize_nops(addr, bytes, len); DUMP_BYTES(RETPOLINE, ((u8*)addr), len, "%px: orig: ", addr); DUMP_BYTES(RETPOLINE, ((u8*)bytes), len, "%px: repl: ", addr); text_poke_early(addr, bytes, len); -- 2.43.0 ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Use a temporary buffer when optimizing NOPs 2024-01-30 10:59 ` [PATCH 1/4] x86/alternatives: Use a temporary buffer when optimizing NOPs Borislav Petkov @ 2024-02-13 15:36 ` tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-02-13 15:36 UTC (permalink / raw) To: linux-tip-commits Cc: Borislav Petkov (AMD), Thomas Gleixner, x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: fca88864f9d5637381b71a0cd38b7f5e82eab01d Gitweb: https://git.kernel.org/tip/fca88864f9d5637381b71a0cd38b7f5e82eab01d Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:38 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 13 Feb 2024 16:16:02 +01:00 x86/alternatives: Use a temporary buffer when optimizing NOPs Instead of optimizing NOPs in-place, use a temporary buffer like the usual alternatives patching flow does. This obviates the need to grab locks when patching, see 6778977590da ("x86/alternatives: Disable interrupts and sync when optimizing NOPs in place") While at it, add nomenclature definitions clarifying and simplifying the naming of function-local variables in the alternatives code. 
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20240130105941.19707-2-bp@alien8.de --- arch/x86/kernel/alternative.c | 78 ++++++++++++++++++---------------- 1 file changed, 42 insertions(+), 36 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 1d85cb7..835e343 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -125,6 +125,20 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] = }; /* + * Nomenclature for variable names to simplify and clarify this code and ease + * any potential staring at it: + * + * @instr: source address of the original instructions in the kernel text as + * generated by the compiler. + * + * @buf: temporary buffer on which the patching operates. This buffer is + * eventually text-poked into the kernel image. + * + * @replacement: pointer to the opcodes which are replacing @instr, located in + * the .altinstr_replacement section. + */ + +/* * Fill the buffer with a single effective instruction of size @len. * * In order not to issue an ORC stack depth tracking CFI entry (Call Frame Info) @@ -133,28 +147,28 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] = * each single-byte NOPs). If @len to fill out is > ASM_NOP_MAX, pad with INT3 and * *jump* over instead of executing long and daft NOPs. 
*/ -static void __init_or_module add_nop(u8 *instr, unsigned int len) +static void __init_or_module add_nop(u8 *buf, unsigned int len) { - u8 *target = instr + len; + u8 *target = buf + len; if (!len) return; if (len <= ASM_NOP_MAX) { - memcpy(instr, x86_nops[len], len); + memcpy(buf, x86_nops[len], len); return; } if (len < 128) { - __text_gen_insn(instr, JMP8_INSN_OPCODE, instr, target, JMP8_INSN_SIZE); - instr += JMP8_INSN_SIZE; + __text_gen_insn(buf, JMP8_INSN_OPCODE, buf, target, JMP8_INSN_SIZE); + buf += JMP8_INSN_SIZE; } else { - __text_gen_insn(instr, JMP32_INSN_OPCODE, instr, target, JMP32_INSN_SIZE); - instr += JMP32_INSN_SIZE; + __text_gen_insn(buf, JMP32_INSN_OPCODE, buf, target, JMP32_INSN_SIZE); + buf += JMP32_INSN_SIZE; } - for (;instr < target; instr++) - *instr = INT3_INSN_OPCODE; + for (;buf < target; buf++) + *buf = INT3_INSN_OPCODE; } extern s32 __retpoline_sites[], __retpoline_sites_end[]; @@ -187,12 +201,12 @@ static bool insn_is_nop(struct insn *insn) * Find the offset of the first non-NOP instruction starting at @offset * but no further than @len. */ -static int skip_nops(u8 *instr, int offset, int len) +static int skip_nops(u8 *buf, int offset, int len) { struct insn insn; for (; offset < len; offset += insn.length) { - if (insn_decode_kernel(&insn, &instr[offset])) + if (insn_decode_kernel(&insn, &buf[offset])) break; if (!insn_is_nop(&insn)) @@ -207,7 +221,7 @@ static int skip_nops(u8 *instr, int offset, int len) * to the end of the NOP sequence into a single NOP. 
*/ static bool __init_or_module -__optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, int *target) +__optimize_nops(const u8 * const instr, u8 *buf, size_t len, struct insn *insn, int *next, int *prev, int *target) { int i = *next - insn->length; @@ -222,12 +236,12 @@ __optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, if (insn_is_nop(insn)) { int nop = i; - *next = skip_nops(instr, *next, len); + *next = skip_nops(buf, *next, len); if (*target && *next == *target) nop = *prev; - add_nop(instr + nop, *next - nop); - DUMP_BYTES(ALT, instr, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); + add_nop(buf + nop, *next - nop); + DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); return true; } @@ -239,32 +253,22 @@ __optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ -static void __init_or_module noinline optimize_nops(u8 *instr, size_t len) +static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) { int prev, target = 0; for (int next, i = 0; i < len; i = next) { struct insn insn; - if (insn_decode_kernel(&insn, &instr[i])) + if (insn_decode_kernel(&insn, &buf[i])) return; next = i + insn.length; - __optimize_nops(instr, len, &insn, &next, &prev, &target); + __optimize_nops(instr, buf, len, &insn, &next, &prev, &target); } } -static void __init_or_module noinline optimize_nops_inplace(u8 *instr, size_t len) -{ - unsigned long flags; - - local_irq_save(flags); - optimize_nops(instr, len); - sync_core(); - local_irq_restore(flags); -} - /* * In this context, "source" is where the instructions are placed in the * section .altinstr_replacement, for example during kernel build by the @@ -336,7 +340,7 @@ bool need_reloc(unsigned long offset, u8 *src, size_t src_len) } static void __init_or_module noinline 
-apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) +apply_relocation(const u8 * const instr, u8 *buf, size_t len, u8 *src, size_t src_len) { int prev, target = 0; @@ -348,7 +352,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) next = i + insn.length; - if (__optimize_nops(buf, len, &insn, &next, &prev, &target)) + if (__optimize_nops(instr, buf, len, &insn, &next, &prev, &target)) continue; switch (insn.opcode.bytes[0]) { @@ -365,7 +369,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) if (need_reloc(next + insn.immediate.value, src, src_len)) { apply_reloc(insn.immediate.nbytes, buf + i + insn_offset_immediate(&insn), - src - dest); + src - instr); } /* @@ -373,7 +377,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) */ if (insn.opcode.bytes[0] == JMP32_INSN_OPCODE) { s32 imm = insn.immediate.value; - imm += src - dest; + imm += src - instr; imm += JMP32_INSN_SIZE - JMP8_INSN_SIZE; if ((imm >> 31) == (imm >> 7)) { buf[i+0] = JMP8_INSN_OPCODE; @@ -389,7 +393,7 @@ apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) if (need_reloc(next + insn.displacement.value, src, src_len)) { apply_reloc(insn.displacement.nbytes, buf + i + insn_offset_displacement(&insn), - src - dest); + src - instr); } } } @@ -505,7 +509,9 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * patch if feature is *NOT* present. 
*/ if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) { - optimize_nops_inplace(instr, a->instrlen); + memcpy(insn_buff, instr, a->instrlen); + optimize_nops(instr, insn_buff, a->instrlen); + text_poke_early(instr, insn_buff, a->instrlen); continue; } @@ -527,7 +533,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, for (; insn_buff_sz < a->instrlen; insn_buff_sz++) insn_buff[insn_buff_sz] = 0x90; - apply_relocation(insn_buff, a->instrlen, instr, replacement, a->replacementlen); + apply_relocation(instr, insn_buff, a->instrlen, replacement, a->replacementlen); DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); @@ -762,7 +768,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) len = patch_retpoline(addr, &insn, bytes); if (len == insn.length) { - optimize_nops(bytes, len); + optimize_nops(addr, bytes, len); DUMP_BYTES(RETPOLINE, ((u8*)addr), len, "%px: orig: ", addr); DUMP_BYTES(RETPOLINE, ((u8*)bytes), len, "%px: repl: ", addr); text_poke_early(addr, bytes, len); ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Use a temporary buffer when optimizing NOPs 2024-01-30 10:59 ` [PATCH 1/4] x86/alternatives: Use a temporary buffer when optimizing NOPs Borislav Petkov 2024-02-13 15:36 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: f796c75837623058db1ff93252b9f1681306b83d Gitweb: https://git.kernel.org/tip/f796c75837623058db1ff93252b9f1681306b83d Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:38 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 09 Apr 2024 18:08:11 +02:00 x86/alternatives: Use a temporary buffer when optimizing NOPs Instead of optimizing NOPs in-place, use a temporary buffer like the usual alternatives patching flow does. This obviates the need to grab locks when patching, see 6778977590da ("x86/alternatives: Disable interrupts and sync when optimizing NOPs in place") While at it, add nomenclature definitions clarifying and simplifying the naming of function-local variables in the alternatives code. 
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240130105941.19707-2-bp@alien8.de --- arch/x86/include/asm/text-patching.h | 2 +- arch/x86/kernel/alternative.c | 84 ++++++++++++++------------- arch/x86/kernel/callthunks.c | 9 +-- 3 files changed, 49 insertions(+), 46 deletions(-) diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h index 345aafb..6259f19 100644 --- a/arch/x86/include/asm/text-patching.h +++ b/arch/x86/include/asm/text-patching.h @@ -15,7 +15,7 @@ extern void text_poke_early(void *addr, const void *opcode, size_t len); -extern void apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len); +extern void apply_relocation(u8 *buf, const u8 * const instr, size_t instrlen, u8 *repl, size_t repl_len); /* * Clear and restore the kernel write-protection flag on the local CPU. diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 45a280f..ec94f13 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -125,6 +125,20 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] = }; /* + * Nomenclature for variable names to simplify and clarify this code and ease + * any potential staring at it: + * + * @instr: source address of the original instructions in the kernel text as + * generated by the compiler. + * + * @buf: temporary buffer on which the patching operates. This buffer is + * eventually text-poked into the kernel image. + * + * @replacement/@repl: pointer to the opcodes which are replacing @instr, located + * in the .altinstr_replacement section. + */ + +/* * Fill the buffer with a single effective instruction of size @len. * * In order not to issue an ORC stack depth tracking CFI entry (Call Frame Info) @@ -133,28 +147,28 @@ const unsigned char * const x86_nops[ASM_NOP_MAX+1] = * each single-byte NOPs). If @len to fill out is > ASM_NOP_MAX, pad with INT3 and * *jump* over instead of executing long and daft NOPs. 
*/ -static void add_nop(u8 *instr, unsigned int len) +static void add_nop(u8 *buf, unsigned int len) { - u8 *target = instr + len; + u8 *target = buf + len; if (!len) return; if (len <= ASM_NOP_MAX) { - memcpy(instr, x86_nops[len], len); + memcpy(buf, x86_nops[len], len); return; } if (len < 128) { - __text_gen_insn(instr, JMP8_INSN_OPCODE, instr, target, JMP8_INSN_SIZE); - instr += JMP8_INSN_SIZE; + __text_gen_insn(buf, JMP8_INSN_OPCODE, buf, target, JMP8_INSN_SIZE); + buf += JMP8_INSN_SIZE; } else { - __text_gen_insn(instr, JMP32_INSN_OPCODE, instr, target, JMP32_INSN_SIZE); - instr += JMP32_INSN_SIZE; + __text_gen_insn(buf, JMP32_INSN_OPCODE, buf, target, JMP32_INSN_SIZE); + buf += JMP32_INSN_SIZE; } - for (;instr < target; instr++) - *instr = INT3_INSN_OPCODE; + for (;buf < target; buf++) + *buf = INT3_INSN_OPCODE; } extern s32 __retpoline_sites[], __retpoline_sites_end[]; @@ -187,12 +201,12 @@ static bool insn_is_nop(struct insn *insn) * Find the offset of the first non-NOP instruction starting at @offset * but no further than @len. */ -static int skip_nops(u8 *instr, int offset, int len) +static int skip_nops(u8 *buf, int offset, int len) { struct insn insn; for (; offset < len; offset += insn.length) { - if (insn_decode_kernel(&insn, &instr[offset])) + if (insn_decode_kernel(&insn, &buf[offset])) break; if (!insn_is_nop(&insn)) @@ -207,7 +221,7 @@ static int skip_nops(u8 *instr, int offset, int len) * to the end of the NOP sequence into a single NOP. 
*/ static bool -__optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, int *target) +__optimize_nops(const u8 * const instr, u8 *buf, size_t len, struct insn *insn, int *next, int *prev, int *target) { int i = *next - insn->length; @@ -222,12 +236,12 @@ __optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, if (insn_is_nop(insn)) { int nop = i; - *next = skip_nops(instr, *next, len); + *next = skip_nops(buf, *next, len); if (*target && *next == *target) nop = *prev; - add_nop(instr + nop, *next - nop); - DUMP_BYTES(ALT, instr, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); + add_nop(buf + nop, *next - nop); + DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); return true; } @@ -239,32 +253,22 @@ __optimize_nops(u8 *instr, size_t len, struct insn *insn, int *next, int *prev, * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ -static void __init_or_module noinline optimize_nops(u8 *instr, size_t len) +static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) { int prev, target = 0; for (int next, i = 0; i < len; i = next) { struct insn insn; - if (insn_decode_kernel(&insn, &instr[i])) + if (insn_decode_kernel(&insn, &buf[i])) return; next = i + insn.length; - __optimize_nops(instr, len, &insn, &next, &prev, &target); + __optimize_nops(instr, buf, len, &insn, &next, &prev, &target); } } -static void __init_or_module noinline optimize_nops_inplace(u8 *instr, size_t len) -{ - unsigned long flags; - - local_irq_save(flags); - optimize_nops(instr, len); - sync_core(); - local_irq_restore(flags); -} - /* * In this context, "source" is where the instructions are placed in the * section .altinstr_replacement, for example during kernel build by the @@ -335,11 +339,11 @@ bool need_reloc(unsigned long offset, u8 *src, size_t src_len) return (target < src || target > src + src_len); } -void 
apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) +void apply_relocation(u8 *buf, const u8 * const instr, size_t instrlen, u8 *repl, size_t repl_len) { int prev, target = 0; - for (int next, i = 0; i < len; i = next) { + for (int next, i = 0; i < instrlen; i = next) { struct insn insn; if (WARN_ON_ONCE(insn_decode_kernel(&insn, &buf[i]))) @@ -347,7 +351,7 @@ void apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) next = i + insn.length; - if (__optimize_nops(buf, len, &insn, &next, &prev, &target)) + if (__optimize_nops(instr, buf, instrlen, &insn, &next, &prev, &target)) continue; switch (insn.opcode.bytes[0]) { @@ -361,10 +365,10 @@ void apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) case JMP8_INSN_OPCODE: case JMP32_INSN_OPCODE: case CALL_INSN_OPCODE: - if (need_reloc(next + insn.immediate.value, src, src_len)) { + if (need_reloc(next + insn.immediate.value, repl, repl_len)) { apply_reloc(insn.immediate.nbytes, buf + i + insn_offset_immediate(&insn), - src - dest); + repl - instr); } /* @@ -372,7 +376,7 @@ void apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) */ if (insn.opcode.bytes[0] == JMP32_INSN_OPCODE) { s32 imm = insn.immediate.value; - imm += src - dest; + imm += repl - instr; imm += JMP32_INSN_SIZE - JMP8_INSN_SIZE; if ((imm >> 31) == (imm >> 7)) { buf[i+0] = JMP8_INSN_OPCODE; @@ -385,10 +389,10 @@ void apply_relocation(u8 *buf, size_t len, u8 *dest, u8 *src, size_t src_len) } if (insn_rip_relative(&insn)) { - if (need_reloc(next + insn.displacement.value, src, src_len)) { + if (need_reloc(next + insn.displacement.value, repl, repl_len)) { apply_reloc(insn.displacement.nbytes, buf + i + insn_offset_displacement(&insn), - src - dest); + repl - instr); } } } @@ -504,7 +508,9 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * patch if feature is *NOT* present. 
*/ if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) { - optimize_nops_inplace(instr, a->instrlen); + memcpy(insn_buff, instr, a->instrlen); + optimize_nops(instr, insn_buff, a->instrlen); + text_poke_early(instr, insn_buff, a->instrlen); continue; } @@ -526,7 +532,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, for (; insn_buff_sz < a->instrlen; insn_buff_sz++) insn_buff[insn_buff_sz] = 0x90; - apply_relocation(insn_buff, a->instrlen, instr, replacement, a->replacementlen); + apply_relocation(insn_buff, instr, a->instrlen, replacement, a->replacementlen); DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); @@ -761,7 +767,7 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) len = patch_retpoline(addr, &insn, bytes); if (len == insn.length) { - optimize_nops(bytes, len); + optimize_nops(addr, bytes, len); DUMP_BYTES(RETPOLINE, ((u8*)addr), len, "%px: orig: ", addr); DUMP_BYTES(RETPOLINE, ((u8*)bytes), len, "%px: repl: ", addr); text_poke_early(addr, bytes, len); diff --git a/arch/x86/kernel/callthunks.c b/arch/x86/kernel/callthunks.c index e92ff0c..4656474 100644 --- a/arch/x86/kernel/callthunks.c +++ b/arch/x86/kernel/callthunks.c @@ -185,8 +185,7 @@ static void *patch_dest(void *dest, bool direct) u8 *pad = dest - tsize; memcpy(insn_buff, skl_call_thunk_template, tsize); - apply_relocation(insn_buff, tsize, pad, - skl_call_thunk_template, tsize); + apply_relocation(insn_buff, pad, tsize, skl_call_thunk_template, tsize); /* Already patched? 
*/ if (!bcmp(pad, insn_buff, tsize)) @@ -308,8 +307,7 @@ static bool is_callthunk(void *addr) pad = (void *)(dest - tmpl_size); memcpy(insn_buff, skl_call_thunk_template, tmpl_size); - apply_relocation(insn_buff, tmpl_size, pad, - skl_call_thunk_template, tmpl_size); + apply_relocation(insn_buff, pad, tmpl_size, skl_call_thunk_template, tmpl_size); return !bcmp(pad, insn_buff, tmpl_size); } @@ -327,8 +325,7 @@ int x86_call_depth_emit_accounting(u8 **pprog, void *func, void *ip) return 0; memcpy(insn_buff, skl_call_thunk_template, tmpl_size); - apply_relocation(insn_buff, tmpl_size, ip, - skl_call_thunk_template, tmpl_size); + apply_relocation(insn_buff, ip, tmpl_size, skl_call_thunk_template, tmpl_size); memcpy(*pprog, insn_buff, tmpl_size); *pprog += tmpl_size; ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 2/4] x86/alternatives: Get rid of __optimize_nops() 2024-01-30 10:59 [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Borislav Petkov 2024-01-30 10:59 ` [PATCH 1/4] x86/alternatives: Use a temporary buffer when optimizing NOPs Borislav Petkov @ 2024-01-30 10:59 ` Borislav Petkov 2024-02-13 15:35 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 2024-01-30 10:59 ` [PATCH 3/4] x86/alternatives: Optimize optimize_nops() Borislav Petkov ` (2 subsequent siblings) 4 siblings, 2 replies; 15+ messages in thread From: Borislav Petkov @ 2024-01-30 10:59 UTC (permalink / raw) To: X86 ML; +Cc: Paul Gortmaker, LKML From: "Borislav Petkov (AMD)" <bp@alien8.de> There's no need to carve out bits of the NOP optimization functionality and look at JMP opcodes - simply do one more NOPs optimization pass at the end of the patching. A lot simpler code. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> --- arch/x86/kernel/alternative.c | 52 +++++++---------------------------- 1 file changed, 10 insertions(+), 42 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index d633eb59f2b6..2dd1c7fe0949 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -216,47 +216,12 @@ static int skip_nops(u8 *buf, int offset, int len) return offset; } -/* - * Optimize a sequence of NOPs, possibly preceded by an unconditional jump - * to the end of the NOP sequence into a single NOP. 
- */ -static bool __init_or_module -__optimize_nops(const u8 * const instr, u8 *buf, size_t len, struct insn *insn, int *next, int *prev, int *target) -{ - int i = *next - insn->length; - - switch (insn->opcode.bytes[0]) { - case JMP8_INSN_OPCODE: - case JMP32_INSN_OPCODE: - *prev = i; - *target = *next + insn->immediate.value; - return false; - } - - if (insn_is_nop(insn)) { - int nop = i; - - *next = skip_nops(buf, *next, len); - if (*target && *next == *target) - nop = *prev; - - add_nop(buf + nop, *next - nop); - DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); - return true; - } - - *target = 0; - return false; -} - /* * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) { - int prev, target = 0; - for (int next, i = 0; i < len; i = next) { struct insn insn; @@ -265,7 +230,14 @@ static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 * next = i + insn.length; - __optimize_nops(instr, buf, len, &insn, &next, &prev, &target); + if (insn_is_nop(&insn)) { + int nop = i; + + next = skip_nops(buf, next, len); + + add_nop(buf + nop, next - nop); + DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, next); + } } } @@ -342,8 +314,6 @@ bool need_reloc(unsigned long offset, u8 *src, size_t src_len) static void __init_or_module noinline apply_relocation(const u8 * const instr, u8 *buf, size_t len, u8 *src, size_t src_len) { - int prev, target = 0; - for (int next, i = 0; i < len; i = next) { struct insn insn; @@ -352,9 +322,6 @@ apply_relocation(const u8 * const instr, u8 *buf, size_t len, u8 *src, size_t sr next = i + insn.length; - if (__optimize_nops(instr, buf, len, &insn, &next, &prev, &target)) - continue; - switch (insn.opcode.bytes[0]) { case 0x0f: if (insn.opcode.bytes[1] < 0x80 || @@ -533,7 +500,8 @@ void __init_or_module noinline 
apply_alternatives(struct alt_instr *start, for (; insn_buff_sz < a->instrlen; insn_buff_sz++) insn_buff[insn_buff_sz] = 0x90; - apply_relocation(instr, insn_buff, a->instrlen, replacement, a->replacementlen); + apply_relocation(instr, insn_buff, a->instrlen, replacement, insn_buff_sz); + optimize_nops(instr, insn_buff, insn_buff_sz); DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); -- 2.43.0 ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Get rid of __optimize_nops() 2024-01-30 10:59 ` [PATCH 2/4] x86/alternatives: Get rid of __optimize_nops() Borislav Petkov @ 2024-02-13 15:35 ` tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-02-13 15:35 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: f9265b75648165f61467a7ea3c9bbae33be7ce27 Gitweb: https://git.kernel.org/tip/f9265b75648165f61467a7ea3c9bbae33be7ce27 Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:39 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 13 Feb 2024 16:25:46 +01:00 x86/alternatives: Get rid of __optimize_nops() There's no need to carve out bits of the NOP optimization functionality and look at JMP opcodes - simply do one more NOPs optimization pass at the end of patching. A lot simpler code. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240130105941.19707-3-bp@alien8.de --- arch/x86/kernel/alternative.c | 52 ++++++---------------------------- 1 file changed, 10 insertions(+), 42 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 835e343..cdbece3 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -217,46 +217,11 @@ static int skip_nops(u8 *buf, int offset, int len) } /* - * Optimize a sequence of NOPs, possibly preceded by an unconditional jump - * to the end of the NOP sequence into a single NOP. 
- */ -static bool __init_or_module -__optimize_nops(const u8 * const instr, u8 *buf, size_t len, struct insn *insn, int *next, int *prev, int *target) -{ - int i = *next - insn->length; - - switch (insn->opcode.bytes[0]) { - case JMP8_INSN_OPCODE: - case JMP32_INSN_OPCODE: - *prev = i; - *target = *next + insn->immediate.value; - return false; - } - - if (insn_is_nop(insn)) { - int nop = i; - - *next = skip_nops(buf, *next, len); - if (*target && *next == *target) - nop = *prev; - - add_nop(buf + nop, *next - nop); - DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); - return true; - } - - *target = 0; - return false; -} - -/* * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) { - int prev, target = 0; - for (int next, i = 0; i < len; i = next) { struct insn insn; @@ -265,7 +230,14 @@ static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 * next = i + insn.length; - __optimize_nops(instr, buf, len, &insn, &next, &prev, &target); + if (insn_is_nop(&insn)) { + int nop = i; + + next = skip_nops(buf, next, len); + + add_nop(buf + nop, next - nop); + DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, next); + } } } @@ -342,8 +314,6 @@ bool need_reloc(unsigned long offset, u8 *src, size_t src_len) static void __init_or_module noinline apply_relocation(const u8 * const instr, u8 *buf, size_t len, u8 *src, size_t src_len) { - int prev, target = 0; - for (int next, i = 0; i < len; i = next) { struct insn insn; @@ -352,9 +322,6 @@ apply_relocation(const u8 * const instr, u8 *buf, size_t len, u8 *src, size_t sr next = i + insn.length; - if (__optimize_nops(instr, buf, len, &insn, &next, &prev, &target)) - continue; - switch (insn.opcode.bytes[0]) { case 0x0f: if (insn.opcode.bytes[1] < 0x80 || @@ -533,7 +500,8 @@ void __init_or_module noinline 
apply_alternatives(struct alt_instr *start, for (; insn_buff_sz < a->instrlen; insn_buff_sz++) insn_buff[insn_buff_sz] = 0x90; - apply_relocation(instr, insn_buff, a->instrlen, replacement, a->replacementlen); + apply_relocation(instr, insn_buff, a->instrlen, replacement, insn_buff_sz); + optimize_nops(instr, insn_buff, insn_buff_sz); DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Get rid of __optimize_nops() 2024-01-30 10:59 ` [PATCH 2/4] x86/alternatives: Get rid of __optimize_nops() Borislav Petkov 2024-02-13 15:35 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: da8f9cf7e721c690ca169fa88641a6c4cee5cae4 Gitweb: https://git.kernel.org/tip/da8f9cf7e721c690ca169fa88641a6c4cee5cae4 Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:39 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 09 Apr 2024 18:12:53 +02:00 x86/alternatives: Get rid of __optimize_nops() There's no need to carve out bits of the NOP optimization functionality and look at JMP opcodes - simply do one more NOPs optimization pass at the end of patching. A lot simpler code. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240130105941.19707-3-bp@alien8.de --- arch/x86/kernel/alternative.c | 59 +++++++++------------------------- 1 file changed, 16 insertions(+), 43 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index ec94f13..4b3378c 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -217,46 +217,11 @@ static int skip_nops(u8 *buf, int offset, int len) } /* - * Optimize a sequence of NOPs, possibly preceded by an unconditional jump - * to the end of the NOP sequence into a single NOP. 
- */ -static bool -__optimize_nops(const u8 * const instr, u8 *buf, size_t len, struct insn *insn, int *next, int *prev, int *target) -{ - int i = *next - insn->length; - - switch (insn->opcode.bytes[0]) { - case JMP8_INSN_OPCODE: - case JMP32_INSN_OPCODE: - *prev = i; - *target = *next + insn->immediate.value; - return false; - } - - if (insn_is_nop(insn)) { - int nop = i; - - *next = skip_nops(buf, *next, len); - if (*target && *next == *target) - nop = *prev; - - add_nop(buf + nop, *next - nop); - DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, *next); - return true; - } - - *target = 0; - return false; -} - -/* * "noinline" to cause control flow change and thus invalidate I$ and * cause refetch after modification. */ -static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) +static void noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) { - int prev, target = 0; - for (int next, i = 0; i < len; i = next) { struct insn insn; @@ -265,7 +230,14 @@ static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 * next = i + insn.length; - __optimize_nops(instr, buf, len, &insn, &next, &prev, &target); + if (insn_is_nop(&insn)) { + int nop = i; + + next = skip_nops(buf, next, len); + + add_nop(buf + nop, next - nop); + DUMP_BYTES(ALT, buf, len, "%px: [%d:%d) optimized NOPs: ", instr, nop, next); + } } } @@ -339,10 +311,8 @@ bool need_reloc(unsigned long offset, u8 *src, size_t src_len) return (target < src || target > src + src_len); } -void apply_relocation(u8 *buf, const u8 * const instr, size_t instrlen, u8 *repl, size_t repl_len) +static void __apply_relocation(u8 *buf, const u8 * const instr, size_t instrlen, u8 *repl, size_t repl_len) { - int prev, target = 0; - for (int next, i = 0; i < instrlen; i = next) { struct insn insn; @@ -351,9 +321,6 @@ void apply_relocation(u8 *buf, const u8 * const instr, size_t instrlen, u8 *repl next = i + insn.length; - if 
(__optimize_nops(instr, buf, instrlen, &insn, &next, &prev, &target)) - continue; - switch (insn.opcode.bytes[0]) { case 0x0f: if (insn.opcode.bytes[1] < 0x80 || @@ -398,6 +365,12 @@ void apply_relocation(u8 *buf, const u8 * const instr, size_t instrlen, u8 *repl } } +void apply_relocation(u8 *buf, const u8 * const instr, size_t instrlen, u8 *repl, size_t repl_len) +{ + __apply_relocation(buf, instr, instrlen, repl, repl_len); + optimize_nops(instr, buf, repl_len); +} + /* Low-level backend functions usable from alternative code replacements. */ DEFINE_ASM_FUNC(nop_func, "", .entry.text); EXPORT_SYMBOL_GPL(nop_func); ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 3/4] x86/alternatives: Optimize optimize_nops() 2024-01-30 10:59 [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Borislav Petkov 2024-01-30 10:59 ` [PATCH 1/4] x86/alternatives: Use a temporary buffer when optimizing NOPs Borislav Petkov 2024-01-30 10:59 ` [PATCH 2/4] x86/alternatives: Get rid of __optimize_nops() Borislav Petkov @ 2024-01-30 10:59 ` Borislav Petkov 2024-02-13 15:35 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 2024-01-30 10:59 ` [PATCH 4/4] x86/alternatives: Sort local vars in apply_alternatives() Borislav Petkov 2024-01-31 16:17 ` [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Paul Gortmaker 4 siblings, 2 replies; 15+ messages in thread From: Borislav Petkov @ 2024-01-30 10:59 UTC (permalink / raw) To: X86 ML; +Cc: Paul Gortmaker, LKML From: "Borislav Petkov (AMD)" <bp@alien8.de> Return early if NOPs have already been optimized. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> --- arch/x86/kernel/alternative.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 2dd1c7fe0949..68ee46c379c1 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -233,6 +233,10 @@ static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 * if (insn_is_nop(&insn)) { int nop = i; + /* Has the NOP already been optimized? */ + if (i + insn.length == len) + return; + next = skip_nops(buf, next, len); add_nop(buf + nop, next - nop); -- 2.43.0 ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Optimize optimize_nops() 2024-01-30 10:59 ` [PATCH 3/4] x86/alternatives: Optimize optimize_nops() Borislav Petkov @ 2024-02-13 15:35 ` tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-02-13 15:35 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: f7045230fa2933bf3b38c9c92b5ba46486ef33f9 Gitweb: https://git.kernel.org/tip/f7045230fa2933bf3b38c9c92b5ba46486ef33f9 Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:40 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 13 Feb 2024 16:26:56 +01:00 x86/alternatives: Optimize optimize_nops() Return early if NOPs have already been optimized. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240130105941.19707-4-bp@alien8.de --- arch/x86/kernel/alternative.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index cdbece3..1ceaaab 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -233,6 +233,10 @@ static void __init_or_module noinline optimize_nops(const u8 * const instr, u8 * if (insn_is_nop(&insn)) { int nop = i; + /* Has the NOP already been optimized? */ + if (i + insn.length == len) + return; + next = skip_nops(buf, next, len); add_nop(buf + nop, next - nop); ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Optimize optimize_nops() 2024-01-30 10:59 ` [PATCH 3/4] x86/alternatives: Optimize optimize_nops() Borislav Petkov 2024-02-13 15:35 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: c3a3cb5c3d893d7ca75c773ddd107832f13e7b57 Gitweb: https://git.kernel.org/tip/c3a3cb5c3d893d7ca75c773ddd107832f13e7b57 Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:40 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 09 Apr 2024 18:15:03 +02:00 x86/alternatives: Optimize optimize_nops() Return early if NOPs have already been optimized. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240130105941.19707-4-bp@alien8.de --- arch/x86/kernel/alternative.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 4b3378c..67dd7c3 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -233,6 +233,10 @@ static void noinline optimize_nops(const u8 * const instr, u8 *buf, size_t len) if (insn_is_nop(&insn)) { int nop = i; + /* Has the NOP already been optimized? */ + if (i + insn.length == len) + return; + next = skip_nops(buf, next, len); add_nop(buf + nop, next - nop); ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [PATCH 4/4] x86/alternatives: Sort local vars in apply_alternatives() 2024-01-30 10:59 [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Borislav Petkov ` (2 preceding siblings ...) 2024-01-30 10:59 ` [PATCH 3/4] x86/alternatives: Optimize optimize_nops() Borislav Petkov @ 2024-01-30 10:59 ` Borislav Petkov 2024-02-13 15:35 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 2024-01-31 16:17 ` [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Paul Gortmaker 4 siblings, 2 replies; 15+ messages in thread From: Borislav Petkov @ 2024-01-30 10:59 UTC (permalink / raw) To: X86 ML; +Cc: Paul Gortmaker, LKML From: "Borislav Petkov (AMD)" <bp@alien8.de> In a reverse x-mas tree. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> --- arch/x86/kernel/alternative.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 68ee46c379c1..ee0f681ae107 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -440,9 +440,9 @@ static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a) void __init_or_module noinline apply_alternatives(struct alt_instr *start, struct alt_instr *end) { - struct alt_instr *a; - u8 *instr, *replacement; u8 insn_buff[MAX_PATCH_LEN]; + u8 *instr, *replacement; + struct alt_instr *a; DPRINTK(ALT, "alt table %px, -> %px", start, end); -- 2.43.0 ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Sort local vars in apply_alternatives() 2024-01-30 10:59 ` [PATCH 4/4] x86/alternatives: Sort local vars in apply_alternatives() Borislav Petkov @ 2024-02-13 15:35 ` tip-bot2 for Borislav Petkov (AMD) 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-02-13 15:35 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: cadb7de3170e22893b617276c297216039aad88d Gitweb: https://git.kernel.org/tip/cadb7de3170e22893b617276c297216039aad88d Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:41 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 13 Feb 2024 16:27:46 +01:00 x86/alternatives: Sort local vars in apply_alternatives() In a reverse x-mas tree. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240130105941.19707-5-bp@alien8.de --- arch/x86/kernel/alternative.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 1ceaaab..9aaf703 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -440,9 +440,9 @@ static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a) void __init_or_module noinline apply_alternatives(struct alt_instr *start, struct alt_instr *end) { - struct alt_instr *a; - u8 *instr, *replacement; u8 insn_buff[MAX_PATCH_LEN]; + u8 *instr, *replacement; + struct alt_instr *a; DPRINTK(ALT, "alt table %px, -> %px", start, end); ^ permalink raw reply related [flat|nested] 15+ messages in thread
* [tip: x86/alternatives] x86/alternatives: Sort local vars in apply_alternatives() 2024-01-30 10:59 ` [PATCH 4/4] x86/alternatives: Sort local vars in apply_alternatives() Borislav Petkov 2024-02-13 15:35 ` [tip: x86/alternatives] " tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 ` tip-bot2 for Borislav Petkov (AMD) 1 sibling, 0 replies; 15+ messages in thread From: tip-bot2 for Borislav Petkov (AMD) @ 2024-04-09 17:11 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: 05d277c9a9023e11d2f30a994bde08b854af52a0 Gitweb: https://git.kernel.org/tip/05d277c9a9023e11d2f30a994bde08b854af52a0 Author: Borislav Petkov (AMD) <bp@alien8.de> AuthorDate: Tue, 30 Jan 2024 11:59:41 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Tue, 09 Apr 2024 18:16:57 +02:00 x86/alternatives: Sort local vars in apply_alternatives() In a reverse x-mas tree. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240130105941.19707-5-bp@alien8.de --- arch/x86/kernel/alternative.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 67dd7c3..7555c15 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -445,9 +445,9 @@ static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a) void __init_or_module noinline apply_alternatives(struct alt_instr *start, struct alt_instr *end) { - struct alt_instr *a; - u8 *instr, *replacement; u8 insn_buff[MAX_PATCH_LEN]; + u8 *instr, *replacement; + struct alt_instr *a; DPRINTK(ALT, "alt table %px, -> %px", start, end); ^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer 2024-01-30 10:59 [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Borislav Petkov ` (3 preceding siblings ...) 2024-01-30 10:59 ` [PATCH 4/4] x86/alternatives: Sort local vars in apply_alternatives() Borislav Petkov @ 2024-01-31 16:17 ` Paul Gortmaker 2024-01-31 16:25 ` Borislav Petkov 4 siblings, 1 reply; 15+ messages in thread From: Paul Gortmaker @ 2024-01-31 16:17 UTC (permalink / raw) To: Borislav Petkov; +Cc: X86 ML, LKML [[PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer] On 30/01/2024 (Tue 11:59) Borislav Petkov wrote: > From: "Borislav Petkov (AMD)" <bp@alien8.de> > > Hi, > > here's a small set which sprang out from my reacting to the fact that > NOPs optimization in the alternatives code needs to happen on > a temporary buffer like the other alternative operations - not in-place > and cause all kinds of fun. > > The result is this, which makes the alternatives code simpler and it is > a net win, size-wise: > > 1 file changed, 50 insertions(+), 72 deletions(-) > > > Constructive feedback is always welcome! So, I figured I would set up the same reproducer, on the same machine; build and test a known broken NOP rewrite kernel like v6.5.0 to confirm I could still reproduce the boot fail in approximately 2% of runs. And then move to testing this series. Well, much to my annoyance my plan broke down at step one. After about three hours and over 400 runs, I didn't get a single fail. I still had a known broken build from the original reporting in October of v6.5.7, so I let that run for over 300 iterations, and also didn't get any failures. I have to assume that even though I'm using the same host, same scripts, that because I was testing on Yocto master, other things have changed since October - maybe binutils, qemu, the runqemu script, ... 
In theory, I could try and reset Yocto back to October-ish but that is probably of diminishing returns. And I can't unwind the host machine distro updates that have happened since October. With hindsight and knowledge of what the issue was and how narrow the window was to trigger it, I guess this shouldn't be a surprise. So as a "next best" effort, I let this rc1-alt-v2 branch run overnight, and after over 2200 iterations, I didn't get any boot fails. Paul. -- > > Thx. > > Borislav Petkov (AMD) (4): > x86/alternatives: Use a temporary buffer when optimizing NOPs > x86/alternatives: Get rid of __optimize_nops() > x86/alternatives: Optimize optimize_nops() > x86/alternatives: Sort local vars in apply_alternatives() > > arch/x86/kernel/alternative.c | 122 ++++++++++++++-------------------- > 1 file changed, 50 insertions(+), 72 deletions(-) > > -- > 2.43.0 > ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer 2024-01-31 16:17 ` [PATCH 0/4] x86/alternatives: Do NOPs optimization on a temporary buffer Paul Gortmaker @ 2024-01-31 16:25 ` Borislav Petkov 0 siblings, 0 replies; 15+ messages in thread From: Borislav Petkov @ 2024-01-31 16:25 UTC (permalink / raw) To: Paul Gortmaker; +Cc: X86 ML, LKML On Wed, Jan 31, 2024 at 11:17:27AM -0500, Paul Gortmaker wrote: > So as a "next best" effort, I let this rc1-alt-v2 branch run overnight, > and after over 2200 iterations, I didn't get any boot fails. Thanks a lot! As mentioned on IRC yesterday, the important thing is that this doesn't break any of your guests. And that is good enough. Much appreciated, thanks again! -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette ^ permalink raw reply [flat|nested] 15+ messages in thread