* [PATCH v5 0/2] x86/alternative: Patch a single alternative location only once @ 2026-01-05 8:04 Juergen Gross 2026-01-05 8:04 ` [PATCH v5 1/2] x86/alternative: Use helper functions for patching alternatives Juergen Gross 2026-01-05 8:04 ` [PATCH v5 2/2] x86/alternative: Patch a single alternative location only once Juergen Gross 0 siblings, 2 replies; 5+ messages in thread From: Juergen Gross @ 2026-01-05 8:04 UTC (permalink / raw) To: linux-kernel, x86 Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin Instead of patching a single location potentially multiple times in case of nested ALTERNATIVE()s, do the patching only after having evaluated all alt_instr instances for that location. Changes in V2: - complete rework (Boris Petkov) Changes in V3: - split former V2 patch into 2 by introducing a helper function (Boris Petkov) - repost the small cleanup patch 1 which was taken before, but has somehow vanished from the tip x86/alternative branch (it is still in the tip master branch, but I couldn't find it in any other tip branch). Changes in V4: - use 3 helpers instead of 1 (Boris Petkov) Changes in V5: - dropped patch 1 of V4, as already applied - small cosmetic changes (Boris Petkov) Juergen Gross (2): x86/alternative: Use helper functions for patching alternatives x86/alternative: Patch a single alternative location only once arch/x86/kernel/alternative.c | 149 +++++++++++++++++++++------------- 1 file changed, 92 insertions(+), 57 deletions(-) -- 2.51.0 ^ permalink raw reply [flat|nested] 5+ messages in thread
* [PATCH v5 1/2] x86/alternative: Use helper functions for patching alternatives 2026-01-05 8:04 [PATCH v5 0/2] x86/alternative: Patch a single alternative location only once Juergen Gross @ 2026-01-05 8:04 ` Juergen Gross 2026-01-08 19:18 ` [tip: x86/alternatives] " tip-bot2 for Juergen Gross 2026-01-05 8:04 ` [PATCH v5 2/2] x86/alternative: Patch a single alternative location only once Juergen Gross 1 sibling, 1 reply; 5+ messages in thread From: Juergen Gross @ 2026-01-05 8:04 UTC (permalink / raw) To: linux-kernel, x86 Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin Tidy up apply_alternatives() by moving the main patching action of a single alternative instance into 3 helper functions: - analyze_patch_site() for selecting whether patching should occur and for handling nested alternatives. - prep_patch_site() for applying any needed relocations and issuing debug prints for the site. - patch_site() doing the real patching action, including optimization of any padding NOPs. In prep_patch_site() use __apply_relocation() instead of text_poke_apply_relocation(), as the NOP optimization is now done in patch_site() for all cases. 
Suggested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Juergen Gross <jgross@suse.com> --- V3: - new patch V4: - further split coding in more helpers (Borislav Petkov) V5: - apply cosmetic changes as suggested by Boris --- arch/x86/kernel/alternative.c | 142 +++++++++++++++++++++------------- 1 file changed, 87 insertions(+), 55 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 28518371d8bf..6e3eec048d19 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -586,6 +586,88 @@ static inline u8 * instr_va(struct alt_instr *i) return (u8 *)&i->instr_offset + i->instr_offset; } +struct patch_site { + u8 *instr; + struct alt_instr *alt; + u8 buff[MAX_PATCH_LEN]; + u8 len; +}; + +static void __init_or_module analyze_patch_site(struct patch_site *ps, + struct alt_instr *start, + struct alt_instr *end) +{ + struct alt_instr *r; + + ps->instr = instr_va(start); + ps->len = start->instrlen; + + /* + * In case of nested ALTERNATIVE()s the outer alternative might add + * more padding. To ensure consistent patching find the max padding for + * all alt_instr entries for this site (nested alternatives result in + * consecutive entries). + */ + for (r = start+1; r < end && instr_va(r) == ps->instr; r++) { + ps->len = max(ps->len, r->instrlen); + start->instrlen = r->instrlen = ps->len; + } + + BUG_ON(ps->len > sizeof(ps->buff)); + BUG_ON(start->cpuid >= (NCAPINTS + NBUGINTS) * 32); + + /* + * Patch if either: + * - feature is present + * - feature not present but ALT_FLAG_NOT is set to mean, + * patch if feature is *NOT* present. + */ + if (!boot_cpu_has(start->cpuid) == !(start->flags & ALT_FLAG_NOT)) + ps->alt = NULL; + else + ps->alt = start; +} + +static void __init_or_module prep_patch_site(struct patch_site *ps) +{ + struct alt_instr *alt = ps->alt; + u8 buff_sz; + u8 *repl; + + if (!alt) { + /* Nothing to patch, use original instruction. 
*/ + memcpy(ps->buff, ps->instr, ps->len); + return; + } + + repl = (u8 *)&alt->repl_offset + alt->repl_offset; + DPRINTK(ALT, "feat: %d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d) flags: 0x%x", + alt->cpuid >> 5, alt->cpuid & 0x1f, + ps->instr, ps->instr, ps->len, + repl, alt->replacementlen, alt->flags); + + memcpy(ps->buff, repl, alt->replacementlen); + buff_sz = alt->replacementlen; + + if (alt->flags & ALT_FLAG_DIRECT_CALL) + buff_sz = alt_replace_call(ps->instr, ps->buff, alt); + + for (; buff_sz < ps->len; buff_sz++) + ps->buff[buff_sz] = 0x90; + + __apply_relocation(ps->buff, ps->instr, ps->len, repl, alt->replacementlen); + + DUMP_BYTES(ALT, ps->instr, ps->len, "%px: old_insn: ", ps->instr); + DUMP_BYTES(ALT, repl, alt->replacementlen, "%px: rpl_insn: ", repl); + DUMP_BYTES(ALT, ps->buff, ps->len, "%px: final_insn: ", ps->instr); +} + +static void __init_or_module patch_site(struct patch_site *ps) +{ + optimize_nops(ps->instr, ps->buff, ps->len); + text_poke_early(ps->instr, ps->buff, ps->len); +} + /* * Replace instructions with better alternatives for this CPU type. This runs * before SMP is initialized to avoid SMP problems with self modifying code. @@ -599,9 +681,7 @@ static inline u8 * instr_va(struct alt_instr *i) void __init_or_module noinline apply_alternatives(struct alt_instr *start, struct alt_instr *end) { - u8 insn_buff[MAX_PATCH_LEN]; - u8 *instr, *replacement; - struct alt_instr *a, *b; + struct alt_instr *a; DPRINTK(ALT, "alt table %px, -> %px", start, end); @@ -625,59 +705,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * order. */ for (a = start; a < end; a++) { - unsigned int insn_buff_sz = 0; - - /* - * In case of nested ALTERNATIVE()s the outer alternative might - * add more padding. To ensure consistent patching find the max - * padding for all alt_instr entries for this site (nested - * alternatives result in consecutive entries). 
- */ - for (b = a+1; b < end && instr_va(b) == instr_va(a); b++) { - u8 len = max(a->instrlen, b->instrlen); - a->instrlen = b->instrlen = len; - } - - instr = instr_va(a); - replacement = (u8 *)&a->repl_offset + a->repl_offset; - BUG_ON(a->instrlen > sizeof(insn_buff)); - BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32); - - /* - * Patch if either: - * - feature is present - * - feature not present but ALT_FLAG_NOT is set to mean, - * patch if feature is *NOT* present. - */ - if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) { - memcpy(insn_buff, instr, a->instrlen); - optimize_nops(instr, insn_buff, a->instrlen); - text_poke_early(instr, insn_buff, a->instrlen); - continue; - } - - DPRINTK(ALT, "feat: %d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d) flags: 0x%x", - a->cpuid >> 5, - a->cpuid & 0x1f, - instr, instr, a->instrlen, - replacement, a->replacementlen, a->flags); - - memcpy(insn_buff, replacement, a->replacementlen); - insn_buff_sz = a->replacementlen; - - if (a->flags & ALT_FLAG_DIRECT_CALL) - insn_buff_sz = alt_replace_call(instr, insn_buff, a); - - for (; insn_buff_sz < a->instrlen; insn_buff_sz++) - insn_buff[insn_buff_sz] = 0x90; - - text_poke_apply_relocation(insn_buff, instr, a->instrlen, replacement, a->replacementlen); - - DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); - DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); - DUMP_BYTES(ALT, insn_buff, insn_buff_sz, "%px: final_insn: ", instr); + struct patch_site ps; - text_poke_early(instr, insn_buff, insn_buff_sz); + analyze_patch_site(&ps, a, end); + prep_patch_site(&ps); + patch_site(&ps); } kasan_enable_current(); -- 2.51.0 ^ permalink raw reply related [flat|nested] 5+ messages in thread
* [tip: x86/alternatives] x86/alternative: Use helper functions for patching alternatives 2026-01-05 8:04 ` [PATCH v5 1/2] x86/alternative: Use helper functions for patching alternatives Juergen Gross @ 2026-01-08 19:18 ` tip-bot2 for Juergen Gross 0 siblings, 0 replies; 5+ messages in thread From: tip-bot2 for Juergen Gross @ 2026-01-08 19:18 UTC (permalink / raw) To: linux-tip-commits; +Cc: Borislav Petkov, Juergen Gross, x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: 544b4e15ed106b0e8cd2d584f576e3fda13d8f5f Gitweb: https://git.kernel.org/tip/544b4e15ed106b0e8cd2d584f576e3fda13d8f5f Author: Juergen Gross <jgross@suse.com> AuthorDate: Mon, 05 Jan 2026 09:04:51 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Wed, 07 Jan 2026 16:05:11 +01:00 x86/alternative: Use helper functions for patching alternatives Tidy up apply_alternatives() by moving the main patching action of a single alternative instance into 3 helper functions: - analyze_patch_site() for selection whether patching should occur or not and to handle nested alternatives. - prep_patch_site() for applying any needed relocations and issuing debug prints for the site. - patch_site() doing the real patching action, including optimization of any padding NOPs. In prep_patch_site() use __apply_relocation() instead of text_poke_apply_relocation(), as the NOP optimization is now done in patch_site() for all cases. 
Suggested-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://patch.msgid.link/20260105080452.5064-2-jgross@suse.com --- arch/x86/kernel/alternative.c | 142 ++++++++++++++++++++------------- 1 file changed, 87 insertions(+), 55 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 2851837..6e3eec0 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -586,6 +586,88 @@ static inline u8 * instr_va(struct alt_instr *i) return (u8 *)&i->instr_offset + i->instr_offset; } +struct patch_site { + u8 *instr; + struct alt_instr *alt; + u8 buff[MAX_PATCH_LEN]; + u8 len; +}; + +static void __init_or_module analyze_patch_site(struct patch_site *ps, + struct alt_instr *start, + struct alt_instr *end) +{ + struct alt_instr *r; + + ps->instr = instr_va(start); + ps->len = start->instrlen; + + /* + * In case of nested ALTERNATIVE()s the outer alternative might add + * more padding. To ensure consistent patching find the max padding for + * all alt_instr entries for this site (nested alternatives result in + * consecutive entries). + */ + for (r = start+1; r < end && instr_va(r) == ps->instr; r++) { + ps->len = max(ps->len, r->instrlen); + start->instrlen = r->instrlen = ps->len; + } + + BUG_ON(ps->len > sizeof(ps->buff)); + BUG_ON(start->cpuid >= (NCAPINTS + NBUGINTS) * 32); + + /* + * Patch if either: + * - feature is present + * - feature not present but ALT_FLAG_NOT is set to mean, + * patch if feature is *NOT* present. + */ + if (!boot_cpu_has(start->cpuid) == !(start->flags & ALT_FLAG_NOT)) + ps->alt = NULL; + else + ps->alt = start; +} + +static void __init_or_module prep_patch_site(struct patch_site *ps) +{ + struct alt_instr *alt = ps->alt; + u8 buff_sz; + u8 *repl; + + if (!alt) { + /* Nothing to patch, use original instruction. 
*/ + memcpy(ps->buff, ps->instr, ps->len); + return; + } + + repl = (u8 *)&alt->repl_offset + alt->repl_offset; + DPRINTK(ALT, "feat: %d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d) flags: 0x%x", + alt->cpuid >> 5, alt->cpuid & 0x1f, + ps->instr, ps->instr, ps->len, + repl, alt->replacementlen, alt->flags); + + memcpy(ps->buff, repl, alt->replacementlen); + buff_sz = alt->replacementlen; + + if (alt->flags & ALT_FLAG_DIRECT_CALL) + buff_sz = alt_replace_call(ps->instr, ps->buff, alt); + + for (; buff_sz < ps->len; buff_sz++) + ps->buff[buff_sz] = 0x90; + + __apply_relocation(ps->buff, ps->instr, ps->len, repl, alt->replacementlen); + + DUMP_BYTES(ALT, ps->instr, ps->len, "%px: old_insn: ", ps->instr); + DUMP_BYTES(ALT, repl, alt->replacementlen, "%px: rpl_insn: ", repl); + DUMP_BYTES(ALT, ps->buff, ps->len, "%px: final_insn: ", ps->instr); +} + +static void __init_or_module patch_site(struct patch_site *ps) +{ + optimize_nops(ps->instr, ps->buff, ps->len); + text_poke_early(ps->instr, ps->buff, ps->len); +} + /* * Replace instructions with better alternatives for this CPU type. This runs * before SMP is initialized to avoid SMP problems with self modifying code. @@ -599,9 +681,7 @@ static inline u8 * instr_va(struct alt_instr *i) void __init_or_module noinline apply_alternatives(struct alt_instr *start, struct alt_instr *end) { - u8 insn_buff[MAX_PATCH_LEN]; - u8 *instr, *replacement; - struct alt_instr *a, *b; + struct alt_instr *a; DPRINTK(ALT, "alt table %px, -> %px", start, end); @@ -625,59 +705,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * order. */ for (a = start; a < end; a++) { - unsigned int insn_buff_sz = 0; - - /* - * In case of nested ALTERNATIVE()s the outer alternative might - * add more padding. To ensure consistent patching find the max - * padding for all alt_instr entries for this site (nested - * alternatives result in consecutive entries). 
- */ - for (b = a+1; b < end && instr_va(b) == instr_va(a); b++) { - u8 len = max(a->instrlen, b->instrlen); - a->instrlen = b->instrlen = len; - } - - instr = instr_va(a); - replacement = (u8 *)&a->repl_offset + a->repl_offset; - BUG_ON(a->instrlen > sizeof(insn_buff)); - BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32); - - /* - * Patch if either: - * - feature is present - * - feature not present but ALT_FLAG_NOT is set to mean, - * patch if feature is *NOT* present. - */ - if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) { - memcpy(insn_buff, instr, a->instrlen); - optimize_nops(instr, insn_buff, a->instrlen); - text_poke_early(instr, insn_buff, a->instrlen); - continue; - } - - DPRINTK(ALT, "feat: %d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d) flags: 0x%x", - a->cpuid >> 5, - a->cpuid & 0x1f, - instr, instr, a->instrlen, - replacement, a->replacementlen, a->flags); - - memcpy(insn_buff, replacement, a->replacementlen); - insn_buff_sz = a->replacementlen; - - if (a->flags & ALT_FLAG_DIRECT_CALL) - insn_buff_sz = alt_replace_call(instr, insn_buff, a); - - for (; insn_buff_sz < a->instrlen; insn_buff_sz++) - insn_buff[insn_buff_sz] = 0x90; - - text_poke_apply_relocation(insn_buff, instr, a->instrlen, replacement, a->replacementlen); - - DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr); - DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement); - DUMP_BYTES(ALT, insn_buff, insn_buff_sz, "%px: final_insn: ", instr); + struct patch_site ps; - text_poke_early(instr, insn_buff, insn_buff_sz); + analyze_patch_site(&ps, a, end); + prep_patch_site(&ps); + patch_site(&ps); } kasan_enable_current(); ^ permalink raw reply related [flat|nested] 5+ messages in thread
* [PATCH v5 2/2] x86/alternative: Patch a single alternative location only once 2026-01-05 8:04 [PATCH v5 0/2] x86/alternative: Patch a single alternative location only once Juergen Gross 2026-01-05 8:04 ` [PATCH v5 1/2] x86/alternative: Use helper functions for patching alternatives Juergen Gross @ 2026-01-05 8:04 ` Juergen Gross 2026-01-08 19:18 ` [tip: x86/alternatives] " tip-bot2 for Juergen Gross 1 sibling, 1 reply; 5+ messages in thread From: Juergen Gross @ 2026-01-05 8:04 UTC (permalink / raw) To: linux-kernel, x86 Cc: Juergen Gross, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin Instead of patching a single location potentially multiple times in case of nested ALTERNATIVE()s, do the patching only after having evaluated all alt_instr instances for that location. This has multiple advantages: - In case of replacing an indirect with a direct call using the ALT_FLAG_DIRECT_CALL flag, there is no longer the need to have that instance before any other instances at the same location (the original instruction is needed for finding the target of the direct call). This issue has been hit when trying to do paravirt patching similar to the following: ALTERNATIVE_2(PARAVIRT_CALL, // indirect call instr, feature, // native instruction ALT_CALL_INSTR, X86_FEATURE_XENPV) // Xen function In case "feature" was true, "instr" replaced the indirect call. Under Xen PV the patching to have a direct call failed, as the original indirect call was no longer there to find the call target. - In case of nested ALTERNATIVE()s there is no intermediate replacement visible. This avoids any problems in case e.g. an interrupt is happening between the single instances and the patched location is used during handling the interrupt. 
Signed-off-by: Juergen Gross <jgross@suse.com> --- V2: - complete rework (Boris Petkov) V3: - rebase to added patch 2 V5: - small cosmetic changes (Boris Petkov) - rebase due to changes in patch --- arch/x86/kernel/alternative.c | 49 +++++++++++++++++++---------------- 1 file changed, 26 insertions(+), 23 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 6e3eec048d19..693b59b2f7d0 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -593,39 +593,38 @@ struct patch_site { u8 len; }; -static void __init_or_module analyze_patch_site(struct patch_site *ps, - struct alt_instr *start, - struct alt_instr *end) +static struct alt_instr * __init_or_module analyze_patch_site(struct patch_site *ps, + struct alt_instr *start, + struct alt_instr *end) { - struct alt_instr *r; + struct alt_instr *alt = start; ps->instr = instr_va(start); - ps->len = start->instrlen; /* * In case of nested ALTERNATIVE()s the outer alternative might add * more padding. To ensure consistent patching find the max padding for * all alt_instr entries for this site (nested alternatives result in * consecutive entries). + * Find the last alt_instr eligible for patching at the site. */ - for (r = start+1; r < end && instr_va(r) == ps->instr; r++) { - ps->len = max(ps->len, r->instrlen); - start->instrlen = r->instrlen = ps->len; + for (; alt < end && instr_va(alt) == ps->instr; alt++) { + ps->len = max(ps->len, alt->instrlen); + + BUG_ON(alt->cpuid >= (NCAPINTS + NBUGINTS) * 32); + /* + * Patch if either: + * - feature is present + * - feature not present but ALT_FLAG_NOT is set to mean, + * patch if feature is *NOT* present. 
+ */ + if (!boot_cpu_has(alt->cpuid) != !(alt->flags & ALT_FLAG_NOT)) + ps->alt = alt; } BUG_ON(ps->len > sizeof(ps->buff)); - BUG_ON(start->cpuid >= (NCAPINTS + NBUGINTS) * 32); - /* - * Patch if either: - * - feature is present - * - feature not present but ALT_FLAG_NOT is set to mean, - * patch if feature is *NOT* present. - */ - if (!boot_cpu_has(start->cpuid) == !(start->flags & ALT_FLAG_NOT)) - ps->alt = NULL; - else - ps->alt = start; + return alt; } static void __init_or_module prep_patch_site(struct patch_site *ps) @@ -704,10 +703,14 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * So be careful if you want to change the scan order to any other * order. */ - for (a = start; a < end; a++) { - struct patch_site ps; - - analyze_patch_site(&ps, a, end); + a = start; + while (a < end) { + struct patch_site ps = { + .alt = NULL, + .len = 0 + }; + + a = analyze_patch_site(&ps, a, end); prep_patch_site(&ps); patch_site(&ps); } -- 2.51.0 ^ permalink raw reply related [flat|nested] 5+ messages in thread
* [tip: x86/alternatives] x86/alternative: Patch a single alternative location only once 2026-01-05 8:04 ` [PATCH v5 2/2] x86/alternative: Patch a single alternative location only once Juergen Gross @ 2026-01-08 19:18 ` tip-bot2 for Juergen Gross 0 siblings, 0 replies; 5+ messages in thread From: tip-bot2 for Juergen Gross @ 2026-01-08 19:18 UTC (permalink / raw) To: linux-tip-commits; +Cc: Juergen Gross, Borislav Petkov (AMD), x86, linux-kernel The following commit has been merged into the x86/alternatives branch of tip: Commit-ID: a4233c21e77375494223ade11da72523a0149d97 Gitweb: https://git.kernel.org/tip/a4233c21e77375494223ade11da72523a0149d97 Author: Juergen Gross <jgross@suse.com> AuthorDate: Mon, 05 Jan 2026 09:04:52 +01:00 Committer: Borislav Petkov (AMD) <bp@alien8.de> CommitterDate: Wed, 07 Jan 2026 16:13:00 +01:00 x86/alternative: Patch a single alternative location only once Instead of patching a single location potentially multiple times in case of nested ALTERNATIVE()s, do the patching only after having evaluated all alt_instr instances for that location. This has multiple advantages: - In case of replacing an indirect with a direct call using the ALT_FLAG_DIRECT_CALL flag, there is no longer the need to have that instance before any other instances at the same location (the original instruction is needed for finding the target of the direct call). This issue has been hit when trying to do paravirt patching similar to the following: ALTERNATIVE_2(PARAVIRT_CALL, // indirect call instr, feature, // native instruction ALT_CALL_INSTR, X86_FEATURE_XENPV) // Xen function In case "feature" was true, "instr" replaced the indirect call. Under Xen PV the patching to have a direct call failed, as the original indirect call was no longer there to find the call target. - In case of nested ALTERNATIVE()s there is no intermediate replacement visible. This avoids any problems in case e.g. 
an interrupt is happening between the single instances and the patched location is used during handling the interrupt. Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://patch.msgid.link/20260105080452.5064-3-jgross@suse.com --- arch/x86/kernel/alternative.c | 49 ++++++++++++++++++---------------- 1 file changed, 26 insertions(+), 23 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 6e3eec0..693b59b 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/x86/kernel/alternative.c @@ -593,39 +593,38 @@ struct patch_site { u8 len; }; -static void __init_or_module analyze_patch_site(struct patch_site *ps, - struct alt_instr *start, - struct alt_instr *end) +static struct alt_instr * __init_or_module analyze_patch_site(struct patch_site *ps, + struct alt_instr *start, + struct alt_instr *end) { - struct alt_instr *r; + struct alt_instr *alt = start; ps->instr = instr_va(start); - ps->len = start->instrlen; /* * In case of nested ALTERNATIVE()s the outer alternative might add * more padding. To ensure consistent patching find the max padding for * all alt_instr entries for this site (nested alternatives result in * consecutive entries). + * Find the last alt_instr eligible for patching at the site. */ - for (r = start+1; r < end && instr_va(r) == ps->instr; r++) { - ps->len = max(ps->len, r->instrlen); - start->instrlen = r->instrlen = ps->len; + for (; alt < end && instr_va(alt) == ps->instr; alt++) { + ps->len = max(ps->len, alt->instrlen); + + BUG_ON(alt->cpuid >= (NCAPINTS + NBUGINTS) * 32); + /* + * Patch if either: + * - feature is present + * - feature not present but ALT_FLAG_NOT is set to mean, + * patch if feature is *NOT* present. 
+ */ + if (!boot_cpu_has(alt->cpuid) != !(alt->flags & ALT_FLAG_NOT)) + ps->alt = alt; } BUG_ON(ps->len > sizeof(ps->buff)); - BUG_ON(start->cpuid >= (NCAPINTS + NBUGINTS) * 32); - /* - * Patch if either: - * - feature is present - * - feature not present but ALT_FLAG_NOT is set to mean, - * patch if feature is *NOT* present. - */ - if (!boot_cpu_has(start->cpuid) == !(start->flags & ALT_FLAG_NOT)) - ps->alt = NULL; - else - ps->alt = start; + return alt; } static void __init_or_module prep_patch_site(struct patch_site *ps) @@ -704,10 +703,14 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start, * So be careful if you want to change the scan order to any other * order. */ - for (a = start; a < end; a++) { - struct patch_site ps; - - analyze_patch_site(&ps, a, end); + a = start; + while (a < end) { + struct patch_site ps = { + .alt = NULL, + .len = 0 + }; + + a = analyze_patch_site(&ps, a, end); prep_patch_site(&ps); patch_site(&ps); } ^ permalink raw reply related [flat|nested] 5+ messages in thread