* [PATCH 6.6.y v1 1/6] x86/bugs: Add SRSO_USER_KERNEL_NO support
2026-04-28 21:46 [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Daniil Tatianin
@ 2026-04-28 21:46 ` Daniil Tatianin
2026-04-28 21:46 ` [PATCH 6.6.y v1 2/6] x86/srso: Print actual mitigation if requested mitigation isn't possible Daniil Tatianin
` (5 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Daniil Tatianin @ 2026-04-28 21:46 UTC (permalink / raw)
To: stable
Cc: Daniil Tatianin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania, Nikolay Borisov
[ Upstream commit 877818802c3e970f67ccb53012facc78bef5f97a ]
If the machine has:

  CPUID Fn8000_0021_EAX[30] (SRSO_USER_KERNEL_NO) -- If this bit is 1,
  it indicates the CPU is not subject to the SRSO vulnerability across
  user/kernel boundaries.

have it fall back to IBPB on VMEXIT only, in the case it is going to run
VMs:

  Speculative Return Stack Overflow: Mitigation: IBPB on VMEXIT only
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Link: https://lore.kernel.org/r/20241202120416.6054-2-bp@kernel.org
Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/bugs.c | 4 ++++
arch/x86/kernel/cpu/common.c | 1 +
3 files changed, 6 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index ae4ea1f9594f7..154adb401a260 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -458,6 +458,7 @@
#define X86_FEATURE_SBPB (20*32+27) /* "" Selective Branch Prediction Barrier */
#define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
#define X86_FEATURE_SRSO_NO (20*32+29) /* "" CPU is not affected by SRSO */
+#define X86_FEATURE_SRSO_USER_KERNEL_NO (20*32+30) /* CPU is not affected by SRSO across user/kernel boundaries */
/*
* Extended auxiliary flags: Linux defined - for features scattered in various
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ef1d3a5024ed4..5d6f18bf4ba7c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2794,6 +2794,9 @@ static void __init srso_select_mitigation(void)
break;
case SRSO_CMD_SAFE_RET:
+ if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
+ goto ibpb_on_vmexit;
+
if (IS_ENABLED(CONFIG_CPU_SRSO)) {
/*
* Enable the return thunk for generated code
@@ -2847,6 +2850,7 @@ static void __init srso_select_mitigation(void)
}
break;
+ibpb_on_vmexit:
case SRSO_CMD_IBPB_ON_VMEXIT:
if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) {
if (has_microcode) {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index ef73ce697ec8a..9881d1791095b 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1341,6 +1341,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO | VMSCAPE),
VULNBL_AMD(0x19, SRSO | TSA | VMSCAPE),
+ VULNBL_AMD(0x1a, SRSO),
{}
};
--
2.34.1
* [PATCH 6.6.y v1 2/6] x86/srso: Print actual mitigation if requested mitigation isn't possible
2026-04-28 21:46 [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Daniil Tatianin
2026-04-28 21:46 ` [PATCH 6.6.y v1 1/6] x86/bugs: Add SRSO_USER_KERNEL_NO support Daniil Tatianin
@ 2026-04-28 21:46 ` Daniil Tatianin
2026-04-28 21:46 ` [PATCH 6.6.y v1 3/6] x86/srso: Remove 'pred_cmd' label Daniil Tatianin
` (4 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Daniil Tatianin @ 2026-04-28 21:46 UTC (permalink / raw)
To: stable
Cc: Daniil Tatianin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania, Ingo Molnar
[ Upstream commit 3fc7b28e831f15274a5526197b54a73a88620584 ]
If the kernel wasn't compiled to support the requested option, print the
actual option that ends up getting used.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/7e7a12ea9d85a9f76ca16a3efb71f262dee46ab1.1693889988.git.jpoimboe@kernel.org
Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
---
arch/x86/kernel/cpu/bugs.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5d6f18bf4ba7c..07eb6294490b3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2818,7 +2818,6 @@ static void __init srso_select_mitigation(void)
srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
} else {
pr_err("WARNING: kernel not compiled with CPU_SRSO.\n");
- goto pred_cmd;
}
break;
@@ -2846,7 +2845,6 @@ static void __init srso_select_mitigation(void)
}
} else {
pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
- goto pred_cmd;
}
break;
@@ -2866,7 +2864,6 @@ static void __init srso_select_mitigation(void)
}
} else {
pr_err("WARNING: kernel not compiled with CPU_IBPB_ENTRY.\n");
- goto pred_cmd;
}
break;
--
2.34.1
* [PATCH 6.6.y v1 3/6] x86/srso: Remove 'pred_cmd' label
2026-04-28 21:46 [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Daniil Tatianin
2026-04-28 21:46 ` [PATCH 6.6.y v1 1/6] x86/bugs: Add SRSO_USER_KERNEL_NO support Daniil Tatianin
2026-04-28 21:46 ` [PATCH 6.6.y v1 2/6] x86/srso: Print actual mitigation if requested mitigation isn't possible Daniil Tatianin
@ 2026-04-28 21:46 ` Daniil Tatianin
2026-04-28 21:46 ` [PATCH 6.6.y v1 4/6] x86/bugs: Fix handling when SRSO mitigation is disabled Daniil Tatianin
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Daniil Tatianin @ 2026-04-28 21:46 UTC (permalink / raw)
To: stable
Cc: Daniil Tatianin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania, Ingo Molnar
[ Upstream commit 55ca9010c4a988b48278f81ae4129deea52d2488 ]
SBPB is only enabled in two distinct cases:
1) when SRSO has been disabled with srso=off
2) when SRSO has been fixed (in future HW)
Simplify the control flow by getting rid of the 'pred_cmd' label and
moving the SBPB enablement check to the two corresponding code sites.
This makes it more clear when exactly SBPB gets enabled.
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/bb20e8569cfa144def5e6f25e610804bc4974de2.1693889988.git.jpoimboe@kernel.org
Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
---
arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 07eb6294490b3..1fce8077de5f8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2757,13 +2757,21 @@ static void __init srso_select_mitigation(void)
{
bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
- if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
- goto pred_cmd;
+ if (cpu_mitigations_off())
+ return;
+
+ if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
+ if (boot_cpu_has(X86_FEATURE_SBPB))
+ x86_pred_cmd = PRED_CMD_SBPB;
+ return;
+ }
if (has_microcode) {
/*
* Zen1/2 with SMT off aren't vulnerable after the right
* IBPB microcode has been applied.
+ *
+ * Zen1/2 don't have SBPB, no need to try to enable it here.
*/
if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
@@ -2784,7 +2792,9 @@ static void __init srso_select_mitigation(void)
switch (srso_cmd) {
case SRSO_CMD_OFF:
- goto pred_cmd;
+ if (boot_cpu_has(X86_FEATURE_SBPB))
+ x86_pred_cmd = PRED_CMD_SBPB;
+ return;
case SRSO_CMD_MICROCODE:
if (has_microcode) {
@@ -2873,11 +2883,6 @@ static void __init srso_select_mitigation(void)
out:
pr_info("%s\n", srso_strings[srso_mitigation]);
-
-pred_cmd:
- if ((!boot_cpu_has_bug(X86_BUG_SRSO) || srso_cmd == SRSO_CMD_OFF) &&
- boot_cpu_has(X86_FEATURE_SBPB))
- x86_pred_cmd = PRED_CMD_SBPB;
}
#undef pr_fmt
--
2.34.1
* [PATCH 6.6.y v1 4/6] x86/bugs: Fix handling when SRSO mitigation is disabled
2026-04-28 21:46 [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Daniil Tatianin
` (2 preceding siblings ...)
2026-04-28 21:46 ` [PATCH 6.6.y v1 3/6] x86/srso: Remove 'pred_cmd' label Daniil Tatianin
@ 2026-04-28 21:46 ` Daniil Tatianin
2026-04-28 21:46 ` [PATCH 6.6.y v1 5/6] x86/bugs: KVM: Add support for SRSO_MSR_FIX Daniil Tatianin
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Daniil Tatianin @ 2026-04-28 21:46 UTC (permalink / raw)
To: stable
Cc: Daniil Tatianin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania, David Kaplan
[ Upstream commit 1dbb6b1495d472806fef1f4c94f5b3e4c89a3c1d ]
When the SRSO mitigation is disabled, either via mitigations=off or
spec_rstack_overflow=off, the warning about the lack of IBPB-enhancing
microcode is printed anyway.
This is unnecessary since the user has turned off the mitigation.
[ bp: Massage, drop SBPB rationale as it doesn't matter because when
mitigations are disabled x86_pred_cmd is not being used anyway. ]
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/r/20240904150711.193022-1-david.kaplan@amd.com
Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
---
arch/x86/kernel/cpu/bugs.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1fce8077de5f8..916f36e23724d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2757,10 +2757,9 @@ static void __init srso_select_mitigation(void)
{
bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
- if (cpu_mitigations_off())
- return;
-
- if (!boot_cpu_has_bug(X86_BUG_SRSO)) {
+ if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
+ cpu_mitigations_off() ||
+ srso_cmd == SRSO_CMD_OFF) {
if (boot_cpu_has(X86_FEATURE_SBPB))
x86_pred_cmd = PRED_CMD_SBPB;
return;
@@ -2791,11 +2790,6 @@ static void __init srso_select_mitigation(void)
}
switch (srso_cmd) {
- case SRSO_CMD_OFF:
- if (boot_cpu_has(X86_FEATURE_SBPB))
- x86_pred_cmd = PRED_CMD_SBPB;
- return;
-
case SRSO_CMD_MICROCODE:
if (has_microcode) {
srso_mitigation = SRSO_MITIGATION_MICROCODE;
--
2.34.1
^ permalink raw reply related [flat|nested] 11+ messages in thread* [PATCH 6.6.y v1 5/6] x86/bugs: KVM: Add support for SRSO_MSR_FIX
2026-04-28 21:46 [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Daniil Tatianin
` (3 preceding siblings ...)
2026-04-28 21:46 ` [PATCH 6.6.y v1 4/6] x86/bugs: Fix handling when SRSO mitigation is disabled Daniil Tatianin
@ 2026-04-28 21:46 ` Daniil Tatianin
2026-04-30 17:47 ` Sean Christopherson
2026-04-28 21:46 ` [PATCH 6.6.y v1 6/6] KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions Daniil Tatianin
2026-04-29 17:50 ` [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Sasha Levin
6 siblings, 1 reply; 11+ messages in thread
From: Daniil Tatianin @ 2026-04-28 21:46 UTC (permalink / raw)
To: stable
Cc: Daniil Tatianin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania, Sean Christopherson
[ Upstream commit 8442df2b49ed9bcd67833ad4f091d15ac91efd00 ]
Add support for
CPUID Fn8000_0021_EAX[31] (SRSO_MSR_FIX). If this bit is 1, it
indicates that software may use MSR BP_CFG[BpSpecReduce] to mitigate
SRSO.
Enable BpSpecReduce to mitigate SRSO across guest/host boundaries.
Switch back to enabling the bit when virtualization is enabled, and
clear the bit when virtualization is disabled: using an MSR slot would
clear the bit on guest exit, and any training the guest has done could
then influence the host kernel while execution is in the kernel but has
not yet VMRUN the guest.
More detail on the public thread in Link below.
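The mitigation string the kernel ends up with ("Mitigation: Reduced Speculation" once this applies) is reported via sysfs; a minimal sketch for checking it, assuming a Linux kernel new enough to expose the SRSO entry:

```shell
# Print the SRSO mitigation status reported by the running kernel.
# The sysfs file only exists on kernels with SRSO handling.
srso_status() {
    local f=/sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "SRSO status not exposed on this kernel"
    fi
}

srso_status
```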
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20241202120416.6054-1-bp@kernel.org
Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
---
Documentation/admin-guide/hw-vuln/srso.rst | 13 ++++++++++++
arch/x86/include/asm/cpufeatures.h | 4 ++++
arch/x86/include/asm/msr-index.h | 1 +
arch/x86/kernel/cpu/bugs.c | 24 ++++++++++++++++++----
arch/x86/kvm/svm/svm.c | 6 ++++++
arch/x86/lib/msr.c | 2 ++
6 files changed, 46 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/hw-vuln/srso.rst b/Documentation/admin-guide/hw-vuln/srso.rst
index e715bfc09879a..68011add73834 100644
--- a/Documentation/admin-guide/hw-vuln/srso.rst
+++ b/Documentation/admin-guide/hw-vuln/srso.rst
@@ -104,7 +104,20 @@ The possible values in this file are:
(spec_rstack_overflow=ibpb-vmexit)
+ * 'Mitigation: Reduced Speculation':
+ This mitigation gets automatically enabled when the above one "IBPB on
+ VMEXIT" has been selected and the CPU supports the BpSpecReduce bit.
+
+ It gets automatically enabled on machines which have the
+ SRSO_USER_KERNEL_NO=1 CPUID bit. In that case, the code logic is to switch
+ to the above =ibpb-vmexit mitigation because the user/kernel boundary is
+ not affected anymore and thus "safe RET" is not needed.
+
+ After enabling the IBPB on VMEXIT mitigation option, the BpSpecReduce bit
+ is detected (functionality present on all such machines) and that
+ practically overrides IBPB on VMEXIT as it has a lot less performance
+ impact and takes care of the guest->host attack vector too.
In order to exploit vulnerability, an attacker needs to:
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 154adb401a260..974c604dd81cd 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -459,6 +459,10 @@
#define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
#define X86_FEATURE_SRSO_NO (20*32+29) /* "" CPU is not affected by SRSO */
#define X86_FEATURE_SRSO_USER_KERNEL_NO (20*32+30) /* CPU is not affected by SRSO across user/kernel boundaries */
+#define X86_FEATURE_SRSO_BP_SPEC_REDUCE (20*32+31) /*
+ * BP_CFG[BpSpecReduce] can be used to mitigate SRSO for VMs.
+ * (SRSO_MSR_FIX in the official doc).
+ */
/*
* Extended auxiliary flags: Linux defined - for features scattered in various
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index deb5fe0017763..236abf51876c4 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -674,6 +674,7 @@
/* Zen4 */
#define MSR_ZEN4_BP_CFG 0xc001102e
+#define MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT 4
#define MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT 5
/* Zen 2 */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 916f36e23724d..818034819ee66 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2706,6 +2706,7 @@ enum srso_mitigation {
SRSO_MITIGATION_SAFE_RET,
SRSO_MITIGATION_IBPB,
SRSO_MITIGATION_IBPB_ON_VMEXIT,
+ SRSO_MITIGATION_BP_SPEC_REDUCE,
};
enum srso_mitigation_cmd {
@@ -2723,7 +2724,8 @@ static const char * const srso_strings[] = {
[SRSO_MITIGATION_MICROCODE] = "Vulnerable: Microcode, no safe RET",
[SRSO_MITIGATION_SAFE_RET] = "Mitigation: Safe RET",
[SRSO_MITIGATION_IBPB] = "Mitigation: IBPB",
- [SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
+ [SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only",
+ [SRSO_MITIGATION_BP_SPEC_REDUCE] = "Mitigation: Reduced Speculation"
};
static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
@@ -2762,7 +2764,7 @@ static void __init srso_select_mitigation(void)
srso_cmd == SRSO_CMD_OFF) {
if (boot_cpu_has(X86_FEATURE_SBPB))
x86_pred_cmd = PRED_CMD_SBPB;
- return;
+ goto out;
}
if (has_microcode) {
@@ -2774,7 +2776,7 @@ static void __init srso_select_mitigation(void)
*/
if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
- return;
+ goto out;
}
if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
@@ -2854,6 +2856,12 @@ static void __init srso_select_mitigation(void)
ibpb_on_vmexit:
case SRSO_CMD_IBPB_ON_VMEXIT:
+ if (boot_cpu_has(X86_FEATURE_SRSO_BP_SPEC_REDUCE)) {
+ pr_notice("Reducing speculation to address VM/HV SRSO attack vector.\n");
+ srso_mitigation = SRSO_MITIGATION_BP_SPEC_REDUCE;
+ break;
+ }
+
if (IS_ENABLED(CONFIG_CPU_IBPB_ENTRY)) {
if (has_microcode) {
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
@@ -2876,7 +2884,15 @@ static void __init srso_select_mitigation(void)
}
out:
- pr_info("%s\n", srso_strings[srso_mitigation]);
+ /*
+ * Clear the feature flag if this mitigation is not selected as that
+ * feature flag controls the BpSpecReduce MSR bit toggling in KVM.
+ */
+ if (srso_mitigation != SRSO_MITIGATION_BP_SPEC_REDUCE)
+ setup_clear_cpu_cap(X86_FEATURE_SRSO_BP_SPEC_REDUCE);
+
+ if (srso_mitigation != SRSO_MITIGATION_NONE)
+ pr_info("%s\n", srso_strings[srso_mitigation]);
}
#undef pr_fmt
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ff65fe7387332..ecb77ac074ea1 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -623,6 +623,9 @@ static void svm_hardware_disable(void)
kvm_cpu_svm_disable();
amd_pmu_disable_virt();
+
+ if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
+ msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
}
static int svm_hardware_enable(void)
@@ -703,6 +706,9 @@ static int svm_hardware_enable(void)
rdmsr(MSR_TSC_AUX, hostsa->tsc_aux, msr_hi);
}
+ if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
+ msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
+
return 0;
}
diff --git a/arch/x86/lib/msr.c b/arch/x86/lib/msr.c
index 47fd9bd6b91d8..f94d6f2b982d5 100644
--- a/arch/x86/lib/msr.c
+++ b/arch/x86/lib/msr.c
@@ -103,6 +103,7 @@ int msr_set_bit(u32 msr, u8 bit)
{
return __flip_bit(msr, bit, true);
}
+EXPORT_SYMBOL_GPL(msr_set_bit);
/**
* msr_clear_bit - Clear @bit in a MSR @msr.
@@ -118,6 +119,7 @@ int msr_clear_bit(u32 msr, u8 bit)
{
return __flip_bit(msr, bit, false);
}
+EXPORT_SYMBOL_GPL(msr_clear_bit);
#ifdef CONFIG_TRACEPOINTS
void do_trace_write_msr(unsigned int msr, u64 val, int failed)
--
2.34.1
* Re: [PATCH 6.6.y v1 5/6] x86/bugs: KVM: Add support for SRSO_MSR_FIX
2026-04-28 21:46 ` [PATCH 6.6.y v1 5/6] x86/bugs: KVM: Add support for SRSO_MSR_FIX Daniil Tatianin
@ 2026-04-30 17:47 ` Sean Christopherson
0 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2026-04-30 17:47 UTC (permalink / raw)
To: Daniil Tatianin
Cc: stable, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania
On Wed, Apr 29, 2026, Daniil Tatianin wrote:
> [ Upstream commit 8442df2b49ed9bcd67833ad4f091d15ac91efd00 ]
>
> Add support for
>
> CPUID Fn8000_0021_EAX[31] (SRSO_MSR_FIX). If this bit is 1, it
> indicates that software may use MSR BP_CFG[BpSpecReduce] to mitigate
> SRSO.
>
> Enable BpSpecReduce to mitigate SRSO across guest/host boundaries.
>
> Switch back to enabling the bit when virtualization is enabled, and
> clear the bit when virtualization is disabled: using an MSR slot would
> clear the bit on guest exit, and any training the guest has done could
> then influence the host kernel while execution is in the kernel but has
> not yet VMRUN the guest.
>
> More detail on the public thread in Link below.
>
> Co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
> Link: https://lore.kernel.org/r/20241202120416.6054-1-bp@kernel.org
> Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> ---
> Documentation/admin-guide/hw-vuln/srso.rst | 13 ++++++++++++
> arch/x86/include/asm/cpufeatures.h | 4 ++++
> arch/x86/include/asm/msr-index.h | 1 +
> arch/x86/kernel/cpu/bugs.c | 24 ++++++++++++++++++----
> arch/x86/kvm/svm/svm.c | 6 ++++++
> arch/x86/lib/msr.c | 2 ++
> 6 files changed, 46 insertions(+), 4 deletions(-)
For the KVM changes,
Acked-by: Sean Christopherson <seanjc@google.com>
* [PATCH 6.6.y v1 6/6] KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions
2026-04-28 21:46 [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Daniil Tatianin
` (4 preceding siblings ...)
2026-04-28 21:46 ` [PATCH 6.6.y v1 5/6] x86/bugs: KVM: Add support for SRSO_MSR_FIX Daniil Tatianin
@ 2026-04-28 21:46 ` Daniil Tatianin
2026-04-30 17:47 ` Sean Christopherson
2026-04-29 17:50 ` [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Sasha Levin
6 siblings, 1 reply; 11+ messages in thread
From: Daniil Tatianin @ 2026-04-28 21:46 UTC (permalink / raw)
To: stable
Cc: Daniil Tatianin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania, Michael Larabel, Sean Christopherson
[ Upstream commit e3417ab75ab2e7dca6372a1bfa26b1be3ac5889e ]
Set the magic BP_SPEC_REDUCE bit to mitigate SRSO when running VMs if and
only if KVM has at least one active VM. Leaving the bit set at all times
unfortunately degrades performance by a wee bit more than expected.
Use a dedicated spinlock and counter instead of hooking virtualization
enablement, as changing the behavior of kvm.enable_virt_at_load based on
SRSO_BP_SPEC_REDUCE is painful, and has its own drawbacks, e.g. could
result in performance issues for flows that are sensitive to VM creation
latency.
Defer setting BP_SPEC_REDUCE until VMRUN is imminent to avoid impacting
performance on CPUs that aren't running VMs, e.g. if a setup is using
housekeeping CPUs. Setting BP_SPEC_REDUCE in task context, i.e. without
blasting IPIs to all CPUs, also helps avoid serializing 1<=>N transitions
without incurring a gross amount of complexity (see the Link for details
on how ugly coordinating via IPIs gets).
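The srso_nr_vms counter plus srso_lock scheme above can be sketched outside the kernel as a "first/last" gate: a callback fires on the 0 => 1 transition and another on the 1 => 0 transition. This is an illustrative sketch only; the kernel version additionally uses atomic_inc_not_zero()/atomic_dec_return() so that non-boundary transitions skip the lock, while this simplified version takes the lock unconditionally:

```python
import threading

class FirstLastGate:
    """Run on_first() on the 0 -> 1 VM-count transition and on_last()
    on the 1 -> 0 transition, serializing the boundary transitions."""

    def __init__(self, on_first, on_last):
        self._count = 0
        self._lock = threading.Lock()
        self._on_first = on_first  # e.g. arm lazy setting of BP_SPEC_REDUCE
        self._on_last = on_last    # e.g. clear the MSR bit on all CPUs

    def vm_init(self):
        with self._lock:
            if self._count == 0:
                self._on_first()
            self._count += 1

    def vm_destroy(self):
        with self._lock:
            self._count -= 1
            if self._count == 0:
                self._on_last()
```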
Link: https://lore.kernel.org/all/aBOnzNCngyS_pQIW@google.com
Fixes: 8442df2b49ed ("x86/bugs: KVM: Add support for SRSO_MSR_FIX")
Reported-by: Michael Larabel <Michael@michaellarabel.com>
Closes: https://www.phoronix.com/review/linux-615-amd-regression
Cc: Borislav Petkov <bp@alien8.de>
Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250505180300.973137-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
---
arch/x86/kvm/svm/svm.c | 71 ++++++++++++++++++++++++++++++++++++++----
arch/x86/kvm/svm/svm.h | 2 ++
2 files changed, 67 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ecb77ac074ea1..4a319e4fc51bd 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -623,9 +623,6 @@ static void svm_hardware_disable(void)
kvm_cpu_svm_disable();
amd_pmu_disable_virt();
-
- if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
- msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
}
static int svm_hardware_enable(void)
@@ -706,9 +703,6 @@ static int svm_hardware_enable(void)
rdmsr(MSR_TSC_AUX, hostsa->tsc_aux, msr_hi);
}
- if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
- msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
-
return 0;
}
@@ -1533,6 +1527,63 @@ static void svm_vcpu_free(struct kvm_vcpu *vcpu)
__free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE));
}
+#ifdef CONFIG_CPU_MITIGATIONS
+static DEFINE_SPINLOCK(srso_lock);
+static atomic_t srso_nr_vms;
+
+static void svm_srso_clear_bp_spec_reduce(void *ign)
+{
+ struct svm_cpu_data *sd = this_cpu_ptr(&svm_data);
+
+ if (!sd->bp_spec_reduce_set)
+ return;
+
+ msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
+ sd->bp_spec_reduce_set = false;
+}
+
+static void svm_srso_vm_destroy(void)
+{
+ if (!cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
+ return;
+
+ if (atomic_dec_return(&srso_nr_vms))
+ return;
+
+ guard(spinlock)(&srso_lock);
+
+ /*
+ * Verify a new VM didn't come along, acquire the lock, and increment
+ * the count before this task acquired the lock.
+ */
+ if (atomic_read(&srso_nr_vms))
+ return;
+
+ on_each_cpu(svm_srso_clear_bp_spec_reduce, NULL, 1);
+}
+
+static void svm_srso_vm_init(void)
+{
+ if (!cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE))
+ return;
+
+ /*
+ * Acquire the lock on 0 => 1 transitions to ensure a potential 1 => 0
+ * transition, i.e. destroying the last VM, is fully complete, e.g. so
+ * that a delayed IPI doesn't clear BP_SPEC_REDUCE after a vCPU runs.
+ */
+ if (atomic_inc_not_zero(&srso_nr_vms))
+ return;
+
+ guard(spinlock)(&srso_lock);
+
+ atomic_inc(&srso_nr_vms);
+}
+#else
+static void svm_srso_vm_init(void) { }
+static void svm_srso_vm_destroy(void) { }
+#endif
+
static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -1569,6 +1620,11 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
(!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm)))
kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
+ if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE) &&
+ !sd->bp_spec_reduce_set) {
+ sd->bp_spec_reduce_set = true;
+ msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT);
+ }
svm->guest_state_loaded = true;
}
@@ -5001,6 +5057,8 @@ static void svm_vm_destroy(struct kvm *kvm)
{
avic_vm_destroy(kvm);
sev_vm_destroy(kvm);
+
+ svm_srso_vm_destroy();
}
static int svm_vm_init(struct kvm *kvm)
@@ -5014,6 +5072,7 @@ static int svm_vm_init(struct kvm *kvm)
return ret;
}
+ svm_srso_vm_init();
return 0;
}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0b4344595db37..d5548ea995f1c 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -300,6 +300,8 @@ struct svm_cpu_data {
u32 next_asid;
u32 min_asid;
+ bool bp_spec_reduce_set;
+
struct page *save_area;
unsigned long save_area_pa;
--
2.34.1
* Re: [PATCH 6.6.y v1 6/6] KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions
2026-04-28 21:46 ` [PATCH 6.6.y v1 6/6] KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions Daniil Tatianin
@ 2026-04-30 17:47 ` Sean Christopherson
0 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2026-04-30 17:47 UTC (permalink / raw)
To: Daniil Tatianin
Cc: stable, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Sasha Levin,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania, Michael Larabel
On Wed, Apr 29, 2026, Daniil Tatianin wrote:
> [ Upstream commit e3417ab75ab2e7dca6372a1bfa26b1be3ac5889e ]
>
> Set the magic BP_SPEC_REDUCE bit to mitigate SRSO when running VMs if and
> only if KVM has at least one active VM. Leaving the bit set at all times
> unfortunately degrades performance by a wee bit more than expected.
>
> Use a dedicated spinlock and counter instead of hooking virtualization
> enablement, as changing the behavior of kvm.enable_virt_at_load based on
> SRSO_BP_SPEC_REDUCE is painful, and has its own drawbacks, e.g. could
> result in performance issues for flows that are sensitive to VM creation
> latency.
>
> Defer setting BP_SPEC_REDUCE until VMRUN is imminent to avoid impacting
> performance on CPUs that aren't running VMs, e.g. if a setup is using
> housekeeping CPUs. Setting BP_SPEC_REDUCE in task context, i.e. without
> blasting IPIs to all CPUs, also helps avoid serializing 1<=>N transitions
> without incurring a gross amount of complexity (see the Link for details
> on how ugly coordinating via IPIs gets).
>
> Link: https://lore.kernel.org/all/aBOnzNCngyS_pQIW@google.com
> Fixes: 8442df2b49ed ("x86/bugs: KVM: Add support for SRSO_MSR_FIX")
> Reported-by: Michael Larabel <Michael@michaellarabel.com>
> Closes: https://www.phoronix.com/review/linux-615-amd-regression
> Cc: Borislav Petkov <bp@alien8.de>
> Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
> Link: https://lore.kernel.org/r/20250505180300.973137-1-seanjc@google.com
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Daniil Tatianin <d-tatianin@yandex-team.ru>
> ---
Acked-by: Sean Christopherson <seanjc@google.com>
* Re: [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs
2026-04-28 21:46 [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Daniil Tatianin
` (5 preceding siblings ...)
2026-04-28 21:46 ` [PATCH 6.6.y v1 6/6] KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions Daniil Tatianin
@ 2026-04-29 17:50 ` Sasha Levin
2026-04-30 17:49 ` Sean Christopherson
6 siblings, 1 reply; 11+ messages in thread
From: Sasha Levin @ 2026-04-29 17:50 UTC (permalink / raw)
To: stable
Cc: Daniil Tatianin, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, H. Peter Anvin, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky, Xin Li (Intel),
Daniel Sneddon, Ahmed S. Darwish, Nikunj A Dadhania,
Sean Christopherson
On Wed, Apr 29, 2026 at 12:46:04AM +0300, Daniil Tatianin wrote:
> This series backports a few SRSO handling features for Zen5 CPUs from the
> mainline kernel. The only important ones are
> "x86/bugs: KVM: Add support for SRSO_MSR_FIX" and
> "x86/bugs: Add SRSO_USER_KERNEL_NO support". The rest are added to avoid
> conflicts when applying the aforementioned patches.
>
> Changes since v0:
> - Add e3417ab75ab2 ("KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions")
> to fix a performance regression introduced by 8442df2b49ed ("x86/bugs: KVM: Add support for SRSO_MSR_FIX")
> (Suggested by Sean Christopherson)
Sean, are you OK with this 6.6.y backport as it stands?
--
Thanks,
Sasha
* Re: [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs
2026-04-29 17:50 ` [PATCH 6.6.y v1 0/6] SRSO handling for Zen5 CPUs Sasha Levin
@ 2026-04-30 17:49 ` Sean Christopherson
0 siblings, 0 replies; 11+ messages in thread
From: Sean Christopherson @ 2026-04-30 17:49 UTC (permalink / raw)
To: Sasha Levin
Cc: stable, Daniil Tatianin, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, Peter Zijlstra,
Josh Poimboeuf, Pawan Gupta, Greg Kroah-Hartman, Tom Lendacky,
Xin Li (Intel), Daniel Sneddon, Ahmed S. Darwish,
Nikunj A Dadhania
On Wed, Apr 29, 2026, Sasha Levin wrote:
> On Wed, Apr 29, 2026 at 12:46:04AM +0300, Daniil Tatianin wrote:
> > This series backports a few SRSO handling features for Zen5 CPUs from the
> > mainline kernel. The only important ones are
> > "x86/bugs: KVM: Add support for SRSO_MSR_FIX" and
> > "x86/bugs: Add SRSO_USER_KERNEL_NO support". The rest are added to avoid
> > conflicts when applying the aforementioned patches.
> >
> > Changes since v0:
> > - Add e3417ab75ab2 ("KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions")
> > to fix a performance regression introduced by 8442df2b49ed ("x86/bugs: KVM: Add support for SRSO_MSR_FIX")
> > (Suggested by Sean Christopherson)
>
> Sean, are you OK with this 6.6.y backport as it stands?
Looks good from a KVM perspective, but someone that knows the x86/bugs side of
things should take a look at the non-KVM changes.