* [PATCH v5 00/16] Attack vector controls (part 1)
@ 2025-04-18 16:17 David Kaplan
2025-04-18 16:17 ` [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation David Kaplan
` (17 more replies)
0 siblings, 18 replies; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
This is an updated version of the first half of the attack vector series
which focuses on restructuring arch/x86/kernel/cpu/bugs.c.
For more info on the attack vector series, please see v4 at
https://lore.kernel.org/all/20250310164023.779191-1-david.kaplan@amd.com/.
These patches restructure the existing mitigation selection logic to use a
uniform set of functions. First, the "select" function is called for each
mitigation to choose an appropriate mitigation. Unless a mitigation is
explicitly selected or disabled with a command line option, the default
mitigation is AUTO and the "select" function will then choose the best
mitigation. After the "select" function has been called for each mitigation,
some mitigations define an "update" function which can be used to revise
the selection based on the choices made by other mitigations. Finally,
the "apply" function is called, which enables the chosen mitigation.
This structure simplifies the mitigation control logic, especially when
there are dependencies between multiple vulnerabilities.
This is mostly code restructuring without functional changes, except where
noted.
Compared to v4 this only includes bug fixes/cleanup.
David Kaplan (16):
x86/bugs: Restructure MDS mitigation
x86/bugs: Restructure TAA mitigation
x86/bugs: Restructure MMIO mitigation
x86/bugs: Restructure RFDS mitigation
x86/bugs: Remove md_clear_*_mitigation()
x86/bugs: Restructure SRBDS mitigation
x86/bugs: Restructure GDS mitigation
x86/bugs: Restructure spectre_v1 mitigation
x86/bugs: Allow retbleed=stuff only on Intel
x86/bugs: Restructure retbleed mitigation
x86/bugs: Restructure spectre_v2_user mitigation
x86/bugs: Restructure BHI mitigation
x86/bugs: Restructure spectre_v2 mitigation
x86/bugs: Restructure SSB mitigation
x86/bugs: Restructure L1TF mitigation
x86/bugs: Restructure SRSO mitigation
arch/x86/include/asm/processor.h | 1 +
arch/x86/kernel/cpu/bugs.c | 1112 +++++++++++++++++-------------
arch/x86/kvm/vmx/vmx.c | 2 +
3 files changed, 644 insertions(+), 471 deletions(-)
base-commit: 33aa28024418782f644d8924026f1db21b3354a6
--
2.34.1
^ permalink raw reply [flat|nested] 65+ messages in thread
* [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-04-18 20:42 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation David Kaplan
` (16 subsequent siblings)
17 siblings, 2 replies; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
Restructure MDS mitigation selection to use select/update/apply
functions to create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 58 ++++++++++++++++++++++++++++++++++++--
1 file changed, 56 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9387f5f9de12..4295502ea082 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -34,6 +34,25 @@
#include "cpu.h"
+/*
+ * Speculation Vulnerability Handling
+ *
+ * Each vulnerability is handled with the following functions:
+ * <vuln>_select_mitigation() -- Selects a mitigation to use. This should
+ * take into account all relevant command line
+ * options.
+ * <vuln>_update_mitigation() -- This is called after all vulnerabilities have
+ * selected a mitigation, in case the selection
+ * may want to change based on other choices
+ * made. This function is optional.
+ * <vuln>_apply_mitigation() -- Enable the selected mitigation.
+ *
+ * The compile-time mitigation in all cases should be AUTO. An explicit
+ * command-line option can override AUTO. If no such option is
+ * provided, <vuln>_select_mitigation() will override AUTO to the best
+ * mitigation option.
+ */
+
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
@@ -41,6 +60,8 @@ static void __init spectre_v2_user_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
+static void __init mds_update_mitigation(void);
+static void __init mds_apply_mitigation(void);
static void __init md_clear_update_mitigation(void);
static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
@@ -172,6 +193,7 @@ void __init cpu_select_mitigations(void)
spectre_v2_user_select_mitigation();
ssb_select_mitigation();
l1tf_select_mitigation();
+ mds_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -182,6 +204,14 @@ void __init cpu_select_mitigations(void)
*/
srso_select_mitigation();
gds_select_mitigation();
+
+ /*
+ * After mitigations are selected, some may need to update their
+ * choices.
+ */
+ mds_update_mitigation();
+
+ mds_apply_mitigation();
}
/*
@@ -284,6 +314,9 @@ enum rfds_mitigations {
static enum rfds_mitigations rfds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
+/* Set if any of MDS/TAA/MMIO/RFDS are going to enable VERW. */
+static bool verw_mitigation_selected __ro_after_init;
+
static void __init mds_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
@@ -294,12 +327,34 @@ static void __init mds_select_mitigation(void)
if (mds_mitigation == MDS_MITIGATION_AUTO)
mds_mitigation = MDS_MITIGATION_FULL;
+ if (mds_mitigation == MDS_MITIGATION_OFF)
+ return;
+
+ verw_mitigation_selected = true;
+}
+
+static void __init mds_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
+ return;
+
+ /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
+ if (verw_mitigation_selected)
+ mds_mitigation = MDS_MITIGATION_FULL;
+
if (mds_mitigation == MDS_MITIGATION_FULL) {
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;
+ }
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ pr_info("%s\n", mds_strings[mds_mitigation]);
+}
+static void __init mds_apply_mitigation(void)
+{
+ if (mds_mitigation == MDS_MITIGATION_FULL ||
+ mds_mitigation == MDS_MITIGATION_VMWERV) {
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
(mds_nosmt || cpu_mitigations_auto_nosmt()))
cpu_smt_disable(false);
@@ -599,7 +654,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- mds_select_mitigation();
taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
--
2.34.1
* [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
2025-04-18 16:17 ` [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-04-19 12:36 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation David Kaplan
` (15 subsequent siblings)
17 siblings, 2 replies; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
Restructure TAA mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 94 ++++++++++++++++++++++++--------------
1 file changed, 59 insertions(+), 35 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4295502ea082..c0ba034ae1f9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,8 @@ static void __init mds_apply_mitigation(void);
static void __init md_clear_update_mitigation(void);
static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
+static void __init taa_update_mitigation(void);
+static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
@@ -194,6 +196,7 @@ void __init cpu_select_mitigations(void)
ssb_select_mitigation();
l1tf_select_mitigation();
mds_select_mitigation();
+ taa_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -210,8 +213,10 @@ void __init cpu_select_mitigations(void)
* choices.
*/
mds_update_mitigation();
+ taa_update_mitigation();
mds_apply_mitigation();
+ taa_apply_mitigation();
}
/*
@@ -394,6 +399,11 @@ static const char * const taa_strings[] = {
[TAA_MITIGATION_TSX_DISABLED] = "Mitigation: TSX disabled",
};
+static bool __init taa_vulnerable(void)
+{
+ return boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM);
+}
+
static void __init taa_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_TAA)) {
@@ -407,48 +417,63 @@ static void __init taa_select_mitigation(void)
return;
}
- if (cpu_mitigations_off()) {
+ if (cpu_mitigations_off())
taa_mitigation = TAA_MITIGATION_OFF;
- return;
- }
- /*
- * TAA mitigation via VERW is turned off if both
- * tsx_async_abort=off and mds=off are specified.
- */
- if (taa_mitigation == TAA_MITIGATION_OFF &&
- mds_mitigation == MDS_MITIGATION_OFF)
+ /* Microcode will be checked in taa_update_mitigation(). */
+ if (taa_mitigation == TAA_MITIGATION_AUTO)
+ taa_mitigation = TAA_MITIGATION_VERW;
+
+ if (taa_mitigation != TAA_MITIGATION_OFF)
+ verw_mitigation_selected = true;
+}
+
+static void __init taa_update_mitigation(void)
+{
+ if (!taa_vulnerable() || cpu_mitigations_off())
return;
- if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ if (verw_mitigation_selected)
taa_mitigation = TAA_MITIGATION_VERW;
- else
- taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
- /*
- * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
- * A microcode update fixes this behavior to clear CPU buffers. It also
- * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
- * ARCH_CAP_TSX_CTRL_MSR bit.
- *
- * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
- * update is required.
- */
- if ( (x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
- !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
- taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+ if (taa_mitigation == TAA_MITIGATION_VERW) {
+ /* Check if the requisite ucode is available. */
+ if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
- /*
- * TSX is enabled, select alternate mitigation for TAA which is
- * the same as MDS. Enable MDS static branch to clear CPU buffers.
- *
- * For guests that can't determine whether the correct microcode is
- * present on host, enable the mitigation for UCODE_NEEDED as well.
- */
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ /*
+ * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+ * A microcode update fixes this behavior to clear CPU buffers. It also
+ * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
+ * ARCH_CAP_TSX_CTRL_MSR bit.
+ *
+ * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+ * update is required.
+ */
+ if ((x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
+ !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+ }
- if (taa_nosmt || cpu_mitigations_auto_nosmt())
- cpu_smt_disable(false);
+ pr_info("%s\n", taa_strings[taa_mitigation]);
+}
+
+static void __init taa_apply_mitigation(void)
+{
+ if (taa_mitigation == TAA_MITIGATION_VERW ||
+ taa_mitigation == TAA_MITIGATION_UCODE_NEEDED) {
+ /*
+ * TSX is enabled, select alternate mitigation for TAA which is
+ * the same as MDS. Enable MDS static branch to clear CPU buffers.
+ *
+ * For guests that can't determine whether the correct microcode is
+ * present on host, enable the mitigation for UCODE_NEEDED as well.
+ */
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+
+ if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ cpu_smt_disable(false);
+ }
}
static int __init tsx_async_abort_parse_cmdline(char *str)
@@ -654,7 +679,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
--
2.34.1
* [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
2025-04-18 16:17 ` [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation David Kaplan
2025-04-18 16:17 ` [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-04-24 20:19 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 04/16] x86/bugs: Restructure RFDS mitigation David Kaplan
` (14 subsequent siblings)
17 siblings, 2 replies; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
Restructure MMIO mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 74 +++++++++++++++++++++++++-------------
1 file changed, 50 insertions(+), 24 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c0ba034ae1f9..28b55a7457bc 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -68,6 +68,8 @@ static void __init taa_select_mitigation(void);
static void __init taa_update_mitigation(void);
static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
+static void __init mmio_update_mitigation(void);
+static void __init mmio_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
@@ -197,6 +199,7 @@ void __init cpu_select_mitigations(void)
l1tf_select_mitigation();
mds_select_mitigation();
taa_select_mitigation();
+ mmio_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -214,9 +217,11 @@ void __init cpu_select_mitigations(void)
*/
mds_update_mitigation();
taa_update_mitigation();
+ mmio_update_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
+ mmio_apply_mitigation();
}
/*
@@ -516,25 +521,62 @@ static void __init mmio_select_mitigation(void)
return;
}
+ /* Microcode will be checked in mmio_update_mitigation(). */
+ if (mmio_mitigation == MMIO_MITIGATION_AUTO)
+ mmio_mitigation = MMIO_MITIGATION_VERW;
+
if (mmio_mitigation == MMIO_MITIGATION_OFF)
return;
/*
* Enable CPU buffer clear mitigation for host and VMM, if also affected
- * by MDS or TAA. Otherwise, enable mitigation for VMM only.
+ * by MDS or TAA.
*/
- if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
- boot_cpu_has(X86_FEATURE_RTM)))
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
+ verw_mitigation_selected = true;
+}
+
+static void __init mmio_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) || cpu_mitigations_off())
+ return;
+
+ if (verw_mitigation_selected)
+ mmio_mitigation = MMIO_MITIGATION_VERW;
+
+ if (mmio_mitigation == MMIO_MITIGATION_VERW) {
+ /*
+ * Check if the system has the right microcode.
+ *
+ * CPU Fill buffer clear mitigation is enumerated by either an explicit
+ * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+ * affected systems.
+ */
+ if (!((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
+ (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+ boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+ !(x86_arch_cap_msr & ARCH_CAP_MDS_NO))))
+ mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+ }
+
+ pr_info("%s\n", mmio_strings[mmio_mitigation]);
+}
+
+static void __init mmio_apply_mitigation(void)
+{
+ if (mmio_mitigation == MMIO_MITIGATION_OFF)
+ return;
/*
- * X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
- * mitigations, disable KVM-only mitigation in that case.
+ * Only enable the VMM mitigation if the CPU buffer clear mitigation is
+ * not being used.
*/
- if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
+ if (verw_mitigation_selected) {
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
static_branch_disable(&cpu_buf_vm_clear);
- else
+ } else {
static_branch_enable(&cpu_buf_vm_clear);
+ }
/*
* If Processor-MMIO-Stale-Data bug is present and Fill Buffer data can
@@ -544,21 +586,6 @@ static void __init mmio_select_mitigation(void)
if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
static_branch_enable(&mds_idle_clear);
- /*
- * Check if the system has the right microcode.
- *
- * CPU Fill buffer clear mitigation is enumerated by either an explicit
- * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
- * affected systems.
- */
- if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
- (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
- boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
- !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
- mmio_mitigation = MMIO_MITIGATION_VERW;
- else
- mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
-
if (mmio_nosmt || cpu_mitigations_auto_nosmt())
cpu_smt_disable(false);
}
@@ -679,7 +706,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- mmio_select_mitigation();
rfds_select_mitigation();
/*
--
2.34.1
* [PATCH v5 04/16] x86/bugs: Restructure RFDS mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (2 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-04-27 15:09 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 05/16] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
` (13 subsequent siblings)
17 siblings, 2 replies; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
Restructure RFDS mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 41 +++++++++++++++++++++++++++++++++-----
1 file changed, 36 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 28b55a7457bc..303718689aac 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -70,6 +70,9 @@ static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init mmio_update_mitigation(void);
static void __init mmio_apply_mitigation(void);
+static void __init rfds_select_mitigation(void);
+static void __init rfds_update_mitigation(void);
+static void __init rfds_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
@@ -200,6 +203,7 @@ void __init cpu_select_mitigations(void)
mds_select_mitigation();
taa_select_mitigation();
mmio_select_mitigation();
+ rfds_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -218,10 +222,12 @@ void __init cpu_select_mitigations(void)
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
+ rfds_update_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
+ rfds_apply_mitigation();
}
/*
@@ -620,22 +626,48 @@ static const char * const rfds_strings[] = {
[RFDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
};
+static bool __init rfds_has_ucode(void)
+{
+ return (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR);
+}
+
static void __init rfds_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off()) {
rfds_mitigation = RFDS_MITIGATION_OFF;
return;
}
+
+ if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+ rfds_mitigation = RFDS_MITIGATION_VERW;
+
if (rfds_mitigation == RFDS_MITIGATION_OFF)
return;
- if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+ if (rfds_has_ucode())
+ verw_mitigation_selected = true;
+}
+
+static void __init rfds_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off())
+ return;
+
+ if (verw_mitigation_selected)
rfds_mitigation = RFDS_MITIGATION_VERW;
- if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
+ if (rfds_mitigation == RFDS_MITIGATION_VERW) {
+ if (!rfds_has_ucode())
+ rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
+ }
+
+ pr_info("%s\n", rfds_strings[rfds_mitigation]);
+}
+
+static void __init rfds_apply_mitigation(void)
+{
+ if (rfds_mitigation == RFDS_MITIGATION_VERW)
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
- else
- rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
}
static __init int rfds_parse_cmdline(char *str)
@@ -706,7 +738,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- rfds_select_mitigation();
/*
* As these mitigations are inter-related and rely on VERW instruction
--
2.34.1
* [PATCH v5 05/16] x86/bugs: Remove md_clear_*_mitigation()
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (3 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 04/16] x86/bugs: Restructure RFDS mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 06/16] x86/bugs: Restructure SRBDS mitigation David Kaplan
` (12 subsequent siblings)
17 siblings, 1 reply; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
The functionality in md_clear_update_mitigation() and
md_clear_select_mitigation() is now integrated into the select/update
functions for the MDS, TAA, MMIO, and RFDS vulnerabilities.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 63 --------------------------------------
1 file changed, 63 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 303718689aac..ae6619416ce1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -62,8 +62,6 @@ static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
static void __init mds_apply_mitigation(void);
-static void __init md_clear_update_mitigation(void);
-static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
static void __init taa_update_mitigation(void);
static void __init taa_apply_mitigation(void);
@@ -204,7 +202,6 @@ void __init cpu_select_mitigations(void)
taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
- md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -687,66 +684,6 @@ static __init int rfds_parse_cmdline(char *str)
}
early_param("reg_file_data_sampling", rfds_parse_cmdline);
-#undef pr_fmt
-#define pr_fmt(fmt) "" fmt
-
-static void __init md_clear_update_mitigation(void)
-{
- if (cpu_mitigations_off())
- return;
-
- if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
- goto out;
-
- /*
- * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
- * Stale Data mitigation, if necessary.
- */
- if (mds_mitigation == MDS_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_MDS)) {
- mds_mitigation = MDS_MITIGATION_FULL;
- mds_select_mitigation();
- }
- if (taa_mitigation == TAA_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_TAA)) {
- taa_mitigation = TAA_MITIGATION_VERW;
- taa_select_mitigation();
- }
- /*
- * MMIO_MITIGATION_OFF is not checked here so that cpu_buf_vm_clear
- * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
- */
- if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
- mmio_mitigation = MMIO_MITIGATION_VERW;
- mmio_select_mitigation();
- }
- if (rfds_mitigation == RFDS_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_RFDS)) {
- rfds_mitigation = RFDS_MITIGATION_VERW;
- rfds_select_mitigation();
- }
-out:
- if (boot_cpu_has_bug(X86_BUG_MDS))
- pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
- if (boot_cpu_has_bug(X86_BUG_TAA))
- pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
- if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
- pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
- if (boot_cpu_has_bug(X86_BUG_RFDS))
- pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
-}
-
-static void __init md_clear_select_mitigation(void)
-{
-
- /*
- * As these mitigations are inter-related and rely on VERW instruction
- * to clear the microarchitural buffers, update and print their status
- * after mitigation selection is done for each of these vulnerabilities.
- */
- md_clear_update_mitigation();
-}
-
#undef pr_fmt
#define pr_fmt(fmt) "SRBDS: " fmt
--
2.34.1
* [PATCH v5 06/16] x86/bugs: Restructure SRBDS mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (4 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 05/16] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 07/16] x86/bugs: Restructure GDS mitigation David Kaplan
` (11 subsequent siblings)
17 siblings, 1 reply; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
Restructure SRBDS to use select/apply functions to create consistent
vulnerability handling.
Define new AUTO mitigation for SRBDS.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ae6619416ce1..942db170eb4e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -72,6 +72,7 @@ static void __init rfds_select_mitigation(void);
static void __init rfds_update_mitigation(void);
static void __init rfds_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
+static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
@@ -225,6 +226,7 @@ void __init cpu_select_mitigations(void)
taa_apply_mitigation();
mmio_apply_mitigation();
rfds_apply_mitigation();
+ srbds_apply_mitigation();
}
/*
@@ -689,6 +691,7 @@ early_param("reg_file_data_sampling", rfds_parse_cmdline);
enum srbds_mitigations {
SRBDS_MITIGATION_OFF,
+ SRBDS_MITIGATION_AUTO,
SRBDS_MITIGATION_UCODE_NEEDED,
SRBDS_MITIGATION_FULL,
SRBDS_MITIGATION_TSX_OFF,
@@ -696,7 +699,7 @@ enum srbds_mitigations {
};
static enum srbds_mitigations srbds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_FULL : SRBDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_AUTO : SRBDS_MITIGATION_OFF;
static const char * const srbds_strings[] = {
[SRBDS_MITIGATION_OFF] = "Vulnerable",
@@ -747,8 +750,13 @@ void update_srbds_msr(void)
static void __init srbds_select_mitigation(void)
{
- if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+ if (!boot_cpu_has_bug(X86_BUG_SRBDS) || cpu_mitigations_off()) {
+ srbds_mitigation = SRBDS_MITIGATION_OFF;
return;
+ }
+
+ if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
+ srbds_mitigation = SRBDS_MITIGATION_FULL;
/*
* Check to see if this is one of the MDS_NO systems supporting TSX that
@@ -762,13 +770,17 @@ static void __init srbds_select_mitigation(void)
srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
- else if (cpu_mitigations_off() || srbds_off)
+ else if (srbds_off)
srbds_mitigation = SRBDS_MITIGATION_OFF;
- update_srbds_msr();
pr_info("%s\n", srbds_strings[srbds_mitigation]);
}
+static void __init srbds_apply_mitigation(void)
+{
+ update_srbds_msr();
+}
+
static int __init srbds_parse_cmdline(char *str)
{
if (!str)
--
2.34.1
* [PATCH v5 07/16] x86/bugs: Restructure GDS mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (5 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 06/16] x86/bugs: Restructure SRBDS mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 08/16] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
` (10 subsequent siblings)
17 siblings, 1 reply; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H. Peter Anvin
Cc: linux-kernel
Restructure GDS mitigation to use select/apply functions to create
consistent vulnerability handling.
Define new AUTO mitigation for GDS.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 43 +++++++++++++++++++++++++-------------
1 file changed, 29 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 942db170eb4e..57f9ebf90472 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -76,6 +76,7 @@ static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
+static void __init gds_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@@ -227,6 +228,7 @@ void __init cpu_select_mitigations(void)
mmio_apply_mitigation();
rfds_apply_mitigation();
srbds_apply_mitigation();
+ gds_apply_mitigation();
}
/*
@@ -827,6 +829,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
enum gds_mitigations {
GDS_MITIGATION_OFF,
+ GDS_MITIGATION_AUTO,
GDS_MITIGATION_UCODE_NEEDED,
GDS_MITIGATION_FORCE,
GDS_MITIGATION_FULL,
@@ -835,7 +838,7 @@ enum gds_mitigations {
};
static enum gds_mitigations gds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL : GDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_AUTO : GDS_MITIGATION_OFF;
static const char * const gds_strings[] = {
[GDS_MITIGATION_OFF] = "Vulnerable",
@@ -876,6 +879,7 @@ void update_gds_msr(void)
case GDS_MITIGATION_FORCE:
case GDS_MITIGATION_UCODE_NEEDED:
case GDS_MITIGATION_HYPERVISOR:
+ case GDS_MITIGATION_AUTO:
return;
}
@@ -899,26 +903,21 @@ static void __init gds_select_mitigation(void)
if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
gds_mitigation = GDS_MITIGATION_HYPERVISOR;
- goto out;
+ return;
}
if (cpu_mitigations_off())
gds_mitigation = GDS_MITIGATION_OFF;
/* Will verify below that mitigation _can_ be disabled */
+ if (gds_mitigation == GDS_MITIGATION_AUTO)
+ gds_mitigation = GDS_MITIGATION_FULL;
+
/* No microcode */
if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
- if (gds_mitigation == GDS_MITIGATION_FORCE) {
- /*
- * This only needs to be done on the boot CPU so do it
- * here rather than in update_gds_msr()
- */
- setup_clear_cpu_cap(X86_FEATURE_AVX);
- pr_warn("Microcode update needed! Disabling AVX as mitigation.\n");
- } else {
+ if (gds_mitigation != GDS_MITIGATION_FORCE)
gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
- }
- goto out;
+ return;
}
/* Microcode has mitigation, use it */
@@ -939,9 +938,25 @@ static void __init gds_select_mitigation(void)
*/
gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
}
+}
+
+static void __init gds_apply_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_GDS))
+ return;
+
+ /* Microcode is present */
+ if (x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)
+ update_gds_msr();
+ else if (gds_mitigation == GDS_MITIGATION_FORCE) {
+ /*
+ * This only needs to be done on the boot CPU so do it
+ * here rather than in update_gds_msr()
+ */
+ setup_clear_cpu_cap(X86_FEATURE_AVX);
+ pr_warn("Microcode update needed! Disabling AVX as mitigation.\n");
+ }
- update_gds_msr();
-out:
pr_info("%s\n", gds_strings[gds_mitigation]);
}
--
2.34.1
* [PATCH v5 08/16] x86/bugs: Restructure spectre_v1 mitigation
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure spectre_v1 to use select/apply functions to create
consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 57f9ebf90472..72e04938fdcb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,7 @@
*/
static void __init spectre_v1_select_mitigation(void);
+static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
@@ -223,6 +224,7 @@ void __init cpu_select_mitigations(void)
mmio_update_mitigation();
rfds_update_mitigation();
+ spectre_v1_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1017,10 +1019,14 @@ static bool smap_works_speculatively(void)
static void __init spectre_v1_select_mitigation(void)
{
- if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+}
+
+static void __init spectre_v1_apply_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
return;
- }
if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
/*
--
2.34.1
* [PATCH v5 09/16] x86/bugs: Allow retbleed=stuff only on Intel
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
The retbleed=stuff mitigation is only applicable to Intel CPUs affected
by retbleed. If this option is selected for any other vendor, print a
warning and fall back to the AUTO option.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 72e04938fdcb..84d3f6b3d1eb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1187,6 +1187,10 @@ static void __init retbleed_select_mitigation(void)
case RETBLEED_CMD_STUFF:
if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
+ pr_err("WARNING: retbleed=stuff only supported for Intel CPUs.\n");
+ goto do_cmd_auto;
+ }
retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
} else {
--
2.34.1
* [PATCH v5 10/16] x86/bugs: Restructure retbleed mitigation
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure retbleed mitigation to use select/update/apply functions to
create consistent vulnerability handling. The new
retbleed_update_mitigation() function simplifies the handling of the
dependency between spectre_v2 and retbleed.
The command line options now directly select a preferred mitigation
which simplifies the logic.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 180 ++++++++++++++++++-------------------
1 file changed, 90 insertions(+), 90 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 84d3f6b3d1eb..248b6065f4bc 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -57,6 +57,8 @@ static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
+static void __init retbleed_update_mitigation(void);
+static void __init retbleed_apply_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
@@ -187,11 +189,6 @@ void __init cpu_select_mitigations(void)
/* Select the proper CPU mitigations before patching alternatives: */
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
- /*
- * retbleed_select_mitigation() relies on the state set by
- * spectre_v2_select_mitigation(); specifically it wants to know about
- * spectre_v2=ibrs.
- */
retbleed_select_mitigation();
/*
* spectre_v2_user_select_mitigation() relies on the state set by
@@ -219,12 +216,14 @@ void __init cpu_select_mitigations(void)
* After mitigations are selected, some may need to update their
* choices.
*/
+ retbleed_update_mitigation();
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
rfds_update_mitigation();
spectre_v1_apply_mitigation();
+ retbleed_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1081,6 +1080,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
enum retbleed_mitigation {
RETBLEED_MITIGATION_NONE,
+ RETBLEED_MITIGATION_AUTO,
RETBLEED_MITIGATION_UNRET,
RETBLEED_MITIGATION_IBPB,
RETBLEED_MITIGATION_IBRS,
@@ -1088,14 +1088,6 @@ enum retbleed_mitigation {
RETBLEED_MITIGATION_STUFF,
};
-enum retbleed_mitigation_cmd {
- RETBLEED_CMD_OFF,
- RETBLEED_CMD_AUTO,
- RETBLEED_CMD_UNRET,
- RETBLEED_CMD_IBPB,
- RETBLEED_CMD_STUFF,
-};
-
static const char * const retbleed_strings[] = {
[RETBLEED_MITIGATION_NONE] = "Vulnerable",
[RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
@@ -1106,9 +1098,7 @@ static const char * const retbleed_strings[] = {
};
static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
- RETBLEED_MITIGATION_NONE;
-static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_CMD_AUTO : RETBLEED_CMD_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_MITIGATION_AUTO : RETBLEED_MITIGATION_NONE;
static int __ro_after_init retbleed_nosmt = false;
@@ -1125,15 +1115,15 @@ static int __init retbleed_parse_cmdline(char *str)
}
if (!strcmp(str, "off")) {
- retbleed_cmd = RETBLEED_CMD_OFF;
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
} else if (!strcmp(str, "auto")) {
- retbleed_cmd = RETBLEED_CMD_AUTO;
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
} else if (!strcmp(str, "unret")) {
- retbleed_cmd = RETBLEED_CMD_UNRET;
+ retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
} else if (!strcmp(str, "ibpb")) {
- retbleed_cmd = RETBLEED_CMD_IBPB;
+ retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
} else if (!strcmp(str, "stuff")) {
- retbleed_cmd = RETBLEED_CMD_STUFF;
+ retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
} else if (!strcmp(str, "nosmt")) {
retbleed_nosmt = true;
} else if (!strcmp(str, "force")) {
@@ -1154,57 +1144,42 @@ early_param("retbleed", retbleed_parse_cmdline);
static void __init retbleed_select_mitigation(void)
{
- bool mitigate_smt = false;
-
- if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
- return;
-
- switch (retbleed_cmd) {
- case RETBLEED_CMD_OFF:
+ if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) {
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
return;
+ }
- case RETBLEED_CMD_UNRET:
- if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
- retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
- } else {
+ switch (retbleed_mitigation) {
+ case RETBLEED_MITIGATION_UNRET:
+ if (!IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
- goto do_cmd_auto;
}
break;
-
- case RETBLEED_CMD_IBPB:
+ case RETBLEED_MITIGATION_IBPB:
if (!boot_cpu_has(X86_FEATURE_IBPB)) {
pr_err("WARNING: CPU does not support IBPB.\n");
- goto do_cmd_auto;
- } else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
- } else {
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ } else if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
- goto do_cmd_auto;
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
}
break;
-
- case RETBLEED_CMD_STUFF:
- if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
- spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
- pr_err("WARNING: retbleed=stuff only supported for Intel CPUs.\n");
- goto do_cmd_auto;
- }
- retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
-
- } else {
- if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING))
- pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
- else
- pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
-
- goto do_cmd_auto;
+ case RETBLEED_MITIGATION_STUFF:
+ if (!IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING)) {
+ pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ } else if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
+ pr_err("WARNING: retbleed=stuff only supported for Intel CPUs.\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
}
break;
+ default:
+ break;
+ }
-do_cmd_auto:
- case RETBLEED_CMD_AUTO:
+ if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
+ /* Intel mitigation selected in retbleed_update_mitigation() */
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
@@ -1212,18 +1187,65 @@ static void __init retbleed_select_mitigation(void)
else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
boot_cpu_has(X86_FEATURE_IBPB))
retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+ else
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
}
+ }
+}
- /*
- * The Intel mitigation (IBRS or eIBRS) was already selected in
- * spectre_v2_select_mitigation(). 'retbleed_mitigation' will
- * be set accordingly below.
- */
+static void __init retbleed_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
+ return;
- break;
+ if (retbleed_mitigation == RETBLEED_MITIGATION_NONE)
+ goto out;
+
+ /*
+ * retbleed=stuff is only allowed on Intel. If stuffing can't be used
+ * then a different mitigation will be selected below.
+ */
+ if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF) {
+ if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
+ pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ }
+ }
+ /*
+ * Let IBRS trump all on Intel without affecting the effects of the
+ * retbleed= cmdline option except for call depth based stuffing
+ */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+ switch (spectre_v2_enabled) {
+ case SPECTRE_V2_IBRS:
+ retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
+ break;
+ case SPECTRE_V2_EIBRS:
+ case SPECTRE_V2_EIBRS_RETPOLINE:
+ case SPECTRE_V2_EIBRS_LFENCE:
+ retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
+ break;
+ default:
+ if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+ pr_err(RETBLEED_INTEL_MSG);
+ }
+ /* If nothing has set the mitigation yet, default to NONE. */
+ if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO)
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
}
+out:
+ pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+}
+
+
+static void __init retbleed_apply_mitigation(void)
+{
+ bool mitigate_smt = false;
switch (retbleed_mitigation) {
+ case RETBLEED_MITIGATION_NONE:
+ return;
+
case RETBLEED_MITIGATION_UNRET:
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
setup_force_cpu_cap(X86_FEATURE_UNRET);
@@ -1273,28 +1295,6 @@ static void __init retbleed_select_mitigation(void)
if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) &&
(retbleed_nosmt || cpu_mitigations_auto_nosmt()))
cpu_smt_disable(false);
-
- /*
- * Let IBRS trump all on Intel without affecting the effects of the
- * retbleed= cmdline option except for call depth based stuffing
- */
- if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
- switch (spectre_v2_enabled) {
- case SPECTRE_V2_IBRS:
- retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
- break;
- case SPECTRE_V2_EIBRS:
- case SPECTRE_V2_EIBRS_RETPOLINE:
- case SPECTRE_V2_EIBRS_LFENCE:
- retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
- break;
- default:
- if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
- pr_err(RETBLEED_INTEL_MSG);
- }
- }
-
- pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
}
#undef pr_fmt
@@ -1851,8 +1851,8 @@ static void __init spectre_v2_select_mitigation(void)
if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
boot_cpu_has_bug(X86_BUG_RETBLEED) &&
- retbleed_cmd != RETBLEED_CMD_OFF &&
- retbleed_cmd != RETBLEED_CMD_STUFF &&
+ retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
+ retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
boot_cpu_has(X86_FEATURE_IBRS) &&
boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
mode = SPECTRE_V2_IBRS;
@@ -1960,7 +1960,7 @@ static void __init spectre_v2_select_mitigation(void)
(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
- if (retbleed_cmd != RETBLEED_CMD_IBPB) {
+ if (retbleed_mitigation != RETBLEED_MITIGATION_IBPB) {
setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
pr_info("Enabling Speculation Barrier for firmware calls\n");
}
--
2.34.1
* [PATCH v5 11/16] x86/bugs: Restructure spectre_v2_user mitigation
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure spectre_v2_user to use select/update/apply functions to
create consistent vulnerability handling.
The IBPB/STIBP choices are first decided based on the spectre_v2_user
command line but can be modified by the spectre_v2 command line option
as well.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 155 +++++++++++++++++++++----------------
1 file changed, 89 insertions(+), 66 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 248b6065f4bc..bb20cfb81015 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -60,6 +60,8 @@ static void __init retbleed_select_mitigation(void);
static void __init retbleed_update_mitigation(void);
static void __init retbleed_apply_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
+static void __init spectre_v2_user_update_mitigation(void);
+static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
@@ -190,11 +192,6 @@ void __init cpu_select_mitigations(void)
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
retbleed_select_mitigation();
- /*
- * spectre_v2_user_select_mitigation() relies on the state set by
- * retbleed_select_mitigation(); specifically the STIBP selection is
- * forced for UNRET or IBPB.
- */
spectre_v2_user_select_mitigation();
ssb_select_mitigation();
l1tf_select_mitigation();
@@ -217,6 +214,11 @@ void __init cpu_select_mitigations(void)
* choices.
*/
retbleed_update_mitigation();
+ /*
+ * spectre_v2_user_update_mitigation() depends on
+ * retbleed_update_mitigation().
+ */
+ spectre_v2_user_update_mitigation();
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
@@ -224,6 +226,7 @@ void __init cpu_select_mitigations(void)
spectre_v1_apply_mitigation();
retbleed_apply_mitigation();
+ spectre_v2_user_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1374,6 +1377,8 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_IBRS,
};
+static enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
+
enum spectre_v2_user_cmd {
SPECTRE_V2_USER_CMD_NONE,
SPECTRE_V2_USER_CMD_AUTO,
@@ -1412,31 +1417,19 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
pr_info("spectre_v2_user=%s forced on command line.\n", reason);
}
-static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
-
static enum spectre_v2_user_cmd __init
spectre_v2_parse_user_cmdline(void)
{
- enum spectre_v2_user_cmd mode;
char arg[20];
int ret, i;
- mode = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ?
- SPECTRE_V2_USER_CMD_AUTO : SPECTRE_V2_USER_CMD_NONE;
-
- switch (spectre_v2_cmd) {
- case SPECTRE_V2_CMD_NONE:
+ if (cpu_mitigations_off() || !IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2))
return SPECTRE_V2_USER_CMD_NONE;
- case SPECTRE_V2_CMD_FORCE:
- return SPECTRE_V2_USER_CMD_FORCE;
- default:
- break;
- }
ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
arg, sizeof(arg));
if (ret < 0)
- return mode;
+ return SPECTRE_V2_USER_CMD_AUTO;
for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
if (match_option(arg, ret, v2_user_options[i].option)) {
@@ -1447,7 +1440,7 @@ spectre_v2_parse_user_cmdline(void)
}
pr_err("Unknown user space protection option (%s). Switching to default\n", arg);
- return mode;
+ return SPECTRE_V2_USER_CMD_AUTO;
}
static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
@@ -1458,7 +1451,6 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
static void __init
spectre_v2_user_select_mitigation(void)
{
- enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
enum spectre_v2_user_cmd cmd;
if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
@@ -1467,48 +1459,65 @@ spectre_v2_user_select_mitigation(void)
cmd = spectre_v2_parse_user_cmdline();
switch (cmd) {
case SPECTRE_V2_USER_CMD_NONE:
- goto set_mode;
+ return;
case SPECTRE_V2_USER_CMD_FORCE:
- mode = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
break;
case SPECTRE_V2_USER_CMD_AUTO:
case SPECTRE_V2_USER_CMD_PRCTL:
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+ break;
case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
- mode = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
case SPECTRE_V2_USER_CMD_SECCOMP:
+ if (IS_ENABLED(CONFIG_SECCOMP))
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_SECCOMP;
+ else
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = spectre_v2_user_ibpb;
+ break;
case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
if (IS_ENABLED(CONFIG_SECCOMP))
- mode = SPECTRE_V2_USER_SECCOMP;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_SECCOMP;
else
- mode = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
}
- /* Initialize Indirect Branch Prediction Barrier */
- if (boot_cpu_has(X86_FEATURE_IBPB)) {
- static_branch_enable(&switch_vcpu_ibpb);
+ /*
+ * At this point, an STIBP mode other than "off" has been set.
+ * If STIBP support is not being forced, check if STIBP always-on
+ * is preferred.
+ */
+ if ((spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
+ spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) &&
+ boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
- spectre_v2_user_ibpb = mode;
- switch (cmd) {
- case SPECTRE_V2_USER_CMD_NONE:
- break;
- case SPECTRE_V2_USER_CMD_FORCE:
- case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
- case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
- static_branch_enable(&switch_mm_always_ibpb);
- spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
- break;
- case SPECTRE_V2_USER_CMD_PRCTL:
- case SPECTRE_V2_USER_CMD_AUTO:
- case SPECTRE_V2_USER_CMD_SECCOMP:
- static_branch_enable(&switch_mm_cond_ibpb);
- break;
- }
+ if (!boot_cpu_has(X86_FEATURE_IBPB))
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_NONE;
- pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
- static_key_enabled(&switch_mm_always_ibpb) ?
- "always-on" : "conditional");
+ if (!boot_cpu_has(X86_FEATURE_STIBP))
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
+}
+
+static void __init spectre_v2_user_update_mitigation(void)
+{
+ if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
+ return;
+
+ /* The spectre_v2 cmd line can override spectre_v2_user options */
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_NONE) {
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_NONE;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
+ } else if (spectre_v2_cmd == SPECTRE_V2_CMD_FORCE) {
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
}
/*
@@ -1526,30 +1535,44 @@ spectre_v2_user_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_STIBP) ||
!cpu_smt_possible() ||
(spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
- !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
+ !boot_cpu_has(X86_FEATURE_AUTOIBRS))) {
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
return;
+ }
- /*
- * At this point, an STIBP mode other than "off" has been set.
- * If STIBP support is not being forced, check if STIBP always-on
- * is preferred.
- */
- if (mode != SPECTRE_V2_USER_STRICT &&
- boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
- mode = SPECTRE_V2_USER_STRICT_PREFERRED;
-
- if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
- retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
- if (mode != SPECTRE_V2_USER_STRICT &&
- mode != SPECTRE_V2_USER_STRICT_PREFERRED)
+ if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
+ (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
+ retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
+ if (spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+ spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT_PREFERRED)
pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
- mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
}
+ pr_info("%s\n", spectre_v2_user_strings[spectre_v2_user_stibp]);
+}
- spectre_v2_user_stibp = mode;
+static void __init spectre_v2_user_apply_mitigation(void)
+{
+ /* Initialize Indirect Branch Prediction Barrier */
+ if (spectre_v2_user_ibpb != SPECTRE_V2_USER_NONE) {
+ static_branch_enable(&switch_vcpu_ibpb);
-set_mode:
- pr_info("%s\n", spectre_v2_user_strings[mode]);
+ switch (spectre_v2_user_ibpb) {
+ case SPECTRE_V2_USER_STRICT:
+ static_branch_enable(&switch_mm_always_ibpb);
+ break;
+ case SPECTRE_V2_USER_PRCTL:
+ case SPECTRE_V2_USER_SECCOMP:
+ static_branch_enable(&switch_mm_cond_ibpb);
+ break;
+ default:
+ break;
+ }
+
+ pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+ static_key_enabled(&switch_mm_always_ibpb) ?
+ "always-on" : "conditional");
+ }
}
static const char * const spectre_v2_strings[] = {
--
2.34.1
* [PATCH v5 12/16] x86/bugs: Restructure BHI mitigation
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure BHI mitigation to use select/update/apply functions to create
consistent vulnerability handling. The BHI mitigation was previously
selected from within spectre_v2_select_mitigation() and is now selected
from cpu_select_mitigations() like all the others.
Define a new AUTO mitigation for BHI.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 31 +++++++++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bb20cfb81015..b7063f58ae88 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -82,6 +82,9 @@ static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
static void __init gds_apply_mitigation(void);
+static void __init bhi_select_mitigation(void);
+static void __init bhi_update_mitigation(void);
+static void __init bhi_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@@ -208,6 +211,7 @@ void __init cpu_select_mitigations(void)
*/
srso_select_mitigation();
gds_select_mitigation();
+ bhi_select_mitigation();
/*
* After mitigations are selected, some may need to update their
@@ -223,6 +227,7 @@ void __init cpu_select_mitigations(void)
taa_update_mitigation();
mmio_update_mitigation();
rfds_update_mitigation();
+ bhi_update_mitigation();
spectre_v1_apply_mitigation();
retbleed_apply_mitigation();
@@ -233,6 +238,7 @@ void __init cpu_select_mitigations(void)
rfds_apply_mitigation();
srbds_apply_mitigation();
gds_apply_mitigation();
+ bhi_apply_mitigation();
}
/*
@@ -1792,12 +1798,13 @@ static bool __init spec_ctrl_bhi_dis(void)
enum bhi_mitigations {
BHI_MITIGATION_OFF,
+ BHI_MITIGATION_AUTO,
BHI_MITIGATION_ON,
BHI_MITIGATION_VMEXIT_ONLY,
};
static enum bhi_mitigations bhi_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_ON : BHI_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_AUTO : BHI_MITIGATION_OFF;
static int __init spectre_bhi_parse_cmdline(char *str)
{
@@ -1818,6 +1825,25 @@ static int __init spectre_bhi_parse_cmdline(char *str)
early_param("spectre_bhi", spectre_bhi_parse_cmdline);
static void __init bhi_select_mitigation(void)
+{
+ if (!boot_cpu_has(X86_BUG_BHI) || cpu_mitigations_off())
+ bhi_mitigation = BHI_MITIGATION_OFF;
+
+ if (bhi_mitigation == BHI_MITIGATION_AUTO)
+ bhi_mitigation = BHI_MITIGATION_ON;
+}
+
+static void __init bhi_update_mitigation(void)
+{
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_NONE)
+ bhi_mitigation = BHI_MITIGATION_OFF;
+
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
+ spectre_v2_cmd == SPECTRE_V2_CMD_AUTO)
+ bhi_mitigation = BHI_MITIGATION_OFF;
+}
+
+static void __init bhi_apply_mitigation(void)
{
if (bhi_mitigation == BHI_MITIGATION_OFF)
return;
@@ -1959,9 +1985,6 @@ static void __init spectre_v2_select_mitigation(void)
mode == SPECTRE_V2_RETPOLINE)
spec_ctrl_disable_kernel_rrsba();
- if (boot_cpu_has(X86_BUG_BHI))
- bhi_select_mitigation();
-
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
--
2.34.1
* [PATCH v5 13/16] x86/bugs: Restructure spectre_v2 mitigation
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure spectre_v2 to use select/update/apply functions to create
consistent vulnerability handling.
The spectre_v2 mitigation may be updated based on the selected retbleed
mitigation.
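The select/update/apply split described above can be illustrated with a minimal, self-contained sketch. All names and enum values here are hypothetical stand-ins, not the kernel's actual definitions; the point is only the phase ordering: every select() runs before any update(), so an update() can safely consult the choice another mitigation made.

```c
/* Hypothetical model of the three-phase mitigation flow. */
enum v2_mode { V2_NONE, V2_RETPOLINE, V2_IBRS };
enum rb_mode { RB_NONE, RB_UNRET, RB_IBPB };

static enum v2_mode v2 = V2_NONE;
static enum rb_mode rb = RB_NONE;
static enum v2_mode v2_active;  /* what apply() actually enabled */

/* Phase 1: each select() picks a mode in isolation. */
static void v2_select(void) { v2 = V2_RETPOLINE; }
static void rb_select(void) { rb = RB_UNRET; }

/* Phase 2: update() may revise a choice based on other mitigations.
 * Here spectre_v2 upgrades to IBRS when retbleed is active, loosely
 * mirroring the cross-dependency handled in this patch. */
static void v2_update(void) { if (rb != RB_NONE) v2 = V2_IBRS; }
/* retbleed, in turn, drops its own thunk once IBRS covers it. */
static void rb_update(void) { if (v2 == V2_IBRS) rb = RB_NONE; }

/* Phase 3: apply() enables whatever was finally chosen. */
static void v2_apply(void) { v2_active = v2; }

static void select_mitigations(void)
{
	v2_select(); rb_select();   /* select all first */
	v2_update(); rb_update();   /* then cross-update, in order */
	v2_apply();                 /* finally apply */
}
```

Because the phases never interleave, ordering constraints reduce to the call order inside each phase, which is exactly what the comments added to cpu_select_mitigations() document.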
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 80 +++++++++++++++++++++++---------------
1 file changed, 49 insertions(+), 31 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b7063f58ae88..8fe00fe987d5 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,8 @@
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
+static void __init spectre_v2_update_mitigation(void);
+static void __init spectre_v2_apply_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init retbleed_update_mitigation(void);
static void __init retbleed_apply_mitigation(void);
@@ -217,6 +219,12 @@ void __init cpu_select_mitigations(void)
* After mitigations are selected, some may need to update their
* choices.
*/
+ spectre_v2_update_mitigation();
+ /*
+ * retbleed_update_mitigation() relies on the state set by
+ * spectre_v2_update_mitigation(); specifically it wants to know about
+ * spectre_v2=ibrs.
+ */
retbleed_update_mitigation();
/*
* spectre_v2_user_update_mitigation() depends on
@@ -230,6 +238,7 @@ void __init cpu_select_mitigations(void)
bhi_update_mitigation();
spectre_v1_apply_mitigation();
+ spectre_v2_apply_mitigation();
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
mds_apply_mitigation();
@@ -1876,18 +1885,18 @@ static void __init bhi_apply_mitigation(void)
static void __init spectre_v2_select_mitigation(void)
{
- enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+ spectre_v2_cmd = spectre_v2_parse_cmdline();
/*
* If the CPU is not affected and the command line mode is NONE or AUTO
* then nothing to do.
*/
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
- (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
+ (spectre_v2_cmd == SPECTRE_V2_CMD_NONE || spectre_v2_cmd == SPECTRE_V2_CMD_AUTO))
return;
- switch (cmd) {
+ switch (spectre_v2_cmd) {
case SPECTRE_V2_CMD_NONE:
return;
@@ -1898,16 +1907,6 @@ static void __init spectre_v2_select_mitigation(void)
break;
}
- if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
- boot_cpu_has_bug(X86_BUG_RETBLEED) &&
- retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
- retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
- boot_cpu_has(X86_FEATURE_IBRS) &&
- boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
- mode = SPECTRE_V2_IBRS;
- break;
- }
-
mode = spectre_v2_select_retpoline();
break;
@@ -1941,10 +1940,32 @@ static void __init spectre_v2_select_mitigation(void)
break;
}
- if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+ spectre_v2_enabled = mode;
+}
+
+static void __init spectre_v2_update_mitigation(void)
+{
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_AUTO) {
+ if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
+ boot_cpu_has_bug(X86_BUG_RETBLEED) &&
+ retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
+ retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
+ boot_cpu_has(X86_FEATURE_IBRS) &&
+ boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+ spectre_v2_enabled = SPECTRE_V2_IBRS;
+ }
+ }
+
+ if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) && !cpu_mitigations_off())
+ pr_info("%s\n", spectre_v2_strings[spectre_v2_enabled]);
+}
+
+static void __init spectre_v2_apply_mitigation(void)
+{
+ if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
- if (spectre_v2_in_ibrs_mode(mode)) {
+ if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
} else {
@@ -1953,8 +1974,10 @@ static void __init spectre_v2_select_mitigation(void)
}
}
- switch (mode) {
+ switch (spectre_v2_enabled) {
case SPECTRE_V2_NONE:
+ return;
+
case SPECTRE_V2_EIBRS:
break;
@@ -1980,15 +2003,12 @@ static void __init spectre_v2_select_mitigation(void)
* JMPs gets protection against BHI and Intramode-BTI, but RET
* prediction from a non-RSB predictor is still a risk.
*/
- if (mode == SPECTRE_V2_EIBRS_LFENCE ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_RETPOLINE)
+ if (spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE ||
+ spectre_v2_enabled == SPECTRE_V2_EIBRS_RETPOLINE ||
+ spectre_v2_enabled == SPECTRE_V2_RETPOLINE)
spec_ctrl_disable_kernel_rrsba();
- spectre_v2_enabled = mode;
- pr_info("%s\n", spectre_v2_strings[mode]);
-
- spectre_v2_select_rsb_mitigation(mode);
+ spectre_v2_select_rsb_mitigation(spectre_v2_enabled);
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
@@ -1996,10 +2016,10 @@ static void __init spectre_v2_select_mitigation(void)
* firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
* otherwise enabled.
*
- * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
- * the user might select retpoline on the kernel command line and if
- * the CPU supports Enhanced IBRS, kernel might un-intentionally not
- * enable IBRS around firmware calls.
+ * Use "spectre_v2_enabled" to check Enhanced IBRS instead of
+ * boot_cpu_has(), because the user might select retpoline on the kernel
+ * command line and if the CPU supports Enhanced IBRS, kernel might
+ * un-intentionally not enable IBRS around firmware calls.
*/
if (boot_cpu_has_bug(X86_BUG_RETBLEED) &&
boot_cpu_has(X86_FEATURE_IBPB) &&
@@ -2011,13 +2031,11 @@ static void __init spectre_v2_select_mitigation(void)
pr_info("Enabling Speculation Barrier for firmware calls\n");
}
- } else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+ } else if (boot_cpu_has(X86_FEATURE_IBRS) &&
+ !spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
-
- /* Set up IBPB and STIBP depending on the general spectre V2 command */
- spectre_v2_cmd = cmd;
}
static void update_stibp_msr(void * __unused)
--
2.34.1
* [PATCH v5 14/16] x86/bugs: Restructure SSB mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (12 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 13/16] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-04-29 12:54 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 15/16] x86/bugs: Restructure L1TF mitigation David Kaplan
` (3 subsequent siblings)
17 siblings, 2 replies; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure SSB to use select/apply functions to create consistent
vulnerability handling.
Remove __ssb_select_mitigation() and split the functionality between the
select/apply functions.
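The shape of that split can be sketched as follows; the names, enum values, and command strings are illustrative only, not the kernel's real ssb code. select() does nothing but record a decision, and apply() is the only place with side effects:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical SSB model: decide in select(), act in apply(). */
enum ssb_mit { SSB_NONE, SSB_PRCTL, SSB_DISABLE };

static enum ssb_mit ssb = SSB_NONE;
static bool ssbd_cap_forced;  /* stand-in for a forced CPU capability */

/* select: parse the request and record the mode, nothing else */
static void ssb_select(const char *cmd)
{
	if (!strcmp(cmd, "on"))
		ssb = SSB_DISABLE;
	else if (!strcmp(cmd, "prctl") || !strcmp(cmd, "auto"))
		ssb = SSB_PRCTL;
	else
		ssb = SSB_NONE;
}

/* apply: act on the recorded decision */
static void ssb_apply(void)
{
	if (ssb == SSB_DISABLE)
		ssbd_cap_forced = true;
}
```

Keeping side effects out of select() is what lets a later update() phase (for mitigations that need one) revise the decision before anything irreversible happens.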
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 36 +++++++++++++++++-------------------
1 file changed, 17 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 8fe00fe987d5..e526d06171cd 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,7 @@ static void __init spectre_v2_user_select_mitigation(void);
static void __init spectre_v2_user_update_mitigation(void);
static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
+static void __init ssb_apply_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
@@ -241,6 +242,7 @@ void __init cpu_select_mitigations(void)
spectre_v2_apply_mitigation();
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
+ ssb_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -2224,19 +2226,18 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
return cmd;
}
-static enum ssb_mitigation __init __ssb_select_mitigation(void)
+static void __init ssb_select_mitigation(void)
{
- enum ssb_mitigation mode = SPEC_STORE_BYPASS_NONE;
enum ssb_mitigation_cmd cmd;
if (!boot_cpu_has(X86_FEATURE_SSBD))
- return mode;
+ goto out;
cmd = ssb_parse_cmdline();
if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS) &&
(cmd == SPEC_STORE_BYPASS_CMD_NONE ||
cmd == SPEC_STORE_BYPASS_CMD_AUTO))
- return mode;
+ return;
switch (cmd) {
case SPEC_STORE_BYPASS_CMD_SECCOMP:
@@ -2245,28 +2246,35 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
* enabled.
*/
if (IS_ENABLED(CONFIG_SECCOMP))
- mode = SPEC_STORE_BYPASS_SECCOMP;
+ ssb_mode = SPEC_STORE_BYPASS_SECCOMP;
else
- mode = SPEC_STORE_BYPASS_PRCTL;
+ ssb_mode = SPEC_STORE_BYPASS_PRCTL;
break;
case SPEC_STORE_BYPASS_CMD_ON:
- mode = SPEC_STORE_BYPASS_DISABLE;
+ ssb_mode = SPEC_STORE_BYPASS_DISABLE;
break;
case SPEC_STORE_BYPASS_CMD_AUTO:
case SPEC_STORE_BYPASS_CMD_PRCTL:
- mode = SPEC_STORE_BYPASS_PRCTL;
+ ssb_mode = SPEC_STORE_BYPASS_PRCTL;
break;
case SPEC_STORE_BYPASS_CMD_NONE:
break;
}
+out:
+ if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+ pr_info("%s\n", ssb_strings[ssb_mode]);
+}
+
+static void __init ssb_apply_mitigation(void)
+{
/*
* We have three CPU feature flags that are in play here:
* - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
* - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
* - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
*/
- if (mode == SPEC_STORE_BYPASS_DISABLE) {
+ if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) {
setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
/*
* Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
@@ -2280,16 +2288,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
update_spec_ctrl(x86_spec_ctrl_base);
}
}
-
- return mode;
-}
-
-static void ssb_select_mitigation(void)
-{
- ssb_mode = __ssb_select_mitigation();
-
- if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
- pr_info("%s\n", ssb_strings[ssb_mode]);
}
#undef pr_fmt
--
2.34.1
* [PATCH v5 15/16] x86/bugs: Restructure L1TF mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (13 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 14/16] x86/bugs: Restructure SSB mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 16:17 ` [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation David Kaplan
` (2 subsequent siblings)
17 siblings, 1 reply; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure L1TF to use select/apply functions to create consistent
vulnerability handling.
Define new AUTO mitigation for L1TF.
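The AUTO resolution can be sketched in isolation; the flag variables below are hypothetical stand-ins for the kernel's global mitigation switches, not its actual API:

```c
#include <stdbool.h>

/* Hypothetical L1TF model: AUTO is resolved once, in select(). */
enum l1tf_mit { L1TF_OFF, L1TF_AUTO, L1TF_FLUSH, L1TF_FLUSH_NOSMT };

static bool mitigations_off;        /* stand-in for cpu_mitigations_off() */
static bool mitigations_auto_nosmt; /* stand-in for cpu_mitigations_auto_nosmt() */
static bool has_l1tf_bug = true;

static enum l1tf_mit resolve_l1tf(enum l1tf_mit m)
{
	if (!has_l1tf_bug || mitigations_off)
		return L1TF_OFF;
	if (m == L1TF_AUTO)
		return mitigations_auto_nosmt ? L1TF_FLUSH_NOSMT
					      : L1TF_FLUSH;
	return m; /* an explicit command-line choice wins */
}
```

With AUTO as an explicit enum value, the default no longer has to be baked into the initializer as FLUSH, and apply() can treat AUTO as already resolved.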
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/include/asm/processor.h | 1 +
arch/x86/kernel/cpu/bugs.c | 25 +++++++++++++++++++------
arch/x86/kvm/vmx/vmx.c | 2 ++
3 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index eaa7214d6953..62705783ca3c 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -735,6 +735,7 @@ void store_cpu_caps(struct cpuinfo_x86 *info);
enum l1tf_mitigations {
L1TF_MITIGATION_OFF,
+ L1TF_MITIGATION_AUTO,
L1TF_MITIGATION_FLUSH_NOWARN,
L1TF_MITIGATION_FLUSH,
L1TF_MITIGATION_FLUSH_NOSMT,
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index e526d06171cd..5f718537ba70 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -67,6 +67,7 @@ static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init ssb_apply_mitigation(void);
static void __init l1tf_select_mitigation(void);
+static void __init l1tf_apply_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
static void __init mds_apply_mitigation(void);
@@ -243,6 +244,7 @@ void __init cpu_select_mitigations(void)
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
ssb_apply_mitigation();
+ l1tf_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -2543,7 +2545,7 @@ EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
/* Default mitigation for L1TF-affected CPUs */
enum l1tf_mitigations l1tf_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_FLUSH : L1TF_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_AUTO : L1TF_MITIGATION_OFF;
#if IS_ENABLED(CONFIG_KVM_INTEL)
EXPORT_SYMBOL_GPL(l1tf_mitigation);
#endif
@@ -2590,23 +2592,34 @@ static void override_cache_bits(struct cpuinfo_x86 *c)
}
static void __init l1tf_select_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) {
+ l1tf_mitigation = L1TF_MITIGATION_OFF;
+ return;
+ }
+
+ if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
+ if (cpu_mitigations_auto_nosmt())
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+ else
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+ }
+}
+
+static void __init l1tf_apply_mitigation(void)
{
u64 half_pa;
if (!boot_cpu_has_bug(X86_BUG_L1TF))
return;
- if (cpu_mitigations_off())
- l1tf_mitigation = L1TF_MITIGATION_OFF;
- else if (cpu_mitigations_auto_nosmt())
- l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
-
override_cache_bits(&boot_cpu_data);
switch (l1tf_mitigation) {
case L1TF_MITIGATION_OFF:
case L1TF_MITIGATION_FLUSH_NOWARN:
case L1TF_MITIGATION_FLUSH:
+ case L1TF_MITIGATION_AUTO:
break;
case L1TF_MITIGATION_FLUSH_NOSMT:
case L1TF_MITIGATION_FULL:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1547bfacd40f..1b2a783f9ad9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -273,6 +273,7 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
case L1TF_MITIGATION_OFF:
l1tf = VMENTER_L1D_FLUSH_NEVER;
break;
+ case L1TF_MITIGATION_AUTO:
case L1TF_MITIGATION_FLUSH_NOWARN:
case L1TF_MITIGATION_FLUSH:
case L1TF_MITIGATION_FLUSH_NOSMT:
@@ -7704,6 +7705,7 @@ int vmx_vm_init(struct kvm *kvm)
case L1TF_MITIGATION_FLUSH_NOWARN:
/* 'I explicitly don't care' is set */
break;
+ case L1TF_MITIGATION_AUTO:
case L1TF_MITIGATION_FLUSH:
case L1TF_MITIGATION_FLUSH_NOSMT:
case L1TF_MITIGATION_FULL:
--
2.34.1
* [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (14 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 15/16] x86/bugs: Restructure L1TF mitigation David Kaplan
@ 2025-04-18 16:17 ` David Kaplan
2025-04-29 16:50 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-04-18 20:03 ` [PATCH v5 00/16] Attack vector controls (part 1) Ingo Molnar
2025-04-22 5:22 ` Josh Poimboeuf
17 siblings, 2 replies; 65+ messages in thread
From: David Kaplan @ 2025-04-18 16:17 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure SRSO to use select/update/apply functions to create
consistent vulnerability handling. As with retbleed, the command line
options now directly select mitigations, which can later be modified.
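The "command line writes the mitigation directly" idea can be sketched like this; the enum and option strings below are illustrative, not the kernel's definitions. The separate command enum disappears because the parser and the later update() phase write the same variable:

```c
#include <string.h>

/* Hypothetical SRSO model: no separate cmd enum, the parser writes
 * straight into the mitigation variable. */
enum srso_mit { SRSO_NONE, SRSO_AUTO, SRSO_MICROCODE,
		SRSO_SAFE_RET, SRSO_IBPB, SRSO_IBPB_ON_VMEXIT };

static enum srso_mit srso = SRSO_AUTO; /* default, resolved by select() */

static int srso_parse(const char *str)
{
	if (!str)
		return -1;

	if (!strcmp(str, "off"))
		srso = SRSO_NONE;
	else if (!strcmp(str, "microcode"))
		srso = SRSO_MICROCODE;
	else if (!strcmp(str, "safe-ret"))
		srso = SRSO_SAFE_RET;
	else if (!strcmp(str, "ibpb"))
		srso = SRSO_IBPB;
	else if (!strcmp(str, "ibpb-vmexit"))
		srso = SRSO_IBPB_ON_VMEXIT;
	return 0;
}
```

select() then only has to resolve AUTO and downgrade infeasible choices, and update() can still override the value later (e.g. reusing retbleed's IBPB), which is what lets the srso_mitigation_cmd enum be deleted.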
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 212 +++++++++++++++++--------------------
1 file changed, 99 insertions(+), 113 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5f718537ba70..85d27ba2c83c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -84,6 +84,8 @@ static void __init srbds_select_mitigation(void);
static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
+static void __init srso_update_mitigation(void);
+static void __init srso_apply_mitigation(void);
static void __init gds_select_mitigation(void);
static void __init gds_apply_mitigation(void);
static void __init bhi_select_mitigation(void);
@@ -208,11 +210,6 @@ void __init cpu_select_mitigations(void)
rfds_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
-
- /*
- * srso_select_mitigation() depends and must run after
- * retbleed_select_mitigation().
- */
srso_select_mitigation();
gds_select_mitigation();
bhi_select_mitigation();
@@ -238,6 +235,8 @@ void __init cpu_select_mitigations(void)
mmio_update_mitigation();
rfds_update_mitigation();
bhi_update_mitigation();
+ /* srso_update_mitigation() depends on retbleed_update_mitigation(). */
+ srso_update_mitigation();
spectre_v1_apply_mitigation();
spectre_v2_apply_mitigation();
@@ -250,6 +249,7 @@ void __init cpu_select_mitigations(void)
mmio_apply_mitigation();
rfds_apply_mitigation();
srbds_apply_mitigation();
+ srso_apply_mitigation();
gds_apply_mitigation();
bhi_apply_mitigation();
}
@@ -2679,6 +2679,7 @@ early_param("l1tf", l1tf_cmdline);
enum srso_mitigation {
SRSO_MITIGATION_NONE,
+ SRSO_MITIGATION_AUTO,
SRSO_MITIGATION_UCODE_NEEDED,
SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
SRSO_MITIGATION_MICROCODE,
@@ -2688,14 +2689,6 @@ enum srso_mitigation {
SRSO_MITIGATION_BP_SPEC_REDUCE,
};
-enum srso_mitigation_cmd {
- SRSO_CMD_OFF,
- SRSO_CMD_MICROCODE,
- SRSO_CMD_SAFE_RET,
- SRSO_CMD_IBPB,
- SRSO_CMD_IBPB_ON_VMEXIT,
-};
-
static const char * const srso_strings[] = {
[SRSO_MITIGATION_NONE] = "Vulnerable",
[SRSO_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
@@ -2707,8 +2700,7 @@ static const char * const srso_strings[] = {
[SRSO_MITIGATION_BP_SPEC_REDUCE] = "Mitigation: Reduced Speculation"
};
-static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
-static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO;
static int __init srso_parse_cmdline(char *str)
{
@@ -2716,15 +2708,15 @@ static int __init srso_parse_cmdline(char *str)
return -EINVAL;
if (!strcmp(str, "off"))
- srso_cmd = SRSO_CMD_OFF;
+ srso_mitigation = SRSO_MITIGATION_NONE;
else if (!strcmp(str, "microcode"))
- srso_cmd = SRSO_CMD_MICROCODE;
+ srso_mitigation = SRSO_MITIGATION_MICROCODE;
else if (!strcmp(str, "safe-ret"))
- srso_cmd = SRSO_CMD_SAFE_RET;
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
else if (!strcmp(str, "ibpb"))
- srso_cmd = SRSO_CMD_IBPB;
+ srso_mitigation = SRSO_MITIGATION_IBPB;
else if (!strcmp(str, "ibpb-vmexit"))
- srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT;
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
else
pr_err("Ignoring unknown SRSO option (%s).", str);
@@ -2738,130 +2730,80 @@ static void __init srso_select_mitigation(void)
{
bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
- if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
- cpu_mitigations_off() ||
- srso_cmd == SRSO_CMD_OFF) {
- if (boot_cpu_has(X86_FEATURE_SBPB))
- x86_pred_cmd = PRED_CMD_SBPB;
- goto out;
- }
+ if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+ srso_mitigation = SRSO_MITIGATION_NONE;
+
+ if (srso_mitigation == SRSO_MITIGATION_NONE)
+ return;
+
+ if (srso_mitigation == SRSO_MITIGATION_AUTO)
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
if (has_microcode) {
/*
* Zen1/2 with SMT off aren't vulnerable after the right
* IBPB microcode has been applied.
- *
- * Zen1/2 don't have SBPB, no need to try to enable it here.
*/
if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
- goto out;
- }
-
- if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
- srso_mitigation = SRSO_MITIGATION_IBPB;
- goto out;
+ srso_mitigation = SRSO_MITIGATION_NONE;
+ return;
}
} else {
pr_warn("IBPB-extending microcode not applied!\n");
pr_warn(SRSO_NOTICE);
-
- /* may be overwritten by SRSO_CMD_SAFE_RET below */
- srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
}
- switch (srso_cmd) {
- case SRSO_CMD_MICROCODE:
- if (has_microcode) {
- srso_mitigation = SRSO_MITIGATION_MICROCODE;
- pr_warn(SRSO_NOTICE);
- }
- break;
-
- case SRSO_CMD_SAFE_RET:
- if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
+ switch (srso_mitigation) {
+ case SRSO_MITIGATION_SAFE_RET:
+ if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO)) {
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
goto ibpb_on_vmexit;
+ }
- if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
- /*
- * Enable the return thunk for generated code
- * like ftrace, static_call, etc.
- */
- setup_force_cpu_cap(X86_FEATURE_RETHUNK);
- setup_force_cpu_cap(X86_FEATURE_UNRET);
-
- if (boot_cpu_data.x86 == 0x19) {
- setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
- x86_return_thunk = srso_alias_return_thunk;
- } else {
- setup_force_cpu_cap(X86_FEATURE_SRSO);
- x86_return_thunk = srso_return_thunk;
- }
- if (has_microcode)
- srso_mitigation = SRSO_MITIGATION_SAFE_RET;
- else
- srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
- } else {
+ if (!IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+ srso_mitigation = SRSO_MITIGATION_NONE;
}
- break;
- case SRSO_CMD_IBPB:
- if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- if (has_microcode) {
- setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
- srso_mitigation = SRSO_MITIGATION_IBPB;
-
- /*
- * IBPB on entry already obviates the need for
- * software-based untraining so clear those in case some
- * other mitigation like Retbleed has selected them.
- */
- setup_clear_cpu_cap(X86_FEATURE_UNRET);
- setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
-
- /*
- * There is no need for RSB filling: write_ibpb() ensures
- * all predictions, including the RSB, are invalidated,
- * regardless of IBPB implementation.
- */
- setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
- }
- } else {
- pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
- }
+ if (!has_microcode)
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
break;
-
ibpb_on_vmexit:
- case SRSO_CMD_IBPB_ON_VMEXIT:
+ case SRSO_MITIGATION_IBPB_ON_VMEXIT:
if (boot_cpu_has(X86_FEATURE_SRSO_BP_SPEC_REDUCE)) {
pr_notice("Reducing speculation to address VM/HV SRSO attack vector.\n");
srso_mitigation = SRSO_MITIGATION_BP_SPEC_REDUCE;
break;
}
-
- if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- if (has_microcode) {
- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
- srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
-
- /*
- * There is no need for RSB filling: write_ibpb() ensures
- * all predictions, including the RSB, are invalidated,
- * regardless of IBPB implementation.
- */
- setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
- }
- } else {
+ fallthrough;
+ case SRSO_MITIGATION_IBPB:
+ if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
+ srso_mitigation = SRSO_MITIGATION_NONE;
}
+
+ if (!has_microcode)
+ srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
break;
default:
break;
}
+}
-out:
+static void __init srso_update_mitigation(void)
+{
+ /* If retbleed is using IBPB, that works for SRSO as well */
+ if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB &&
+ boot_cpu_has(X86_FEATURE_IBPB_BRTYPE))
+ srso_mitigation = SRSO_MITIGATION_IBPB;
+
+ if (boot_cpu_has_bug(X86_BUG_SRSO) && !cpu_mitigations_off())
+ pr_info("%s\n", srso_strings[srso_mitigation]);
+}
+
+static void __init srso_apply_mitigation(void)
+{
/*
* Clear the feature flag if this mitigation is not selected as that
* feature flag controls the BpSpecReduce MSR bit toggling in KVM.
@@ -2869,8 +2811,52 @@ static void __init srso_select_mitigation(void)
if (srso_mitigation != SRSO_MITIGATION_BP_SPEC_REDUCE)
setup_clear_cpu_cap(X86_FEATURE_SRSO_BP_SPEC_REDUCE);
- if (srso_mitigation != SRSO_MITIGATION_NONE)
- pr_info("%s\n", srso_strings[srso_mitigation]);
+ if (srso_mitigation == SRSO_MITIGATION_NONE) {
+ if (boot_cpu_has(X86_FEATURE_SBPB))
+ x86_pred_cmd = PRED_CMD_SBPB;
+ return;
+ }
+
+ switch (srso_mitigation) {
+ case SRSO_MITIGATION_SAFE_RET:
+ case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+ /*
+ * Enable the return thunk for generated code
+ * like ftrace, static_call, etc.
+ */
+ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+ setup_force_cpu_cap(X86_FEATURE_UNRET);
+
+ if (boot_cpu_data.x86 == 0x19) {
+ setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+ x86_return_thunk = srso_alias_return_thunk;
+ } else {
+ setup_force_cpu_cap(X86_FEATURE_SRSO);
+ x86_return_thunk = srso_return_thunk;
+ }
+ break;
+ case SRSO_MITIGATION_IBPB:
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+ /*
+ * IBPB on entry already obviates the need for
+ * software-based untraining so clear those in case some
+ * other mitigation like Retbleed has selected them.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_UNRET);
+ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+ fallthrough;
+ case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ /*
+ * There is no need for RSB filling: entry_ibpb() ensures
+ * all predictions, including the RSB, are invalidated,
+ * regardless of IBPB implementation.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+ break;
+ default:
+ break;
+ }
}
#undef pr_fmt
--
2.34.1
* Re: [PATCH v5 00/16] Attack vector controls (part 1)
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (15 preceding siblings ...)
2025-04-18 16:17 ` [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation David Kaplan
@ 2025-04-18 20:03 ` Ingo Molnar
2025-04-18 21:33 ` Borislav Petkov
2025-04-22 5:22 ` Josh Poimboeuf
17 siblings, 1 reply; 65+ messages in thread
From: Ingo Molnar @ 2025-04-18 20:03 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
* David Kaplan <david.kaplan@amd.com> wrote:
> This is an updated version of the first half of the attack vector series
> which focuses on restructuring arch/x86/kernel/cpu/bugs.c.
>
> For more info on the attack vector series, please see v4 at
> https://lore.kernel.org/all/20250310164023.779191-1-david.kaplan@amd.com/.
>
> These patches restructure the existing mitigation selection logic to use a
> uniform set of functions. First, the "select" function is called for each
> mitigation to select an appropriate mitigation. Unless a mitigation is
> explicitly selected or disabled with a command line option, the default
> mitigation is AUTO and the "select" function will then choose the best
> mitigation. After the "select" function is called for each mitigation,
> some mitigations define an "update" function which can be used to update
> the selection, based on the choices made by other mitigations. Finally,
> the "apply" function is called which enables the chosen mitigation.
>
> This structure simplifies the mitigation control logic, especially when
> there are dependencies between multiple vulnerabilities.
>
> This is mostly code restructuring without functional changes, except where
> noted.
>
> Compared to v4 this only includes bug fixes/cleanup.
>
> David Kaplan (16):
> x86/bugs: Restructure MDS mitigation
> x86/bugs: Restructure TAA mitigation
> x86/bugs: Restructure MMIO mitigation
> x86/bugs: Restructure RFDS mitigation
> x86/bugs: Remove md_clear_*_mitigation()
> x86/bugs: Restructure SRBDS mitigation
> x86/bugs: Restructure GDS mitigation
> x86/bugs: Restructure spectre_v1 mitigation
> x86/bugs: Allow retbleed=stuff only on Intel
> x86/bugs: Restructure retbleed mitigation
> x86/bugs: Restructure spectre_v2_user mitigation
> x86/bugs: Restructure BHI mitigation
> x86/bugs: Restructure spectre_v2 mitigation
> x86/bugs: Restructure SSB mitigation
> x86/bugs: Restructure L1TF mitigation
> x86/bugs: Restructure SRSO mitigation
>
> arch/x86/include/asm/processor.h | 1 +
> arch/x86/kernel/cpu/bugs.c | 1112 +++++++++++++++++-------------
> arch/x86/kvm/vmx/vmx.c | 2 +
> 3 files changed, 644 insertions(+), 471 deletions(-)
So I really like this cleanup & restructuring.
A namespace suggestion.
Instead of _op_mitigation postfixes:
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init retbleed_update_mitigation(void);
static void __init retbleed_apply_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
static void __init spectre_v2_user_update_mitigation(void);
static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init ssb_apply_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init l1tf_apply_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
static void __init mds_apply_mitigation(void);
static void __init taa_select_mitigation(void);
static void __init taa_update_mitigation(void);
static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init mmio_update_mitigation(void);
static void __init mmio_apply_mitigation(void);
static void __init rfds_select_mitigation(void);
static void __init rfds_update_mitigation(void);
static void __init rfds_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init srso_update_mitigation(void);
static void __init srso_apply_mitigation(void);
static void __init gds_select_mitigation(void);
static void __init gds_apply_mitigation(void);
static void __init bhi_select_mitigation(void);
static void __init bhi_update_mitigation(void);
static void __init bhi_apply_mitigation(void);
Wouldn't it be nicer to have mitigation_op_ prefixes, like most kernel
subsystems use for their function names:
static void __init mitigation_select_spectre_v1(void);
static void __init mitigation_enable_spectre_v1(void);
static void __init mitigation_select_spectre_v2(void);
static void __init mitigation_select_retbleed(void);
static void __init mitigation_update_retbleed(void);
static void __init mitigation_enable_retbleed(void);
static void __init mitigation_select_spectre_v2_user(void);
static void __init mitigation_update_spectre_v2_user(void);
static void __init mitigation_enable_spectre_v2_user(void);
static void __init mitigation_select_ssb(void);
static void __init mitigation_enable_ssb(void);
static void __init mitigation_select_l1tf(void);
static void __init mitigation_enable_l1tf(void);
static void __init mitigation_select_mds(void);
static void __init mitigation_update_mds(void);
static void __init mitigation_enable_mds(void);
static void __init mitigation_select_taa(void);
static void __init mitigation_update_taa(void);
static void __init mitigation_enable_taa(void);
static void __init mitigation_select_mmio(void);
static void __init mitigation_update_mmio(void);
static void __init mitigation_enable_mmio(void);
static void __init mitigation_select_rfds(void);
static void __init mitigation_update_rfds(void);
static void __init mitigation_enable_rfds(void);
static void __init mitigation_select_srbds(void);
static void __init mitigation_enable_srbds(void);
static void __init mitigation_select_l1d_flush(void);
static void __init mitigation_select_srso(void);
static void __init mitigation_update_srso(void);
static void __init mitigation_enable_srso(void);
static void __init mitigation_select_gds(void);
static void __init mitigation_enable_gds(void);
static void __init mitigation_select_bhi(void);
static void __init mitigation_update_bhi(void);
static void __init mitigation_enable_bhi(void);
(Note that I changed '_apply' to '_enable', to get three 6-letter verbs. ;-)
We already do this for the Kconfig knobs:
CONFIG_MITIGATION_PAGE_TABLE_ISOLATION=y
CONFIG_MITIGATION_RETPOLINE=y
CONFIG_MITIGATION_RETHUNK=y
CONFIG_MITIGATION_UNRET_ENTRY=y
CONFIG_MITIGATION_CALL_DEPTH_TRACKING=y
CONFIG_MITIGATION_IBPB_ENTRY=y
CONFIG_MITIGATION_IBRS_ENTRY=y
CONFIG_MITIGATION_SRSO=y
CONFIG_MITIGATION_SLS=y
CONFIG_MITIGATION_GDS=y
CONFIG_MITIGATION_RFDS=y
CONFIG_MITIGATION_SPECTRE_BHI=y
CONFIG_MITIGATION_MDS=y
CONFIG_MITIGATION_TAA=y
CONFIG_MITIGATION_MMIO_STALE_DATA=y
CONFIG_MITIGATION_L1TF=y
CONFIG_MITIGATION_RETBLEED=y
CONFIG_MITIGATION_SPECTRE_V1=y
CONFIG_MITIGATION_SPECTRE_V2=y
CONFIG_MITIGATION_SRBDS=y
CONFIG_MITIGATION_SSB=y
and in particular when these functions are used in blocks (as they
often are), it looks much cleaner and more organized:
# Before:
/* Select the proper CPU mitigations before patching alternatives: */
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
retbleed_select_mitigation();
spectre_v2_user_select_mitigation();
ssb_select_mitigation();
l1tf_select_mitigation();
mds_select_mitigation();
taa_update_mitigation();
taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
srso_select_mitigation();
gds_select_mitigation();
bhi_select_mitigation();
# After:
/* Select the proper CPU mitigations before patching alternatives: */
mitigation_select_spectre_v1();
mitigation_select_spectre_v2();
mitigation_select_retbleed();
mitigation_select_spectre_v2_user();
mitigation_select_ssb();
mitigation_select_l1tf();
mitigation_select_mds();
mitigation_update_taa();
mitigation_select_taa();
mitigation_select_mmio();
mitigation_select_rfds();
mitigation_select_srbds();
mitigation_select_l1d_flush();
mitigation_select_srso();
mitigation_select_gds();
mitigation_select_bhi();
Right?
Bonus quiz: I've snuck a subtle bug into the code sequence above. In
which block is it easier to find visually? :-)
Thanks,
Ingo
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
2025-04-18 16:17 ` [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation David Kaplan
@ 2025-04-18 20:42 ` Borislav Petkov
2025-04-20 21:00 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-18 20:42 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:06AM -0500, David Kaplan wrote:
> @@ -284,6 +314,9 @@ enum rfds_mitigations {
> static enum rfds_mitigations rfds_mitigation __ro_after_init =
> IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
>
> +/* Set if any of MDS/TAA/MMIO/RFDS are going to enable VERW. */
> +static bool verw_mitigation_selected __ro_after_init;
> +
Yeah, pls pull that one up - see diff at the end.
> static void __init mds_select_mitigation(void)
> {
> if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
> @@ -294,12 +327,34 @@ static void __init mds_select_mitigation(void)
> if (mds_mitigation == MDS_MITIGATION_AUTO)
> mds_mitigation = MDS_MITIGATION_FULL;
>
> + if (mds_mitigation == MDS_MITIGATION_OFF)
> + return;
> +
> + verw_mitigation_selected = true;
> +}
> +
> +static void __init mds_update_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> + return;
Can we simply do
if (mds_mitigation == MDS_MITIGATION_OFF)
return;
here?
We already checked the X86_BUG and cpu_mitigations_off() in the select
function.
> +
> + /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
A version of that comment is already over verw_mitigation_selected's
definition.
> + if (verw_mitigation_selected)
> + mds_mitigation = MDS_MITIGATION_FULL;
So we have this here now:
if (mds_mitigation == MDS_MITIGATION_OFF)
return;
if (verw_mitigation_selected)
mds_mitigation = MDS_MITIGATION_FULL;
or what you have:
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
return;
/* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
if (verw_mitigation_selected)
mds_mitigation = MDS_MITIGATION_FULL;
Now, if the CPU is not affected by MDS, this second branch won't ever get set
because we will return earlier.
Which then means that "If TAA, MMIO, or RFDS are being mitigated, MDS gets
mitigated too" is not really true.
IOW, I'm wondering if this would be the more fitting order:
static void __init mds_update_mitigation(void)
{
if (verw_mitigation_selected)
mds_mitigation = MDS_MITIGATION_FULL;
if (mds_mitigation == MDS_MITIGATION_OFF)
return;
I.e., if *any* mitigation did set verw_mitigation_selected, even if the CPU is
not affected by MDS, it must set mds_mitigation to FULL too.
Hmmm?
---
All of the changes ontop:
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4295502ea082..61b9aaea8d09 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -87,6 +87,9 @@ static DEFINE_MUTEX(spec_ctrl_mutex);
void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;
+/* Set if any of MDS/TAA/MMIO/RFDS are going to enable VERW. */
+static bool verw_mitigation_selected __ro_after_init;
+
/* Update SPEC_CTRL MSR and its cached copy unconditionally */
static void update_spec_ctrl(u64 val)
{
@@ -314,9 +317,6 @@ enum rfds_mitigations {
static enum rfds_mitigations rfds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
-/* Set if any of MDS/TAA/MMIO/RFDS are going to enable VERW. */
-static bool verw_mitigation_selected __ro_after_init;
-
static void __init mds_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
@@ -324,24 +324,23 @@ static void __init mds_select_mitigation(void)
return;
}
- if (mds_mitigation == MDS_MITIGATION_AUTO)
- mds_mitigation = MDS_MITIGATION_FULL;
-
if (mds_mitigation == MDS_MITIGATION_OFF)
return;
+ if (mds_mitigation == MDS_MITIGATION_AUTO)
+ mds_mitigation = MDS_MITIGATION_FULL;
+
verw_mitigation_selected = true;
}
static void __init mds_update_mitigation(void)
{
- if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
- return;
-
- /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
if (verw_mitigation_selected)
mds_mitigation = MDS_MITIGATION_FULL;
+ if (mds_mitigation == MDS_MITIGATION_OFF)
+ return;
+
if (mds_mitigation == MDS_MITIGATION_FULL) {
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply related [flat|nested] 65+ messages in thread
* Re: [PATCH v5 00/16] Attack vector controls (part 1)
2025-04-18 20:03 ` [PATCH v5 00/16] Attack vector controls (part 1) Ingo Molnar
@ 2025-04-18 21:33 ` Borislav Petkov
2025-04-22 9:46 ` Ingo Molnar
0 siblings, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-18 21:33 UTC (permalink / raw)
To: Ingo Molnar
Cc: David Kaplan, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Fri, Apr 18, 2025 at 10:03:42PM +0200, Ingo Molnar wrote:
> /* Select the proper CPU mitigations before patching alternatives: */
> mitigation_select_spectre_v1();
> mitigation_select_spectre_v2();
> mitigation_select_retbleed();
> mitigation_select_spectre_v2_user();
> mitigation_select_ssb();
> mitigation_select_l1tf();
> mitigation_select_mds();
> mitigation_update_taa();
> mitigation_select_taa();
> mitigation_select_mmio();
> mitigation_select_rfds();
> mitigation_select_srbds();
> mitigation_select_l1d_flush();
> mitigation_select_srso();
> mitigation_select_gds();
> mitigation_select_bhi();
The bad side of that is that you have a whole set of letters
- "mitigation_select" - before the *actual* name which is the only thing one
is interested in. With the vectors, one is now interested in the operation too
- select, update or apply.
I'd make *which* mitigation is a lot more visible instead of that useless
amassing of letters - especially since all those functions are internal.
spectre_v1_select()
spectre_v2_select()
...
then
...
spectre_v1_update()
...
mmio_update()
and then
...
srso_apply()
l1tf_apply()
...
and so on. Short, sweet, not a lot of bla, that useless "mitigation"
regurgitating is gone and the code is readable again.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation
2025-04-18 16:17 ` [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation David Kaplan
@ 2025-04-19 12:36 ` Borislav Petkov
2025-04-20 21:03 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-19 12:36 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:07AM -0500, David Kaplan wrote:
> @@ -394,6 +399,11 @@ static const char * const taa_strings[] = {
> [TAA_MITIGATION_TSX_DISABLED] = "Mitigation: TSX disabled",
> };
>
> +static bool __init taa_vulnerable(void)
> +{
> + return boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM);
> +}
> +
> static void __init taa_select_mitigation(void)
> {
> if (!boot_cpu_has_bug(X86_BUG_TAA)) {
Shouldn't you use !taa_vulnerable() here directly too, since we're introducing
it as a helper?
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
2025-04-18 20:42 ` Borislav Petkov
@ 2025-04-20 21:00 ` Kaplan, David
2025-04-22 8:19 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Kaplan, David @ 2025-04-20 21:00 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Friday, April 18, 2025 3:43 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Fri, Apr 18, 2025 at 11:17:06AM -0500, David Kaplan wrote:
> > @@ -284,6 +314,9 @@ enum rfds_mitigations { static enum
> > rfds_mitigations rfds_mitigation __ro_after_init =
> > IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO
> :
> > RFDS_MITIGATION_OFF;
> >
> > +/* Set if any of MDS/TAA/MMIO/RFDS are going to enable VERW. */
> > +static bool verw_mitigation_selected __ro_after_init;
> > +
>
> Yeah, pls pull that one up - see diff at the end.
>
> > static void __init mds_select_mitigation(void) {
> > if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
> > @@ -294,12 +327,34 @@ static void __init mds_select_mitigation(void)
> > if (mds_mitigation == MDS_MITIGATION_AUTO)
> > mds_mitigation = MDS_MITIGATION_FULL;
> >
> > + if (mds_mitigation == MDS_MITIGATION_OFF)
> > + return;
> > +
> > + verw_mitigation_selected = true; }
> > +
> > +static void __init mds_update_mitigation(void) {
> > + if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> > + return;
>
> Can we simply do
>
> if (mds_mitigation == MDS_MITIGATION_OFF)
> return;
>
> here?
>
> We already checked the X86_BUG and cpu_mitigations_off() in the select function.
No, the point of mds_update_mitigation() is to enable the MDS mitigation if
one of the other similar bugs (TAA/MMIO/RFDS) is being mitigated.
So even if mds_mitigation was MDS_MITIGATION_OFF, it might need to change
to something else because one of the other bugs was mitigated.
>
> > +
> > + /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated
> > + too. */
>
> A version of that comment is already over verw_mitigation_selected's definition.
I could remove it here, although I wonder if it's worth keeping given the
confusion above. Or perhaps it can be rephrased to specifically talk about
how MDS gets mitigated even if it happened to be disabled.
>
> > + if (verw_mitigation_selected)
> > + mds_mitigation = MDS_MITIGATION_FULL;
>
> So we have this here now:
>
> if (mds_mitigation == MDS_MITIGATION_OFF)
> return;
>
> if (verw_mitigation_selected)
> mds_mitigation = MDS_MITIGATION_FULL;
>
> or what you have:
>
> if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> return;
>
> /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
> if (verw_mitigation_selected)
> mds_mitigation = MDS_MITIGATION_FULL;
>
>
>
> Now, if the CPU is not affected by MDS, this second branch won't ever get set
> because we will return earlier.
>
> Which then means that "If TAA, MMIO, or RFDS are being mitigated, MDS gets
> mitigated too" is not really true.
>
> IOW, I'm wondering if this would be the more fitting order:
>
> static void __init mds_update_mitigation(void) {
> if (verw_mitigation_selected)
> mds_mitigation = MDS_MITIGATION_FULL;
>
> if (mds_mitigation == MDS_MITIGATION_OFF)
> return;
>
> I.e., if *any* mitigation did set verw_mitigation_selected, even if the CPU is not
> affected by MDS, it must set mds_mitigation to FULL too.
>
> Hmmm?
>
I'm not sure this is right; it certainly diverges from upstream, where MDS
is only marked as mitigated if the CPU is actually vulnerable to MDS.
I also think it generally does not make sense to mark a bug as mitigated
if the CPU isn't vulnerable (it seems to increase the risk of future bugs
in the logic).
Thanks
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation
2025-04-19 12:36 ` Borislav Petkov
@ 2025-04-20 21:03 ` Kaplan, David
2025-04-22 8:56 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Kaplan, David @ 2025-04-20 21:03 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Saturday, April 19, 2025 7:37 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation
>
>
> On Fri, Apr 18, 2025 at 11:17:07AM -0500, David Kaplan wrote:
> > @@ -394,6 +399,11 @@ static const char * const taa_strings[] = {
> > [TAA_MITIGATION_TSX_DISABLED] = "Mitigation: TSX disabled",
> > };
> >
> > +static bool __init taa_vulnerable(void) {
> > + return boot_cpu_has_bug(X86_BUG_TAA) &&
> > +boot_cpu_has(X86_FEATURE_RTM); }
> > +
> > static void __init taa_select_mitigation(void) {
> > if (!boot_cpu_has_bug(X86_BUG_TAA)) {
>
> Shouldn't you use !taa_vulnerable() here directly too, since we're introducing it as a
> helper?
>
No, because taa_vulnerable() requires both X86_BUG_TAA and X86_FEATURE_RTM.
In taa_select_mitigation() there is a difference depending on which of
these doesn't exist, which sets the mitigation to either OFF (if
unaffected) or TSX_DISABLED (if no RTM).
taa_vulnerable() is used, however, in taa_update_mitigation().
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 00/16] Attack vector controls (part 1)
2025-04-18 16:17 [PATCH v5 00/16] Attack vector controls (part 1) David Kaplan
` (16 preceding siblings ...)
2025-04-18 20:03 ` [PATCH v5 00/16] Attack vector controls (part 1) Ingo Molnar
@ 2025-04-22 5:22 ` Josh Poimboeuf
17 siblings, 0 replies; 65+ messages in thread
From: Josh Poimboeuf @ 2025-04-22 5:22 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:05AM -0500, David Kaplan wrote:
> This is an updated version of the first half of the attack vector series
> which focuses on restructuring arch/x86/kernel/cpu/bugs.c.
>
> For more info on the attack vector series, please see v4 at
> https://lore.kernel.org/all/20250310164023.779191-1-david.kaplan@amd.com/.
>
> These patches restructure the existing mitigation selection logic to use a
> uniform set of functions. First, the "select" function is called for each
> mitigation to select an appropriate mitigation. Unless a mitigation is
> explicitly selected or disabled with a command line option, the default
> mitigation is AUTO and the "select" function will then choose the best
> mitigation. After the "select" function is called for each mitigation,
> some mitigations define an "update" function which can be used to update
> the selection, based on the choices made by other mitigations. Finally,
> the "apply" function is called which enables the chosen mitigation.
>
> This structure simplifies the mitigation control logic, especially when
> there are dependencies between multiple vulnerabilities.
>
> This is mostly code restructuring without functional changes, except where
> noted.
>
> Compared to v4 this only includes bug fixes/cleanup.
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
--
Josh
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
2025-04-20 21:00 ` Kaplan, David
@ 2025-04-22 8:19 ` Borislav Petkov
2025-04-22 14:32 ` Kaplan, David
0 siblings, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-22 8:19 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Sun, Apr 20, 2025 at 09:00:56PM +0000, Kaplan, David wrote:
> I'm not sure this is right, it certainly diverges from upstream where
> mds is only marked as mitigated if the CPU is actually vulnerable to
> mds. I also think that imo it generally does not make sense to mark
> a bug as mitigated if the CPU isn't vulnerable (seems to increase risk
> of future bugs in the logic).
Hmm, it still looks weird to me. So let's imagine the CPU is NOT
affected by MDS. The select function will leave it to OFF.
Then, some other select function will set verw_mitigation_selected.
Now, the mds_update_mitigation() comes in, X86_BUG_MDS is still NOT set
so we leave mds_mitigation to OFF even though it *technically* gets
mitigated?
I guess the reporting aspect does make sense - we don't want to start
reporting MDS-unaffected CPUs as being MDS mitigated because they're not
- not really. We just use their mitigation to mitigate other vulns.
Then this comment which explains the logic of verw_mitigation_selected:
/* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
should probably say that if the CPU is affected by MDS *in any way*
- the BUG bit is set - then it gets full mitigation.
And this should be the case for all inter-related VERW mitigations: if
the CPU is in any way affected, it gets mitigated too. If it is not,
then it gets only *reported* that it is not affected but the mitigation
technique can be used for others.
Does that make sense?
I'm basically thinking out loud here, trying to explain (to myself
mostly:)) how this verw_mitigation_selected is going to be employed.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation
2025-04-20 21:03 ` Kaplan, David
@ 2025-04-22 8:56 ` Borislav Petkov
0 siblings, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-22 8:56 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Sun, Apr 20, 2025 at 09:03:25PM +0000, Kaplan, David wrote:
> No, because taa_vulnerable() requires both X86_BUG_TAA and X86_FEATURE_RTM.
>
> In taa_select_mitigation() there is a difference depending on which of these doesn't exist which sets the mitigation to either OFF (if unaffected) or TSX_DISABLED (if no RTM).
>
Yah, another mis-designed thing from back then. If RTM is disabled, then
TAA is mitigated. Period.
And us making how a vuln is mitigated into a separate thing is
unnecessary complication IMO. IOW, TAA_MITIGATION_TSX_DISABLED shouldn't
have been done, in hindsight.
But whatever, another topic.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 00/16] Attack vector controls (part 1)
2025-04-18 21:33 ` Borislav Petkov
@ 2025-04-22 9:46 ` Ingo Molnar
2025-04-22 13:59 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Ingo Molnar @ 2025-04-22 9:46 UTC (permalink / raw)
To: Borislav Petkov
Cc: David Kaplan, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
* Borislav Petkov <bp@alien8.de> wrote:
> On Fri, Apr 18, 2025 at 10:03:42PM +0200, Ingo Molnar wrote:
> > /* Select the proper CPU mitigations before patching alternatives: */
> > mitigation_select_spectre_v1();
> > mitigation_select_spectre_v2();
> > mitigation_select_retbleed();
> > mitigation_select_spectre_v2_user();
> > mitigation_select_ssb();
> > mitigation_select_l1tf();
> > mitigation_select_mds();
> > mitigation_update_taa();
> > mitigation_select_taa();
> > mitigation_select_mmio();
> > mitigation_select_rfds();
> > mitigation_select_srbds();
> > mitigation_select_l1d_flush();
> > mitigation_select_srso();
> > mitigation_select_gds();
> > mitigation_select_bhi();
>
> The bad side of that is that you have a whole set of letters
> - "mitigation_select" - before the *actual* name which is the only thing one
> is interested in. With the vectors, one is now interested in the operation too
> - select, update or apply.
I have three counter-arguments:
1)
The above pattern is not a big problem really, as the human brain has
no trouble ignoring well-structured syntactic repetitions on the left
side and will focus on the right side column.
Unlike the current status quo, which your reply didn't quote, so I'll
quote it again:
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init md_clear_update_mitigation(void);
static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
Which is far more visually messy.
2)
As to shortening the function names to reduce (but not eliminate ...)
the mess:
That extra mitigation_ prefix or _mitigation postfix might sound like
repetitious when used in mass-calls like above - but it's really useful
when reading the actual definitions:
> Short, sweet,
... there's such a thing as too short, too sweet, too ambiguous, which
is why we ended up with the current name of gds_select_mitigation() et
al to begin with.
But let's see an example. Consider:
static void __init gds_select(void)
{
u64 mcu_ctrl;
What do I select? Some GDS detail? Or the main mitigation itself?
Nothing really tells me.
While with:
static void __init mitigation_select_gds(void)
{
u64 mcu_ctrl;
It's immediately clear that this is the main function that selects the
GDS mitigation.
3)
A proper namespace also makes it *much* easier to grep for specific
primitives.
With your suggested 'gds_select()' naming, if I want to search for all
gds_ primitives, I get:
starship:~/tip> git grep gds_ arch/x86/kernel/cpu/bugs.c
arch/x86/kernel/cpu/bugs.c:static void __init gds_select_mitigation(void);
arch/x86/kernel/cpu/bugs.c: gds_select_mitigation();
arch/x86/kernel/cpu/bugs.c:enum gds_mitigations {
arch/x86/kernel/cpu/bugs.c:static enum gds_mitigations gds_mitigation __ro_after_init =
arch/x86/kernel/cpu/bugs.c:static const char * const gds_strings[] = {
arch/x86/kernel/cpu/bugs.c:bool gds_ucode_mitigated(void)
arch/x86/kernel/cpu/bugs.c: return (gds_mitigation == GDS_MITIGATION_FULL ||
arch/x86/kernel/cpu/bugs.c: gds_mitigation == GDS_MITIGATION_FULL_LOCKED);
arch/x86/kernel/cpu/bugs.c:EXPORT_SYMBOL_GPL(gds_ucode_mitigated);
arch/x86/kernel/cpu/bugs.c:void update_gds_msr(void)
arch/x86/kernel/cpu/bugs.c: switch (gds_mitigation) {
arch/x86/kernel/cpu/bugs.c:static void __init gds_select_mitigation(void)
arch/x86/kernel/cpu/bugs.c: gds_mitigation = GDS_MITIGATION_HYPERVISOR;
arch/x86/kernel/cpu/bugs.c: gds_mitigation = GDS_MITIGATION_OFF;
arch/x86/kernel/cpu/bugs.c: if (gds_mitigation == GDS_MITIGATION_FORCE) {
arch/x86/kernel/cpu/bugs.c: * here rather than in update_gds_msr()
arch/x86/kernel/cpu/bugs.c: gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
arch/x86/kernel/cpu/bugs.c: if (gds_mitigation == GDS_MITIGATION_FORCE)
arch/x86/kernel/cpu/bugs.c: gds_mitigation = GDS_MITIGATION_FULL;
arch/x86/kernel/cpu/bugs.c: if (gds_mitigation == GDS_MITIGATION_OFF)
arch/x86/kernel/cpu/bugs.c: * but others are then update_gds_msr() will WARN() of the state
arch/x86/kernel/cpu/bugs.c: * mismatch. If the boot CPU is locked update_gds_msr() will
arch/x86/kernel/cpu/bugs.c: gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
arch/x86/kernel/cpu/bugs.c: update_gds_msr();
arch/x86/kernel/cpu/bugs.c: pr_info("%s\n", gds_strings[gds_mitigation]);
arch/x86/kernel/cpu/bugs.c:static int __init gds_parse_cmdline(char *str)
arch/x86/kernel/cpu/bugs.c: gds_mitigation = GDS_MITIGATION_OFF;
arch/x86/kernel/cpu/bugs.c: gds_mitigation = GDS_MITIGATION_FORCE;
arch/x86/kernel/cpu/bugs.c:early_param("gather_data_sampling", gds_parse_cmdline);
arch/x86/kernel/cpu/bugs.c:static ssize_t gds_show_state(char *buf)
arch/x86/kernel/cpu/bugs.c: return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]);
arch/x86/kernel/cpu/bugs.c: return gds_show_state(buf);
Or, if I limit this to function calls only:
arch/x86/kernel/cpu/bugs.c:static void __init gds_select_mitigation(void);
arch/x86/kernel/cpu/bugs.c: gds_select_mitigation();
arch/x86/kernel/cpu/bugs.c:bool gds_ucode_mitigated(void)
arch/x86/kernel/cpu/bugs.c:void update_gds_msr(void)
arch/x86/kernel/cpu/bugs.c:static void __init gds_select_mitigation(void)
arch/x86/kernel/cpu/bugs.c: * here rather than in update_gds_msr()
arch/x86/kernel/cpu/bugs.c: * but others are then update_gds_msr() will WARN() of the state
arch/x86/kernel/cpu/bugs.c: * mismatch. If the boot CPU is locked update_gds_msr() will
arch/x86/kernel/cpu/bugs.c: update_gds_msr();
arch/x86/kernel/cpu/bugs.c:static int __init gds_parse_cmdline(char *str)
arch/x86/kernel/cpu/bugs.c:static ssize_t gds_show_state(char *buf)
arch/x86/kernel/cpu/bugs.c: return gds_show_state(buf);
Versus a targeted, obvious, intuitive search for all mitigation_
functions related to GDS:
starship:~/tip> git grep -E 'mitigation_.*_gds' arch/x86/kernel/cpu/bugs.c
arch/x86/kernel/cpu/bugs.c:static void __init mitigation_select_gds(void);
arch/x86/kernel/cpu/bugs.c: mitigation_select_gds();
arch/x86/kernel/cpu/bugs.c:static void __init mitigation_select_gds(void)
Because a proper namespace is far easier to search.
Hierarchical lexicographic organization of function names is basically
a code organization 101 concept, and I didn't think it would be
particularly controversial. :-)
Thanks,
Ingo
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 00/16] Attack vector controls (part 1)
2025-04-22 9:46 ` Ingo Molnar
@ 2025-04-22 13:59 ` Borislav Petkov
0 siblings, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-22 13:59 UTC (permalink / raw)
To: Ingo Molnar
Cc: David Kaplan, Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Tue, Apr 22, 2025 at 11:46:33AM +0200, Ingo Molnar wrote:
> The above pattern is not a big problem really, as the human brain has
> no trouble ignoring well-structured syntactic repetitions on the left
> side and will focus on the right side column.
>
> Unlike the current status quo, which your reply didn't quote, so I'll
> quote it again:
>
> static void __init spectre_v1_select_mitigation(void);
> static void __init spectre_v2_select_mitigation(void);
> static void __init retbleed_select_mitigation(void);
> static void __init spectre_v2_user_select_mitigation(void);
> static void __init ssb_select_mitigation(void);
> static void __init l1tf_select_mitigation(void);
> static void __init mds_select_mitigation(void);
> static void __init md_clear_update_mitigation(void);
> static void __init md_clear_select_mitigation(void);
> static void __init taa_select_mitigation(void);
> static void __init mmio_select_mitigation(void);
> static void __init srbds_select_mitigation(void);
> static void __init l1d_flush_select_mitigation(void);
> static void __init srso_select_mitigation(void);
> static void __init gds_select_mitigation(void);
>
> Which is far more visually messy.
That's when those are written in a block together - there the human
brain can selectively ignore.
I mean when the functions are called in succession. See
cpu_select_mitigations(). All this function does is select mitigations.
So there's no need to state the selection of every single mitigation.
> What do I select? Some GDS detail? Or the main mitigation itself?
> Nothing really tells me.
We're going to have _select(), _update() and _apply() functions per
mitigation. And this will be documented at the top of bugs.c.
In the confines of this file, those functions are special and when you
select, update or apply something in bugs.c, it should always refer to
a mitigation.
We can make that decision here, in that file, for the sanity of everyone
who's looking at it.
> While with:
>
> static void __init mitigation_select_gds(void)
> {
> u64 mcu_ctrl;
>
> It's immediately clear that this is the main function that selects the
> GDS mitigation.
Sorry:
$ git grep -o -i mitigation arch/x86/kernel/cpu/bugs.c | wc -l
714
That's just madness: this file has waaaaay too many "mitigation"s and it
impairs the reading. *Especially* if this file is *all* *about*
mitigations and only about that.
The new scheme is going to tip those scales over 1K...
> 3)
>
> A proper namespace also makes it *much* easier to grep for specific
> primitives.
>
> With your suggested 'gds_select()' naming, if I want to search for all
> gds_ primitives, I get:
Sorry, this is a bad example: if you want to search for a static
function's uses in the same file, you simply go in your favorite editor
and search for "word-under-cursor".
> Hierarchical lexicographic organization of function names is basically
> a code organization 101 concept, and I didn't think it would be
> particularly controversial. :-)
And yet the current naming ain't optimal, IMO, due to the too-frequent
regurgitation of "mitigation". And that's particularly a problem if
it impairs reading of the code - lexicographic organization then is
secondary.
And as said before, you don't need a "mitigation_" prefix for static
functions.
And reading that code properly is the most important thing. *Especially*
*that* code.
So the goal of the bikeshedding here should be optimal reading. I.e.,
what function and variable naming allows for an optimal reading of the
code.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
2025-04-22 8:19 ` Borislav Petkov
@ 2025-04-22 14:32 ` Kaplan, David
2025-04-22 17:25 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Kaplan, David @ 2025-04-22 14:32 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Tuesday, April 22, 2025 3:19 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Sun, Apr 20, 2025 at 09:00:56PM +0000, Kaplan, David wrote:
> > I'm not sure this is right, it certainly diverges from upstream where
> > mds is only marked as mitigated if the CPU is actually vulnerable to
> > mds. I also think that imo it generally does not make sense to mark a
> > bug as mitigated if the CPU isn't vulnerable (seems to increase risk
> > of future bugs in the logic).
>
> Hmm, it still looks weird to me. So let's imagine the CPU is NOT affected by MDS.
> The select function will leave it to OFF.
>
> Then, some other select function will set verw_mitigation_selected.
>
> Now, the mds_update_mitigation() comes in, X86_BUG_MDS is still NOT set so
> we leave mds_mitigation to OFF even though it *technically* gets mitigated?
>
> I guess the reporting aspect does make sense - we don't want to start reporting
> MDS-unaffected CPUs as being MDS mitigated because they're not
> - not really. We just use their mitigation to mitigate other vulns.
>
> Then this comment which explains the logic of verw_mitigation_selected:
>
> /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
>
> should probably say that if the CPU is affected by MDS *in any way*
> - the BUG bit is set - then it gets full mitigation.
>
> And this should be the case for all inter-related VERW mitigations: if the CPU is in
> any way affected, it gets mitigated too. If it is not, then it gets only *reported* that it
> is not affected but the mitigation technique can be used for others.
>
> Does that make sense?
>
I think that's correct, although I'd argue the code makes that rather obvious because mds_update_mitigation() immediately returns if the CPU is not affected by MDS. So you only get an mds mitigation if you are affected by the BUG bit.
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
2025-04-22 14:32 ` Kaplan, David
@ 2025-04-22 17:25 ` Borislav Petkov
0 siblings, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-22 17:25 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Apr 22, 2025 at 02:32:07PM +0000, Kaplan, David wrote:
>
> > -----Original Message-----
> > From: Borislav Petkov <bp@alien8.de>
> > Sent: Tuesday, April 22, 2025 3:19 AM
> > To: Kaplan, David <David.Kaplan@amd.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> > Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> > <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> > Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> > <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation
> >
> > On Sun, Apr 20, 2025 at 09:00:56PM +0000, Kaplan, David wrote:
> > > I'm not sure this is right, it certainly diverges from upstream where
> > > mds is only marked as mitigated if the CPU is actually vulnerable to
> > > mds. I also think that imo it generally does not make sense to mark a
> > > bug as mitigated if the CPU isn't vulnerable (seems to increase risk
> > > of future bugs in the logic).
> >
> > Hmm, it still looks weird to me. So let's imagine the CPU is NOT affected by MDS.
> > The select function will leave it to OFF.
> >
> > Then, some other select function will set verw_mitigation_selected.
> >
> > Now, the mds_update_mitigation() comes in, X86_BUG_MDS is still NOT set so
> > we leave mds_mitigation to OFF even though it *technically* gets mitigated?
> >
> > I guess the reporting aspect does make sense - we don't want to start reporting
> > MDS-unaffected CPUs as being MDS mitigated because they're not
> > - not really. We just use their mitigation to mitigate other vulns.
> >
> > Then this comment which explains the logic of verw_mitigation_selected:
> >
> > /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
> >
> > should probably say that if the CPU is affected by MDS *in any way*
> > - the BUG bit is set - then it gets full mitigation.
> >
> > And this should be the case for all inter-related VERW mitigations: if the CPU is in
> > any way affected, it gets mitigated too. If it is not, then it gets only *reported* that it
> > is not affected but the mitigation technique can be used for others.
> >
> > Does that make sense?
> >
>
> I think that's correct, although I'd argue the code makes that rather obvious because mds_update_mitigation() immediately returns if the CPU is not affected by MDS. So you only get an mds mitigation if you are affected by the BUG bit.
Right, ok.
I'll add a link to this subthread when applying so that we have some
reference to this.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
2025-04-18 16:17 ` [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation David Kaplan
@ 2025-04-24 20:19 ` Borislav Petkov
2025-04-24 20:31 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-24 20:19 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:08AM -0500, David Kaplan wrote:
> +static void __init mmio_apply_mitigation(void)
> +{
> + if (mmio_mitigation == MMIO_MITIGATION_OFF)
> + return;
>
> /*
> - * X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
> - * mitigations, disable KVM-only mitigation in that case.
> + * Only enable the VMM mitigation if the CPU buffer clear mitigation is
> + * not being used.
> */
> - if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
> + if (verw_mitigation_selected) {
> + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> static_branch_disable(&cpu_buf_vm_clear);
> - else
> + } else {
> static_branch_enable(&cpu_buf_vm_clear);
> + }
Sorry, but I'm still not happy about this.
After this patch, we have:
/*
* Enable CPU buffer clear mitigation for host and VMM, if also affected
* by MDS or TAA.
*/
if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
verw_mitigation_selected = true;
in the select function.
The comment is wrong. The code does: enable the VERW mitigation for
MMIO if affected by MDS or TAA. verw_mitigation_selected doesn't have
any bearing on whether this should be a host or VMM mitigation - as its
name says, a VERW mitigation has been selected.
Then in the apply function:
/*
* Only enable the VMM mitigation if the CPU buffer clear mitigation is
* not being used.
*/
if (verw_mitigation_selected) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
static_branch_disable(&cpu_buf_vm_clear);
} else {
static_branch_enable(&cpu_buf_vm_clear);
}
Comment is again wrong. verw_mitigation_selected doesn't mean the CPU
buffer clear mitigation is not being used.
Yes yes, it boils down to the same thing in the end but reading it is
confusing as hell. verw_mitigation_selected means what its name is: a VERW
mitigation has been selected and nothing else.
Looking at the old code - that you can actually follow:
---
/*
* Enable CPU buffer clear mitigation for host and VMM, if also affected
* by MDS or TAA. Otherwise, enable mitigation for VMM only.
*/
if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
boot_cpu_has(X86_FEATURE_RTM)))
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
/*
* X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
* mitigations, disable KVM-only mitigation in that case.
*/
if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
static_branch_disable(&mmio_stale_data_clear);
else
static_branch_enable(&mmio_stale_data_clear);
---
because verw_mitigation_selected didn't exist.
And maybe it shouldn't be used here because that variable simply doesn't
fit here with its meaning.
Now, if this variable and the static key were called:
if (full_verw_mitigation_selected) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
static_branch_disable(&clear_cpu_buf_vm);
} else {
static_branch_enable(&clear_cpu_buf_vm);
}
then the code makes total sense all of a sudden.
A full VERW mitigation means CLEAR_CPU_BUF, while the VMM only means,
well, clear_cpu_buf_vm_only.
Renaming the var is probably unnecessary churn but you can fix the
comments and still rename the key:
---
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c97ded4d55e5..4a5bd6214508 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -157,8 +157,8 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
* X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
* mitigation is required.
*/
-DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
-EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
+DEFINE_STATIC_KEY_FALSE(clear_cpu_buf_vm);
+EXPORT_SYMBOL_GPL(clear_cpu_buf_vm);
void __init cpu_select_mitigations(void)
{
@@ -528,10 +528,7 @@ static void __init mmio_select_mitigation(void)
if (mmio_mitigation == MMIO_MITIGATION_OFF)
return;
- /*
- * Enable CPU buffer clear mitigation for host and VMM, if also affected
- * by MDS or TAA.
- */
+ /* Enable full VERW mitigation if also affected by MDS or TAA. */
if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
verw_mitigation_selected = true;
}
@@ -568,14 +565,14 @@ static void __init mmio_apply_mitigation(void)
return;
/*
- * Only enable the VMM mitigation if the CPU buffer clear mitigation is
- * not being used.
+ * Full VERW mitigation selection enables host and VMENTER buffer clearing,
+ * otherwise buffer clearing only on VMENTER.
*/
if (verw_mitigation_selected) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
- static_branch_disable(&cpu_buf_vm_clear);
+ static_branch_disable(&clear_cpu_buf_vm);
} else {
- static_branch_enable(&cpu_buf_vm_clear);
+ static_branch_enable(&clear_cpu_buf_vm);
}
/*
@@ -681,7 +678,7 @@ static void __init md_clear_update_mitigation(void)
taa_select_mitigation();
}
/*
- * MMIO_MITIGATION_OFF is not checked here so that cpu_buf_vm_clear
+ * MMIO_MITIGATION_OFF is not checked here so that clear_cpu_buf_vm
* gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
*/
if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
---
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply related [flat|nested] 65+ messages in thread
* RE: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
2025-04-24 20:19 ` Borislav Petkov
@ 2025-04-24 20:31 ` Kaplan, David
2025-04-25 8:09 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Kaplan, David @ 2025-04-24 20:31 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Thursday, April 24, 2025 4:19 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
>
> On Fri, Apr 18, 2025 at 11:17:08AM -0500, David Kaplan wrote:
> > +static void __init mmio_apply_mitigation(void)
> > +{
> > + if (mmio_mitigation == MMIO_MITIGATION_OFF)
> > + return;
> >
> > /*
> > - * X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
> > - * mitigations, disable KVM-only mitigation in that case.
> > + * Only enable the VMM mitigation if the CPU buffer clear mitigation is
> > + * not being used.
> > */
> > - if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
> > + if (verw_mitigation_selected) {
> > + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> > static_branch_disable(&cpu_buf_vm_clear);
> > - else
> > + } else {
> > static_branch_enable(&cpu_buf_vm_clear);
> > + }
>
> Sorry, but I'm still not happy about this.
>
> After this patch, we have:
>
> /*
> * Enable CPU buffer clear mitigation for host and VMM, if also affected
> * by MDS or TAA.
> */
> if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
> verw_mitigation_selected = true;
>
> in the select function.
>
> The comment is wrong. The code does: enable the VERW mitigation for MMIO if
> affected by MDS or TAA. verw_mitigation_selected doesn't have any bearing on
> whether this should be a host or VMM mitigation - as its name says, a VERW
> mitigation has been selected.
verw_mitigation_selected implies that X86_FEATURE_CLEAR_CPU_BUF will be enabled, which does a VERW on kernel/vmm exits.
So I'm not sure the comment is really wrong, but it can be rephrased.
>
> Then in the apply function:
>
> /*
> * Only enable the VMM mitigation if the CPU buffer clear mitigation is
> * not being used.
> */
> if (verw_mitigation_selected) {
> setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> static_branch_disable(&cpu_buf_vm_clear);
> } else {
> static_branch_enable(&cpu_buf_vm_clear);
> }
>
> Comment is again wrong. verw_mitigation_selected doesn't mean the CPU buffer
> clear mitigation is not being used.
But it kind of does. !verw_mitigation_selected means that the X86_FEATURE bit there isn't set. So the VMM-based mitigation (the static branch) is only used if the broader X86_FEATURE_CLEAR_CPU_BUF is not being used.
>
> Yes yes, it boils down to the same thing in the end but reading it is confusing as hell.
> verw_mitigation_selected means what its name is: a VERW mitigation has been
> selected and nothing else.
>
> Renaming the var is probably unnecessary churn but you can fix the comments and
> still rename the key:
>
> ---
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index c97ded4d55e5..4a5bd6214508 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -157,8 +157,8 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
> * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
> * mitigation is required.
> */
> -DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
> -EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
> +DEFINE_STATIC_KEY_FALSE(clear_cpu_buf_vm);
> +EXPORT_SYMBOL_GPL(clear_cpu_buf_vm);
>
> void __init cpu_select_mitigations(void)
> {
> @@ -528,10 +528,7 @@ static void __init mmio_select_mitigation(void)
> if (mmio_mitigation == MMIO_MITIGATION_OFF)
> return;
>
> - /*
> - * Enable CPU buffer clear mitigation for host and VMM, if also affected
> - * by MDS or TAA.
> - */
> + /* Enable full VERW mitigation if also affected by MDS or TAA. */
> if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
> verw_mitigation_selected = true;
> }
> @@ -568,14 +565,14 @@ static void __init mmio_apply_mitigation(void)
> return;
>
> /*
> - * Only enable the VMM mitigation if the CPU buffer clear mitigation is
> - * not being used.
> + * Full VERW mitigation selection enables host and VMENTER buffer clearing,
> + * otherwise buffer clearing only on VMENTER.
> */
> if (verw_mitigation_selected) {
> setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> - static_branch_disable(&cpu_buf_vm_clear);
> + static_branch_disable(&clear_cpu_buf_vm);
> } else {
> - static_branch_enable(&cpu_buf_vm_clear);
> + static_branch_enable(&clear_cpu_buf_vm);
> }
>
> /*
> @@ -681,7 +678,7 @@ static void __init md_clear_update_mitigation(void)
> taa_select_mitigation();
> }
> /*
> - * MMIO_MITIGATION_OFF is not checked here so that cpu_buf_vm_clear
> + * MMIO_MITIGATION_OFF is not checked here so that clear_cpu_buf_vm
> * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
> */
> if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
> ---
>
I'm ok with this patch, as long as 'full VERW mitigation' is considered a clear enough term. I think the updated comment in the apply function does explain what that means, so if that's good enough I'm ok.
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
2025-04-24 20:31 ` Kaplan, David
@ 2025-04-25 8:09 ` Borislav Petkov
2025-04-25 13:28 ` Kaplan, David
0 siblings, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-25 8:09 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Thu, Apr 24, 2025 at 08:31:25PM +0000, Kaplan, David wrote:
> verw_mitigation_selected implies that X86_FEATURE_CLEAR_CPU_BUF will
> be enabled, which does a VERW on kernel/vmm exits.
Does it imply that though? As explained, it simply says that *a* VERW
mitigation has been selected.
And only in the MMIO case which mandates that both spots - kernel entry and
VMENTER - should be mitigated, it basically says that this
CLEAR_CPU_BUFFERS macro should be active. And that macro does VERW on
kernel entry and right before VMLAUNCH.
And when the machine is not affected by MDS+TAA, then it enables this
cpu_buf_vm_clear thing which does VERW in C code, a bit earlier before
VMLAUNCH.
> So I'm not sure the comment is really wrong, but it can be rephrased.
Yes please.
> But it kind of does. !verw_mitigation_selected means that the
> X86_FEATURE bit there isn't set. So the VMM-based mitigation (the
> static branch) is only used if the broader X86_FEATURE_CLEAR_CPU_BUF
> is not being used.
Right, except that implication is not fully clear, I think.
> I'm ok with this patch, as long as 'full VERW mitigation' is
> considered a clear enough term. I think the updated comment in the
> apply function does explain what that means, so if that's good enough
> I'm ok.
Right.
So, I did beef up the comments some and renamed the key. Diff ontop of
yours below. How does that look?
---
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 51a677fe9a8d..8bb5740eba7a 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -561,7 +561,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
-DECLARE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
+DECLARE_STATIC_KEY_FALSE(clear_cpu_buf_vm);
extern u16 mds_verw_sel;
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c97ded4d55e5..75eddf4f77d8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -154,11 +154,11 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
/*
* Controls CPU Fill buffer clear before VMenter. This is a subset of
- * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
+ * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when VM-only
* mitigation is required.
*/
-DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
-EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
+DEFINE_STATIC_KEY_FALSE(clear_cpu_buf_vm);
+EXPORT_SYMBOL_GPL(clear_cpu_buf_vm);
void __init cpu_select_mitigations(void)
{
@@ -529,8 +529,11 @@ static void __init mmio_select_mitigation(void)
return;
/*
- * Enable CPU buffer clear mitigation for host and VMM, if also affected
- * by MDS or TAA.
+ * Enable full VERW mitigation if also affected by MDS or TAA.
+ * Full VERW mitigation in the context of the MMIO vuln means
+ * that the X86_FEATURE_CLEAR_CPU_BUF flag enables the VERW
+ * clearing in CLEAR_CPU_BUFFERS both on kernel and also on
+ * guest entry.
*/
if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
verw_mitigation_selected = true;
@@ -568,14 +571,15 @@ static void __init mmio_apply_mitigation(void)
return;
/*
- * Only enable the VMM mitigation if the CPU buffer clear mitigation is
- * not being used.
+ * Full VERW mitigation selection enables host and guest entry
+ * buffer clearing, otherwise buffer clearing only on guest
+ * entry is needed.
*/
if (verw_mitigation_selected) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
- static_branch_disable(&cpu_buf_vm_clear);
+ static_branch_disable(&clear_cpu_buf_vm);
} else {
- static_branch_enable(&cpu_buf_vm_clear);
+ static_branch_enable(&clear_cpu_buf_vm);
}
/*
@@ -681,7 +685,7 @@ static void __init md_clear_update_mitigation(void)
taa_select_mitigation();
}
/*
- * MMIO_MITIGATION_OFF is not checked here so that cpu_buf_vm_clear
+ * MMIO_MITIGATION_OFF is not checked here so that clear_cpu_buf_vm
* gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
*/
if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1547bfacd40f..16bb5ed1e6cf 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7359,13 +7359,13 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
* executed in spite of L1D Flush. This is because an extra VERW
* should not matter much after the big hammer L1D Flush.
*
- * cpu_buf_vm_clear is used when system is not vulnerable to MDS/TAA,
- * and is affected by MMIO Stale Data. In such cases mitigation in only
+ * clear_cpu_buf_vm is used when system is not vulnerable to MDS/TAA,
+ * and is affected by MMIO Stale Data. In such cases mitigation is only
* needed against an MMIO capable guest.
*/
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
- else if (static_branch_unlikely(&cpu_buf_vm_clear) &&
+ else if (static_branch_unlikely(&clear_cpu_buf_vm) &&
kvm_arch_has_assigned_device(vcpu->kvm))
mds_clear_cpu_buffers();
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply related [flat|nested] 65+ messages in thread
* RE: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
2025-04-25 8:09 ` Borislav Petkov
@ 2025-04-25 13:28 ` Kaplan, David
2025-04-26 11:22 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Kaplan, David @ 2025-04-25 13:28 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Friday, April 25, 2025 4:10 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
>
> On Thu, Apr 24, 2025 at 08:31:25PM +0000, Kaplan, David wrote:
> > verw_mitigation_selected implies that X86_FEATURE_CLEAR_CPU_BUF will
> > be enabled, which does a VERW on kernel/vmm exits.
>
> Does it imply that though? As explained, it simply says that *a* VERW mitigation
> has been selected.
>
> And only in the MMIO case which mandates that both spots - kernel entry and
> VMENTER - should be mitigated, it basically says that this
> CLEAR_CPU_BUFFERS macro should be active. And that macro does VERW on
> kernel entry and right before VMLAUNCH.
It was the intent to imply that. If you look at patch 1, 2, and 4 then if verw_mitigation_selected is ever set to true, it means that some mitigation is going to force X86_FEATURE_CLEAR_CPU_BUF.
Maybe the solution here is to clarify the comment above verw_mitigation_selected that it is set if any of those 4 bugs are going to enable X86_FEATURE_CLEAR_CPU_BUF. So it implies that specific VERW-based mitigation.
Or perhaps I could even rename the variable to be 'clear_cpu_buf_selected'?
>
> And when the machine is not affected by MDS+TAA, then it enables this
> cpu_buf_vm_clear thing which does VERW in C code, a bit earlier before
> VMLAUNCH.
>
> > So I'm not sure the comment is really wrong, but it can be rephrased.
>
> Yes please.
>
> > But it kind of does. !verw_mitigation_selected means that the
> > X86_FEATURE bit there isn't set. So the VMM-based mitigation (the
> > static branch) is only used if the broader X86_FEATURE_CLEAR_CPU_BUF
> > is not being used.
>
> Right, except that implication is not fully clear, I think.
>
> > I'm ok with this patch, as long as 'full VERW mitigation' is
> > considered a clear enough term. I think the updated comment in the
> > apply function does explain what that means, so if that's good enough
> > I'm ok.
>
> Right.
>
> So, I did beef up the comments some and renamed the key. Diff ontop of yours
> below. How does that look?
>
I think clarifying what verw_mitigation_selected means is better. When that becomes clear, I think that the existing comments make sense.
Thanks
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation
2025-04-25 13:28 ` Kaplan, David
@ 2025-04-26 11:22 ` Borislav Petkov
0 siblings, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-26 11:22 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On April 25, 2025 4:28:14 PM GMT+03:00, "Kaplan, David" <David.Kaplan@amd.com> wrote:
>It was the intent to imply that. If you look at patch 1, 2, and 4 then if verw_mitigation_selected is ever set to true, it means that some mitigation is going to force X86_FEATURE_CLEAR_CPU_BUF.
Aaaand the other shoe dropped...
>Maybe the solution here is to clarify the comment above verw_mitigation_selected that it is set if any of those 4 bugs are going to enable X86_FEATURE_CLEAR_CPU_BUF. So it implies that specific VERW-based mitigation.
>
>Or perhaps I could even rename the variable to be 'clear_cpu_buf_selected'?
Right, I think doing both would be optimal, as it would make it crystal clear.
>I think clarifying what verw_mitigation_selected means is better. When that becomes clear, I think that the existing comments make sense.
>
Ok, all clear.
But you don't have to resend an updated set - I'll fix that up while applying.
Thx.
--
Sent from a small device: formatting sucks and brevity is inevitable.
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 04/16] x86/bugs: Restructure RFDS mitigation
2025-04-18 16:17 ` [PATCH v5 04/16] x86/bugs: Restructure RFDS mitigation David Kaplan
@ 2025-04-27 15:09 ` Borislav Petkov
2025-04-28 13:42 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-27 15:09 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:09AM -0500, David Kaplan wrote:
> +static bool __init rfds_has_ucode(void)
> +{
> + return (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR);
> +}
Might as well call it what the bit means and then the code reads a bit better:
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index e668ccccd8c7..2705105d9a5e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -630,7 +630,7 @@ static const char * const rfds_strings[] = {
[RFDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
};
-static bool __init rfds_has_ucode(void)
+static inline bool __init verw_clears_cpu_reg_file(void)
{
return (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR);
}
@@ -648,7 +648,7 @@ static void __init rfds_select_mitigation(void)
if (rfds_mitigation == RFDS_MITIGATION_OFF)
return;
- if (rfds_has_ucode())
+ if (verw_clears_cpu_reg_file())
verw_clear_cpu_buf_mitigation_selected = true;
}
@@ -661,7 +661,7 @@ static void __init rfds_update_mitigation(void)
rfds_mitigation = RFDS_MITIGATION_VERW;
if (rfds_mitigation == RFDS_MITIGATION_VERW) {
- if (!rfds_has_ucode())
+ if (!verw_clears_cpu_reg_file())
rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
}
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply related [flat|nested] 65+ messages in thread
* Re: [PATCH v5 09/16] x86/bugs: Allow retbleed=stuff only on Intel
2025-04-18 16:17 ` [PATCH v5 09/16] x86/bugs: Allow retbleed=stuff only on Intel David Kaplan
@ 2025-04-27 15:38 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-27 15:38 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:14AM -0500, David Kaplan wrote:
> The retbleed=stuff mitigation is only applicable for Intel CPUs affected
> by retbleed. If this option is selected for another vendor, print a
> warning and fall back to the AUTO option.
>
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
> arch/x86/kernel/cpu/bugs.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 72e04938fdcb..84d3f6b3d1eb 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -1187,6 +1187,10 @@ static void __init retbleed_select_mitigation(void)
> case RETBLEED_CMD_STUFF:
> if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
> spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
> + if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
> + pr_err("WARNING: retbleed=stuff only supported for Intel CPUs.\n");
> + goto do_cmd_auto;
> + }
Right, the reason it is possible to select this mitigation on other
vendors is purely to be able to experiment with the different mitigation
techniques.
But I've never considered that ability to be particularly useful - and
even if you wanna do that, you might as well hack the kernel too.
So yeah, I guess it is better to not allow nonsensical mitigations.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 04/16] x86/bugs: Restructure RFDS mitigation
2025-04-27 15:09 ` Borislav Petkov
@ 2025-04-28 13:42 ` Kaplan, David
0 siblings, 0 replies; 65+ messages in thread
From: Kaplan, David @ 2025-04-28 13:42 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
Makes sense to me, thanks.
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 10/16] x86/bugs: Restructure retbleed mitigation
2025-04-18 16:17 ` [PATCH v5 10/16] x86/bugs: Restructure retbleed mitigation David Kaplan
@ 2025-04-28 18:59 ` Borislav Petkov
2025-04-28 20:55 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-28 18:59 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:15AM -0500, David Kaplan wrote:
> @@ -187,11 +189,6 @@ void __init cpu_select_mitigations(void)
> /* Select the proper CPU mitigations before patching alternatives: */
> spectre_v1_select_mitigation();
> spectre_v2_select_mitigation();
> - /*
> - * retbleed_select_mitigation() relies on the state set by
> - * spectre_v2_select_mitigation(); specifically it wants to know about
> - * spectre_v2=ibrs.
> - */
> retbleed_select_mitigation();
> /*
> * spectre_v2_user_select_mitigation() relies on the state set by
> @@ -219,12 +216,14 @@ void __init cpu_select_mitigations(void)
> * After mitigations are selected, some may need to update their
> * choices.
> */
> + retbleed_update_mitigation();
Is there any particular reason for the retbleed update function to go first...
> mds_update_mitigation();
> taa_update_mitigation();
> mmio_update_mitigation();
> rfds_update_mitigation();
... before those?
I'm under the assumption that the new scheme would get rid of this magical
ordering requirement between the mitigations...
Your commit message is alluding to that but we need to specify this clearly
for future cleanups/changes here.
> spectre_v1_apply_mitigation();
> + retbleed_apply_mitigation();
This too.
> mds_apply_mitigation();
> taa_apply_mitigation();
> mmio_apply_mitigation();
...
> -do_cmd_auto:
> - case RETBLEED_CMD_AUTO:
> + if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
> + /* Intel mitigation selected in retbleed_update_mitigation() */
> if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
> boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
> if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
> @@ -1212,18 +1187,65 @@ static void __init retbleed_select_mitigation(void)
> else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
> boot_cpu_has(X86_FEATURE_IBPB))
> retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
> + else
> + retbleed_mitigation = RETBLEED_MITIGATION_NONE;
> }
> + }
> +}
I'd flip that outer check in order to save an indentation level here:
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9d6ce4a167be..207a472d1a6e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1182,18 +1182,19 @@ static void __init retbleed_select_mitigation(void)
break;
}
- if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
- /* Intel mitigation selected in retbleed_update_mitigation() */
- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
- if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
- retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
- else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
- boot_cpu_has(X86_FEATURE_IBPB))
- retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
- else
- retbleed_mitigation = RETBLEED_MITIGATION_NONE;
- }
+ if (retbleed_mitigation != RETBLEED_MITIGATION_AUTO)
+ return;
+
+ /* Intel mitigation selected in retbleed_update_mitigation() */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+ if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
+ retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+ else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
+ boot_cpu_has(X86_FEATURE_IBPB))
+ retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+ else
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
}
}
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply related [flat|nested] 65+ messages in thread
* RE: [PATCH v5 10/16] x86/bugs: Restructure retbleed mitigation
2025-04-28 18:59 ` Borislav Petkov
@ 2025-04-28 20:55 ` Kaplan, David
2025-04-29 8:21 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Kaplan, David @ 2025-04-28 20:55 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> On Fri, Apr 18, 2025 at 11:17:15AM -0500, David Kaplan wrote:
> > @@ -187,11 +189,6 @@ void __init cpu_select_mitigations(void)
> > /* Select the proper CPU mitigations before patching alternatives: */
> > spectre_v1_select_mitigation();
> > spectre_v2_select_mitigation();
> > - /*
> > - * retbleed_select_mitigation() relies on the state set by
> > - * spectre_v2_select_mitigation(); specifically it wants to know about
> > - * spectre_v2=ibrs.
> > - */
> > retbleed_select_mitigation();
> > /*
> > * spectre_v2_user_select_mitigation() relies on the state set by
> > @@ -219,12 +216,14 @@ void __init cpu_select_mitigations(void)
> > * After mitigations are selected, some may need to update their
> > * choices.
> > */
> > + retbleed_update_mitigation();
>
> Is there any particular reason for the retbleed update function to go first...
>
> > mds_update_mitigation();
> > taa_update_mitigation();
> > mmio_update_mitigation();
> > rfds_update_mitigation();
>
> ... before those?
It's really just following the order in which the select functions are called, which largely matches the order in the existing code.
>
> I'm under the assumption that the new scheme would get rid of this magical
> ordering requirement between the mitigations...
While this is mostly true, there are still a few dependencies among the update functions, which are clearly noted with comments.
>
> Your commit message is alluding to that but we need to specify this clearly for future
> cleanups/changes here.
>
> > spectre_v1_apply_mitigation();
> > + retbleed_apply_mitigation();
>
> This too.
>
> > mds_apply_mitigation();
> > taa_apply_mitigation();
> > mmio_apply_mitigation();
>
> ...
>
> > -do_cmd_auto:
> > - case RETBLEED_CMD_AUTO:
> > + if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
> > + /* Intel mitigation selected in retbleed_update_mitigation() */
> > if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
> > boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
> > if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
> > @@ -1212,18 +1187,65 @@ static void __init retbleed_select_mitigation(void)
> > else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
> > boot_cpu_has(X86_FEATURE_IBPB))
> > retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
> > + else
> > + retbleed_mitigation = RETBLEED_MITIGATION_NONE;
> > }
> > + }
> > +}
>
> I'd flip that outer check in order to save an indentation level here:
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 9d6ce4a167be..207a472d1a6e 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -1182,18 +1182,19 @@ static void __init retbleed_select_mitigation(void)
> break;
> }
>
> - if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
> - /* Intel mitigation selected in retbleed_update_mitigation() */
> - if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
> - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
> - if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
> - retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
> - else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
> - boot_cpu_has(X86_FEATURE_IBPB))
> - retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
> - else
> - retbleed_mitigation = RETBLEED_MITIGATION_NONE;
> - }
> + if (retbleed_mitigation != RETBLEED_MITIGATION_AUTO)
> + return;
> +
> + /* Intel mitigation selected in retbleed_update_mitigation() */
> + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
> + boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
> + if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
> + retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
> + else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
> + boot_cpu_has(X86_FEATURE_IBPB))
> + retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
> + else
> + retbleed_mitigation = RETBLEED_MITIGATION_NONE;
> }
> }
>
Yeah this makes sense.
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 10/16] x86/bugs: Restructure retbleed mitigation
2025-04-28 20:55 ` Kaplan, David
@ 2025-04-29 8:21 ` Borislav Petkov
0 siblings, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-29 8:21 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Mon, Apr 28, 2025 at 08:55:13PM +0000, Kaplan, David wrote:
> It's really just following the same order as the order that the select
> functions are called, which largely matches the order in existing code.
Oh ok, as long as there's not any undocumented magic in there...
> While this is mostly true, there's still a few dependencies with the update
> functions which are clearly noted with comments.
Ack.
> Yeah this makes sense.
Ok, merged in.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 11/16] x86/bugs: Restructure spectre_v2_user mitigation
2025-04-18 16:17 ` [PATCH v5 11/16] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
@ 2025-04-29 8:47 ` Borislav Petkov
2025-04-29 14:11 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-29 8:47 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:16AM -0500, David Kaplan wrote:
> @@ -217,6 +214,11 @@ void __init cpu_select_mitigations(void)
> * choices.
> */
> retbleed_update_mitigation();
> + /*
> + * spectre_v2_user_update_mitigation() depends on
> + * retbleed_update_mitigation().
> + */
Why aren't you keeping the reason for the dependency from the above comment?
That's important when we need to touch this code again...
> + spectre_v2_user_update_mitigation();
> mds_update_mitigation();
> taa_update_mitigation();
> mmio_update_mitigation();
> @@ -224,6 +226,7 @@ void __init cpu_select_mitigations(void)
>
> spectre_v1_apply_mitigation();
> retbleed_apply_mitigation();
> + spectre_v2_user_apply_mitigation();
> mds_apply_mitigation();
> taa_apply_mitigation();
> mmio_apply_mitigation();
> @@ -1374,6 +1377,8 @@ enum spectre_v2_mitigation_cmd {
> SPECTRE_V2_CMD_IBRS,
> };
>
> +static enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
> +
> enum spectre_v2_user_cmd {
> SPECTRE_V2_USER_CMD_NONE,
> SPECTRE_V2_USER_CMD_AUTO,
> @@ -1412,31 +1417,19 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
> pr_info("spectre_v2_user=%s forced on command line.\n", reason);
> }
>
> -static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
> -
> static enum spectre_v2_user_cmd __init
> spectre_v2_parse_user_cmdline(void)
Lemme unbreak that silly thing while here...
> {
> - enum spectre_v2_user_cmd mode;
> char arg[20];
> int ret, i;
>
> - mode = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ?
> - SPECTRE_V2_USER_CMD_AUTO : SPECTRE_V2_USER_CMD_NONE;
> -
> - switch (spectre_v2_cmd) {
> - case SPECTRE_V2_CMD_NONE:
> + if (cpu_mitigations_off() || !IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2))
> return SPECTRE_V2_USER_CMD_NONE;
> - case SPECTRE_V2_CMD_FORCE:
> - return SPECTRE_V2_USER_CMD_FORCE;
> - default:
> - break;
> - }
>
> ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
> arg, sizeof(arg));
> if (ret < 0)
> - return mode;
> + return SPECTRE_V2_USER_CMD_AUTO;
>
> for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
> if (match_option(arg, ret, v2_user_options[i].option)) {
> @@ -1447,7 +1440,7 @@ spectre_v2_parse_user_cmdline(void)
> }
>
> pr_err("Unknown user space protection option (%s). Switching to default\n", arg);
> - return mode;
> + return SPECTRE_V2_USER_CMD_AUTO;
> }
>
> static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
> @@ -1458,7 +1451,6 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
> static void __init
> spectre_v2_user_select_mitigation(void)
That too.
> {
> - enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
> enum spectre_v2_user_cmd cmd;
Might as well get rid of that one.
> if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
> @@ -1467,48 +1459,65 @@ spectre_v2_user_select_mitigation(void)
> cmd = spectre_v2_parse_user_cmdline();
> switch (cmd) {
> case SPECTRE_V2_USER_CMD_NONE:
> - goto set_mode;
> + return;
> case SPECTRE_V2_USER_CMD_FORCE:
> - mode = SPECTRE_V2_USER_STRICT;
> + spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
> + spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
Those should be aligned at the '=' sign for better readability.
...
IOW, all the changes ontop:
---
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index afea9179acdd..dc75195760ca 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -214,9 +214,11 @@ void __init cpu_select_mitigations(void)
* choices.
*/
retbleed_update_mitigation();
+
/*
* spectre_v2_user_update_mitigation() depends on
- * retbleed_update_mitigation().
+ * retbleed_update_mitigation(), specifically the STIBP
+ * selection is forced for UNRET or IBPB.
*/
spectre_v2_user_update_mitigation();
mds_update_mitigation();
@@ -1422,8 +1424,7 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
pr_info("spectre_v2_user=%s forced on command line.\n", reason);
}
-static enum spectre_v2_user_cmd __init
-spectre_v2_parse_user_cmdline(void)
+static enum spectre_v2_user_cmd __init spectre_v2_parse_user_cmdline(void)
{
char arg[20];
int ret, i;
@@ -1453,29 +1454,25 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
}
-static void __init
-spectre_v2_user_select_mitigation(void)
+static void __init spectre_v2_user_select_mitigation(void)
{
- enum spectre_v2_user_cmd cmd;
-
if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
return;
- cmd = spectre_v2_parse_user_cmdline();
- switch (cmd) {
+ switch (spectre_v2_parse_user_cmdline()) {
case SPECTRE_V2_USER_CMD_NONE:
return;
case SPECTRE_V2_USER_CMD_FORCE:
- spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
break;
case SPECTRE_V2_USER_CMD_AUTO:
case SPECTRE_V2_USER_CMD_PRCTL:
- spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
- spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
case SPECTRE_V2_USER_CMD_SECCOMP:
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply related [flat|nested] 65+ messages in thread
* Re: [PATCH v5 13/16] x86/bugs: Restructure spectre_v2 mitigation
2025-04-18 16:17 ` [PATCH v5 13/16] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
@ 2025-04-29 10:46 ` Borislav Petkov
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-29 10:46 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:18AM -0500, David Kaplan wrote:
> static void __init spectre_v2_select_mitigation(void)
> {
> - enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
> enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
> + spectre_v2_cmd = spectre_v2_parse_cmdline();
>
> /*
> * If the CPU is not affected and the command line mode is NONE or AUTO
> * then nothing to do.
> */
Obvious comment. Lemme zap it.
> if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
> - (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
> + (spectre_v2_cmd == SPECTRE_V2_CMD_NONE || spectre_v2_cmd == SPECTRE_V2_CMD_AUTO))
> return;
>
> - switch (cmd) {
> + switch (spectre_v2_cmd) {
> case SPECTRE_V2_CMD_NONE:
> return;
>
> @@ -1898,16 +1907,6 @@ static void __init spectre_v2_select_mitigation(void)
> break;
> }
>
> - if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
> - boot_cpu_has_bug(X86_BUG_RETBLEED) &&
> - retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
> - retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
> - boot_cpu_has(X86_FEATURE_IBRS) &&
> - boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> - mode = SPECTRE_V2_IBRS;
> - break;
> - }
> -
> mode = spectre_v2_select_retpoline();
> break;
>
> @@ -1941,10 +1940,32 @@ static void __init spectre_v2_select_mitigation(void)
> break;
> }
>
> - if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
> + spectre_v2_enabled = mode;
Might as well zap mode here too, like for the others.
...
Diff ontop:
---
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3b0ffebb8f4b..93d07438eea7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1887,13 +1887,8 @@ static void __init bhi_apply_mitigation(void)
static void __init spectre_v2_select_mitigation(void)
{
- enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
spectre_v2_cmd = spectre_v2_parse_cmdline();
- /*
- * If the CPU is not affected and the command line mode is NONE or AUTO
- * then nothing to do.
- */
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
(spectre_v2_cmd == SPECTRE_V2_CMD_NONE || spectre_v2_cmd == SPECTRE_V2_CMD_AUTO))
return;
@@ -1905,44 +1900,42 @@ static void __init spectre_v2_select_mitigation(void)
case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO:
if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
- mode = SPECTRE_V2_EIBRS;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS;
break;
}
- mode = spectre_v2_select_retpoline();
+ spectre_v2_enabled = spectre_v2_select_retpoline();
break;
case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
pr_err(SPECTRE_V2_LFENCE_MSG);
- mode = SPECTRE_V2_LFENCE;
+ spectre_v2_enabled = SPECTRE_V2_LFENCE;
break;
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
- mode = SPECTRE_V2_RETPOLINE;
+ spectre_v2_enabled = SPECTRE_V2_RETPOLINE;
break;
case SPECTRE_V2_CMD_RETPOLINE:
- mode = spectre_v2_select_retpoline();
+ spectre_v2_enabled = spectre_v2_select_retpoline();
break;
case SPECTRE_V2_CMD_IBRS:
- mode = SPECTRE_V2_IBRS;
+ spectre_v2_enabled = SPECTRE_V2_IBRS;
break;
case SPECTRE_V2_CMD_EIBRS:
- mode = SPECTRE_V2_EIBRS;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS;
break;
case SPECTRE_V2_CMD_EIBRS_LFENCE:
- mode = SPECTRE_V2_EIBRS_LFENCE;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS_LFENCE;
break;
case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
- mode = SPECTRE_V2_EIBRS_RETPOLINE;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS_RETPOLINE;
break;
}
-
- spectre_v2_enabled = mode;
}
static void __init spectre_v2_update_mitigation(void)
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply related [flat|nested] 65+ messages in thread
* Re: [PATCH v5 14/16] x86/bugs: Restructure SSB mitigation
2025-04-18 16:17 ` [PATCH v5 14/16] x86/bugs: Restructure SSB mitigation David Kaplan
@ 2025-04-29 12:54 ` Borislav Petkov
2025-04-29 14:09 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-29 12:54 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:19AM -0500, David Kaplan wrote:
> @@ -2224,19 +2226,18 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
> return cmd;
> }
>
> -static enum ssb_mitigation __init __ssb_select_mitigation(void)
> +static void ssb_select_mitigation(void)
I don't think you meant to drop the __init section here ...
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 14/16] x86/bugs: Restructure SSB mitigation
2025-04-29 12:54 ` Borislav Petkov
@ 2025-04-29 14:09 ` Kaplan, David
0 siblings, 0 replies; 65+ messages in thread
From: Kaplan, David @ 2025-04-29 14:09 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> On Fri, Apr 18, 2025 at 11:17:19AM -0500, David Kaplan wrote:
> > @@ -2224,19 +2226,18 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
> > return cmd;
> > }
> >
> > -static enum ssb_mitigation __init __ssb_select_mitigation(void)
> > +static void ssb_select_mitigation(void)
>
> I don't think you meant to drop the __init section here ...
>
Yeah, looks like a typo, my apologies. The prototype was correct and has the __init.
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 11/16] x86/bugs: Restructure spectre_v2_user mitigation
2025-04-29 8:47 ` Borislav Petkov
@ 2025-04-29 14:11 ` Kaplan, David
0 siblings, 0 replies; 65+ messages in thread
From: Kaplan, David @ 2025-04-29 14:11 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Tuesday, April 29, 2025 3:47 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v5 11/16] x86/bugs: Restructure spectre_v2_user mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Fri, Apr 18, 2025 at 11:17:16AM -0500, David Kaplan wrote:
> > @@ -217,6 +214,11 @@ void __init cpu_select_mitigations(void)
> > * choices.
> > */
> > retbleed_update_mitigation();
> > + /*
> > + * spectre_v2_user_update_mitigation() depends on
> > + * retbleed_update_mitigation().
> > + */
>
> Why aren't you keeping the reason for the dependency from the above comment?
>
> That's important when we need to touch this code again...
>
> > + spectre_v2_user_update_mitigation();
> > mds_update_mitigation();
> > taa_update_mitigation();
> > mmio_update_mitigation();
> > @@ -224,6 +226,7 @@ void __init cpu_select_mitigations(void)
> >
> > spectre_v1_apply_mitigation();
> > retbleed_apply_mitigation();
> > + spectre_v2_user_apply_mitigation();
> > mds_apply_mitigation();
> > taa_apply_mitigation();
> > mmio_apply_mitigation();
> > @@ -1374,6 +1377,8 @@ enum spectre_v2_mitigation_cmd {
> > SPECTRE_V2_CMD_IBRS,
> > };
> >
> > +static enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init
> > += SPECTRE_V2_CMD_AUTO;
> > +
> > enum spectre_v2_user_cmd {
> > SPECTRE_V2_USER_CMD_NONE,
> > SPECTRE_V2_USER_CMD_AUTO,
> > @@ -1412,31 +1417,19 @@ static void __init spec_v2_user_print_cond(const
> char *reason, bool secure)
> > pr_info("spectre_v2_user=%s forced on command line.\n",
> > reason); }
> >
> > -static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
> > -
> > static enum spectre_v2_user_cmd __init
> > spectre_v2_parse_user_cmdline(void)
>
> Lemme unbreak that silly thing while here...
>
> > {
> > - enum spectre_v2_user_cmd mode;
> > char arg[20];
> > int ret, i;
> >
> > - mode = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ?
> > - SPECTRE_V2_USER_CMD_AUTO : SPECTRE_V2_USER_CMD_NONE;
> > -
> > - switch (spectre_v2_cmd) {
> > - case SPECTRE_V2_CMD_NONE:
> > + if (cpu_mitigations_off() ||
> > + !IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2))
> > return SPECTRE_V2_USER_CMD_NONE;
> > - case SPECTRE_V2_CMD_FORCE:
> > - return SPECTRE_V2_USER_CMD_FORCE;
> > - default:
> > - break;
> > - }
> >
> > ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
> > arg, sizeof(arg));
> > if (ret < 0)
> > - return mode;
> > + return SPECTRE_V2_USER_CMD_AUTO;
> >
> > for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
> > if (match_option(arg, ret, v2_user_options[i].option)) {
> > @@ -1447,7 +1440,7 @@ spectre_v2_parse_user_cmdline(void)
> > }
> >
> > pr_err("Unknown user space protection option (%s). Switching to default\n",
> arg);
> > - return mode;
> > + return SPECTRE_V2_USER_CMD_AUTO;
> > }
> >
> > static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation
> > mode) @@ -1458,7 +1451,6 @@ static inline bool
> > spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode) static void
> > __init
> > spectre_v2_user_select_mitigation(void)
>
> That too.
>
> > {
> > - enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
> > enum spectre_v2_user_cmd cmd;
>
> Might as well get rid of that one.
>
> > if (!boot_cpu_has(X86_FEATURE_IBPB) &&
> > !boot_cpu_has(X86_FEATURE_STIBP)) @@ -1467,48 +1459,65 @@
> spectre_v2_user_select_mitigation(void)
> > cmd = spectre_v2_parse_user_cmdline();
> > switch (cmd) {
> > case SPECTRE_V2_USER_CMD_NONE:
> > - goto set_mode;
> > + return;
> > case SPECTRE_V2_USER_CMD_FORCE:
> > - mode = SPECTRE_V2_USER_STRICT;
> > + spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
> > + spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
>
> Those should be aligned at the '=' sign for better readability.
>
> ...
>
> IOW, all the changes ontop:
>
> ---
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index
> afea9179acdd..dc75195760ca 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -214,9 +214,11 @@ void __init cpu_select_mitigations(void)
> * choices.
> */
> retbleed_update_mitigation();
> +
> /*
> * spectre_v2_user_update_mitigation() depends on
> - * retbleed_update_mitigation().
> + * retbleed_update_mitigation(), specifically the STIBP
> + * selection is forced for UNRET or IBPB.
> */
> spectre_v2_user_update_mitigation();
> mds_update_mitigation();
> @@ -1422,8 +1424,7 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
> pr_info("spectre_v2_user=%s forced on command line.\n", reason);
> }
>
> -static enum spectre_v2_user_cmd __init
> -spectre_v2_parse_user_cmdline(void)
> +static enum spectre_v2_user_cmd __init spectre_v2_parse_user_cmdline(void)
> {
> char arg[20];
> int ret, i;
> @@ -1453,29 +1454,25 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
> return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
> }
>
> -static void __init
> -spectre_v2_user_select_mitigation(void)
> +static void __init spectre_v2_user_select_mitigation(void)
> {
> - enum spectre_v2_user_cmd cmd;
> -
> if (!boot_cpu_has(X86_FEATURE_IBPB) &&
> !boot_cpu_has(X86_FEATURE_STIBP))
> return;
>
> - cmd = spectre_v2_parse_user_cmdline();
> - switch (cmd) {
> + switch (spectre_v2_parse_user_cmdline()) {
> case SPECTRE_V2_USER_CMD_NONE:
> return;
> case SPECTRE_V2_USER_CMD_FORCE:
> - spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
> + spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
> spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
> break;
> case SPECTRE_V2_USER_CMD_AUTO:
> case SPECTRE_V2_USER_CMD_PRCTL:
> - spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
> + spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
> spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
> break;
> case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
> - spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
> + spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
> spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
> break;
> case SPECTRE_V2_USER_CMD_SECCOMP:
>
That all looks good to me.
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation
2025-04-18 16:17 ` [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation David Kaplan
@ 2025-04-29 16:50 ` Borislav Petkov
2025-04-29 17:18 ` Kaplan, David
2025-05-02 10:33 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
1 sibling, 1 reply; 65+ messages in thread
From: Borislav Petkov @ 2025-04-29 16:50 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Apr 18, 2025 at 11:17:21AM -0500, David Kaplan wrote:
> @@ -2738,130 +2730,80 @@ static void __init srso_select_mitigation(void)
> {
> bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
I'll push that init after the return so that it doesn't happen unnecessarily.
> - if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
> - cpu_mitigations_off() ||
> - srso_cmd == SRSO_CMD_OFF) {
> - if (boot_cpu_has(X86_FEATURE_SBPB))
> - x86_pred_cmd = PRED_CMD_SBPB;
> - goto out;
> - }
> + if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
> + srso_mitigation = SRSO_MITIGATION_NONE;
> +
> + if (srso_mitigation == SRSO_MITIGATION_NONE)
> + return;
> +
> + if (srso_mitigation == SRSO_MITIGATION_AUTO)
> + srso_mitigation = SRSO_MITIGATION_SAFE_RET;
>
> if (has_microcode) {
> /*
> * Zen1/2 with SMT off aren't vulnerable after the right
> * IBPB microcode has been applied.
> - *
> - * Zen1/2 don't have SBPB, no need to try to enable it here.
Why?
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* RE: [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation
2025-04-29 16:50 ` Borislav Petkov
@ 2025-04-29 17:18 ` Kaplan, David
2025-04-30 8:25 ` Borislav Petkov
0 siblings, 1 reply; 65+ messages in thread
From: Kaplan, David @ 2025-04-29 17:18 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Tuesday, April 29, 2025 11:51 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation
>
> On Fri, Apr 18, 2025 at 11:17:21AM -0500, David Kaplan wrote:
> > @@ -2738,130 +2730,80 @@ static void __init srso_select_mitigation(void)
> > {
> > bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
>
> I'll push that init after the return so that it doesn't happen unnecessarily.
>
> > - if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
> > - cpu_mitigations_off() ||
> > - srso_cmd == SRSO_CMD_OFF) {
> > - if (boot_cpu_has(X86_FEATURE_SBPB))
> > - x86_pred_cmd = PRED_CMD_SBPB;
> > - goto out;
> > - }
> > + if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
> > + srso_mitigation = SRSO_MITIGATION_NONE;
> > +
> > + if (srso_mitigation == SRSO_MITIGATION_NONE)
> > + return;
> > +
> > + if (srso_mitigation == SRSO_MITIGATION_AUTO)
> > + srso_mitigation = SRSO_MITIGATION_SAFE_RET;
> >
> > if (has_microcode) {
> > /*
> > * Zen1/2 with SMT off aren't vulnerable after the right
> > * IBPB microcode has been applied.
> > - *
> > - * Zen1/2 don't have SBPB, no need to try to enable it here.
>
> Why?
>
The comment doesn't make sense in the new structure. In the old code, SBPB got enabled at the start of the function, before the check for Zen1/2 with SMT off. The comment arguably made sense there because you were disabling SRSO mitigations after the SBPB check had already run, and it pointed out that this was fine since those CPUs never support SBPB anyway. Normally, if SRSO is off, you try to use SBPB.
In the new flow, the SBPB work is done in srso_apply_mitigation(), and for all parts. So the subtlety of determining the SRSO mitigation after SBPB has already been handled no longer exists.
I'd say this is another reason the new structure is easier to understand: it has fewer subtleties like this.
--David Kaplan
^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation
2025-04-29 17:18 ` Kaplan, David
@ 2025-04-30 8:25 ` Borislav Petkov
0 siblings, 0 replies; 65+ messages in thread
From: Borislav Petkov @ 2025-04-30 8:25 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Apr 29, 2025 at 05:18:27PM +0000, Kaplan, David wrote:
> The comment doesn't make any sense in the new structure. In the old code,
> SBPB gets enabled at the start of the function, before we check if you're on
> a Zen1/2 with SMT off. The comment arguably made some sense in the old code
> because you're disabling SRSO mitigations but after you had done the SBPB
> check...but the comment is pointing out this is ok because these CPUs never
> support SBPB anyway. Normally, if SRSO is off, you try to use SBPB.
Ok, lemme add a note about that in the commit message.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 65+ messages in thread
* [tip: x86/bugs] x86/bugs: Restructure SRSO mitigation
2025-04-18 16:17 ` [PATCH v5 16/16] x86/bugs: Restructure SRSO mitigation David Kaplan
2025-04-29 16:50 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 1f4bb068b498a544ae913764a797449463ef620c
Gitweb: https://git.kernel.org/tip/1f4bb068b498a544ae913764a797449463ef620c
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:21 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Wed, 30 Apr 2025 10:25:45 +02:00
x86/bugs: Restructure SRSO mitigation
Restructure SRSO to use select/update/apply functions to create
consistent vulnerability handling. Like with retbleed, the command line
options directly select mitigations which can later be modified.
While at it, remove a comment which doesn't apply anymore due to the
changed mitigation detection flow.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-17-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 215 ++++++++++++++++--------------------
1 file changed, 101 insertions(+), 114 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 25d84e2..a4f3b1d 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -84,6 +84,8 @@ static void __init srbds_select_mitigation(void);
static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
+static void __init srso_update_mitigation(void);
+static void __init srso_apply_mitigation(void);
static void __init gds_select_mitigation(void);
static void __init gds_apply_mitigation(void);
static void __init bhi_select_mitigation(void);
@@ -208,11 +210,6 @@ void __init cpu_select_mitigations(void)
rfds_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
-
- /*
- * srso_select_mitigation() depends and must run after
- * retbleed_select_mitigation().
- */
srso_select_mitigation();
gds_select_mitigation();
bhi_select_mitigation();
@@ -240,6 +237,8 @@ void __init cpu_select_mitigations(void)
mmio_update_mitigation();
rfds_update_mitigation();
bhi_update_mitigation();
+ /* srso_update_mitigation() depends on retbleed_update_mitigation(). */
+ srso_update_mitigation();
spectre_v1_apply_mitigation();
spectre_v2_apply_mitigation();
@@ -252,6 +251,7 @@ void __init cpu_select_mitigations(void)
mmio_apply_mitigation();
rfds_apply_mitigation();
srbds_apply_mitigation();
+ srso_apply_mitigation();
gds_apply_mitigation();
bhi_apply_mitigation();
}
@@ -2674,6 +2674,7 @@ early_param("l1tf", l1tf_cmdline);
enum srso_mitigation {
SRSO_MITIGATION_NONE,
+ SRSO_MITIGATION_AUTO,
SRSO_MITIGATION_UCODE_NEEDED,
SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
SRSO_MITIGATION_MICROCODE,
@@ -2683,14 +2684,6 @@ enum srso_mitigation {
SRSO_MITIGATION_BP_SPEC_REDUCE,
};
-enum srso_mitigation_cmd {
- SRSO_CMD_OFF,
- SRSO_CMD_MICROCODE,
- SRSO_CMD_SAFE_RET,
- SRSO_CMD_IBPB,
- SRSO_CMD_IBPB_ON_VMEXIT,
-};
-
static const char * const srso_strings[] = {
[SRSO_MITIGATION_NONE] = "Vulnerable",
[SRSO_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
@@ -2702,8 +2695,7 @@ static const char * const srso_strings[] = {
[SRSO_MITIGATION_BP_SPEC_REDUCE] = "Mitigation: Reduced Speculation"
};
-static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
-static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO;
static int __init srso_parse_cmdline(char *str)
{
@@ -2711,15 +2703,15 @@ static int __init srso_parse_cmdline(char *str)
return -EINVAL;
if (!strcmp(str, "off"))
- srso_cmd = SRSO_CMD_OFF;
+ srso_mitigation = SRSO_MITIGATION_NONE;
else if (!strcmp(str, "microcode"))
- srso_cmd = SRSO_CMD_MICROCODE;
+ srso_mitigation = SRSO_MITIGATION_MICROCODE;
else if (!strcmp(str, "safe-ret"))
- srso_cmd = SRSO_CMD_SAFE_RET;
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
else if (!strcmp(str, "ibpb"))
- srso_cmd = SRSO_CMD_IBPB;
+ srso_mitigation = SRSO_MITIGATION_IBPB;
else if (!strcmp(str, "ibpb-vmexit"))
- srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT;
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
else
pr_err("Ignoring unknown SRSO option (%s).", str);
@@ -2731,132 +2723,83 @@ early_param("spec_rstack_overflow", srso_parse_cmdline);
static void __init srso_select_mitigation(void)
{
- bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
+ bool has_microcode;
- if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
- cpu_mitigations_off() ||
- srso_cmd == SRSO_CMD_OFF) {
- if (boot_cpu_has(X86_FEATURE_SBPB))
- x86_pred_cmd = PRED_CMD_SBPB;
- goto out;
- }
+ if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+ srso_mitigation = SRSO_MITIGATION_NONE;
+ if (srso_mitigation == SRSO_MITIGATION_NONE)
+ return;
+
+ if (srso_mitigation == SRSO_MITIGATION_AUTO)
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+
+ has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
if (has_microcode) {
/*
* Zen1/2 with SMT off aren't vulnerable after the right
* IBPB microcode has been applied.
- *
- * Zen1/2 don't have SBPB, no need to try to enable it here.
*/
if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
- goto out;
- }
-
- if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
- srso_mitigation = SRSO_MITIGATION_IBPB;
- goto out;
+ srso_mitigation = SRSO_MITIGATION_NONE;
+ return;
}
} else {
pr_warn("IBPB-extending microcode not applied!\n");
pr_warn(SRSO_NOTICE);
-
- /* may be overwritten by SRSO_CMD_SAFE_RET below */
- srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
}
- switch (srso_cmd) {
- case SRSO_CMD_MICROCODE:
- if (has_microcode) {
- srso_mitigation = SRSO_MITIGATION_MICROCODE;
- pr_warn(SRSO_NOTICE);
- }
- break;
-
- case SRSO_CMD_SAFE_RET:
- if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
+ switch (srso_mitigation) {
+ case SRSO_MITIGATION_SAFE_RET:
+ if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO)) {
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
goto ibpb_on_vmexit;
+ }
- if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
- /*
- * Enable the return thunk for generated code
- * like ftrace, static_call, etc.
- */
- setup_force_cpu_cap(X86_FEATURE_RETHUNK);
- setup_force_cpu_cap(X86_FEATURE_UNRET);
-
- if (boot_cpu_data.x86 == 0x19) {
- setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
- x86_return_thunk = srso_alias_return_thunk;
- } else {
- setup_force_cpu_cap(X86_FEATURE_SRSO);
- x86_return_thunk = srso_return_thunk;
- }
- if (has_microcode)
- srso_mitigation = SRSO_MITIGATION_SAFE_RET;
- else
- srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
- } else {
+ if (!IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+ srso_mitigation = SRSO_MITIGATION_NONE;
}
- break;
- case SRSO_CMD_IBPB:
- if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- if (has_microcode) {
- setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
- srso_mitigation = SRSO_MITIGATION_IBPB;
-
- /*
- * IBPB on entry already obviates the need for
- * software-based untraining so clear those in case some
- * other mitigation like Retbleed has selected them.
- */
- setup_clear_cpu_cap(X86_FEATURE_UNRET);
- setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
-
- /*
- * There is no need for RSB filling: write_ibpb() ensures
- * all predictions, including the RSB, are invalidated,
- * regardless of IBPB implementation.
- */
- setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
- }
- } else {
- pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
- }
+ if (!has_microcode)
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
break;
-
ibpb_on_vmexit:
- case SRSO_CMD_IBPB_ON_VMEXIT:
+ case SRSO_MITIGATION_IBPB_ON_VMEXIT:
if (boot_cpu_has(X86_FEATURE_SRSO_BP_SPEC_REDUCE)) {
pr_notice("Reducing speculation to address VM/HV SRSO attack vector.\n");
srso_mitigation = SRSO_MITIGATION_BP_SPEC_REDUCE;
break;
}
-
- if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- if (has_microcode) {
- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
- srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
-
- /*
- * There is no need for RSB filling: write_ibpb() ensures
- * all predictions, including the RSB, are invalidated,
- * regardless of IBPB implementation.
- */
- setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
- }
- } else {
+ fallthrough;
+ case SRSO_MITIGATION_IBPB:
+ if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
+ srso_mitigation = SRSO_MITIGATION_NONE;
}
+
+ if (!has_microcode)
+ srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
break;
default:
break;
}
+}
-out:
+static void __init srso_update_mitigation(void)
+{
+ /* If retbleed is using IBPB, that works for SRSO as well */
+ if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB &&
+ boot_cpu_has(X86_FEATURE_IBPB_BRTYPE))
+ srso_mitigation = SRSO_MITIGATION_IBPB;
+
+ if (boot_cpu_has_bug(X86_BUG_SRSO) && !cpu_mitigations_off())
+ pr_info("%s\n", srso_strings[srso_mitigation]);
+}
+
+static void __init srso_apply_mitigation(void)
+{
/*
* Clear the feature flag if this mitigation is not selected as that
* feature flag controls the BpSpecReduce MSR bit toggling in KVM.
@@ -2864,8 +2807,52 @@ out:
if (srso_mitigation != SRSO_MITIGATION_BP_SPEC_REDUCE)
setup_clear_cpu_cap(X86_FEATURE_SRSO_BP_SPEC_REDUCE);
- if (srso_mitigation != SRSO_MITIGATION_NONE)
- pr_info("%s\n", srso_strings[srso_mitigation]);
+ if (srso_mitigation == SRSO_MITIGATION_NONE) {
+ if (boot_cpu_has(X86_FEATURE_SBPB))
+ x86_pred_cmd = PRED_CMD_SBPB;
+ return;
+ }
+
+ switch (srso_mitigation) {
+ case SRSO_MITIGATION_SAFE_RET:
+ case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+ /*
+ * Enable the return thunk for generated code
+ * like ftrace, static_call, etc.
+ */
+ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+ setup_force_cpu_cap(X86_FEATURE_UNRET);
+
+ if (boot_cpu_data.x86 == 0x19) {
+ setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+ x86_return_thunk = srso_alias_return_thunk;
+ } else {
+ setup_force_cpu_cap(X86_FEATURE_SRSO);
+ x86_return_thunk = srso_return_thunk;
+ }
+ break;
+ case SRSO_MITIGATION_IBPB:
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+ /*
+ * IBPB on entry already obviates the need for
+ * software-based untraining so clear those in case some
+ * other mitigation like Retbleed has selected them.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_UNRET);
+ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+ fallthrough;
+ case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ /*
+ * There is no need for RSB filling: entry_ibpb() ensures
+ * all predictions, including the RSB, are invalidated,
+ * regardless of IBPB implementation.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+ break;
+ default:
+ break;
+ }
}
#undef pr_fmt
^ permalink raw reply related [flat|nested] 65+ messages in thread
* [tip: x86/bugs] x86/bugs: Restructure L1TF mitigation
2025-04-18 16:17 ` [PATCH v5 15/16] x86/bugs: Restructure L1TF mitigation David Kaplan
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: d43ba2dc8eeeca21811fd9b30e3bd15bb35caaec
Gitweb: https://git.kernel.org/tip/d43ba2dc8eeeca21811fd9b30e3bd15bb35caaec
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:20 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 29 Apr 2025 18:57:30 +02:00
x86/bugs: Restructure L1TF mitigation
Restructure L1TF to use select/apply functions to create consistent
vulnerability handling.
Define new AUTO mitigation for L1TF.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-16-david.kaplan@amd.com
---
arch/x86/include/asm/processor.h | 1 +
arch/x86/kernel/cpu/bugs.c | 25 +++++++++++++++++++------
arch/x86/kvm/vmx/vmx.c | 2 ++
3 files changed, 22 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 5d2f7e5..0973bed 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -734,6 +734,7 @@ void store_cpu_caps(struct cpuinfo_x86 *info);
enum l1tf_mitigations {
L1TF_MITIGATION_OFF,
+ L1TF_MITIGATION_AUTO,
L1TF_MITIGATION_FLUSH_NOWARN,
L1TF_MITIGATION_FLUSH,
L1TF_MITIGATION_FLUSH_NOSMT,
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fbb4f13..25d84e2 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -67,6 +67,7 @@ static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init ssb_apply_mitigation(void);
static void __init l1tf_select_mitigation(void);
+static void __init l1tf_apply_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
static void __init mds_apply_mitigation(void);
@@ -245,6 +246,7 @@ void __init cpu_select_mitigations(void)
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
ssb_apply_mitigation();
+ l1tf_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -2538,7 +2540,7 @@ EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
/* Default mitigation for L1TF-affected CPUs */
enum l1tf_mitigations l1tf_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_FLUSH : L1TF_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_AUTO : L1TF_MITIGATION_OFF;
#if IS_ENABLED(CONFIG_KVM_INTEL)
EXPORT_SYMBOL_GPL(l1tf_mitigation);
#endif
@@ -2586,22 +2588,33 @@ static void override_cache_bits(struct cpuinfo_x86 *c)
static void __init l1tf_select_mitigation(void)
{
+ if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) {
+ l1tf_mitigation = L1TF_MITIGATION_OFF;
+ return;
+ }
+
+ if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
+ if (cpu_mitigations_auto_nosmt())
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+ else
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+ }
+}
+
+static void __init l1tf_apply_mitigation(void)
+{
u64 half_pa;
if (!boot_cpu_has_bug(X86_BUG_L1TF))
return;
- if (cpu_mitigations_off())
- l1tf_mitigation = L1TF_MITIGATION_OFF;
- else if (cpu_mitigations_auto_nosmt())
- l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
-
override_cache_bits(&boot_cpu_data);
switch (l1tf_mitigation) {
case L1TF_MITIGATION_OFF:
case L1TF_MITIGATION_FLUSH_NOWARN:
case L1TF_MITIGATION_FLUSH:
+ case L1TF_MITIGATION_AUTO:
break;
case L1TF_MITIGATION_FLUSH_NOSMT:
case L1TF_MITIGATION_FULL:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index a1754f7..0aba471 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -273,6 +273,7 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
case L1TF_MITIGATION_OFF:
l1tf = VMENTER_L1D_FLUSH_NEVER;
break;
+ case L1TF_MITIGATION_AUTO:
case L1TF_MITIGATION_FLUSH_NOWARN:
case L1TF_MITIGATION_FLUSH:
case L1TF_MITIGATION_FLUSH_NOSMT:
@@ -7704,6 +7705,7 @@ int vmx_vm_init(struct kvm *kvm)
case L1TF_MITIGATION_FLUSH_NOWARN:
/* 'I explicitly don't care' is set */
break;
+ case L1TF_MITIGATION_AUTO:
case L1TF_MITIGATION_FLUSH:
case L1TF_MITIGATION_FLUSH_NOSMT:
case L1TF_MITIGATION_FULL:
^ permalink raw reply related [flat|nested] 65+ messages in thread
* [tip: x86/bugs] x86/bugs: Restructure SSB mitigation
2025-04-18 16:17 ` [PATCH v5 14/16] x86/bugs: Restructure SSB mitigation David Kaplan
2025-04-29 12:54 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 5ece59a2fca6e1467558467a05cf742b7e52d1b7
Gitweb: https://git.kernel.org/tip/5ece59a2fca6e1467558467a05cf742b7e52d1b7
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:19 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 29 Apr 2025 18:57:26 +02:00
x86/bugs: Restructure SSB mitigation
Restructure SSB to use select/apply functions to create consistent
vulnerability handling.
Remove __ssb_select_mitigation() and split the functionality between the
select/apply functions.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-15-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 36 +++++++++++++++++-------------------
1 file changed, 17 insertions(+), 19 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 93d0743..fbb4f13 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,7 @@ static void __init spectre_v2_user_select_mitigation(void);
static void __init spectre_v2_user_update_mitigation(void);
static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
+static void __init ssb_apply_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
@@ -243,6 +244,7 @@ void __init cpu_select_mitigations(void)
spectre_v2_apply_mitigation();
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
+ ssb_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -2219,19 +2221,18 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
return cmd;
}
-static enum ssb_mitigation __init __ssb_select_mitigation(void)
+static void __init ssb_select_mitigation(void)
{
- enum ssb_mitigation mode = SPEC_STORE_BYPASS_NONE;
enum ssb_mitigation_cmd cmd;
if (!boot_cpu_has(X86_FEATURE_SSBD))
- return mode;
+ goto out;
cmd = ssb_parse_cmdline();
if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS) &&
(cmd == SPEC_STORE_BYPASS_CMD_NONE ||
cmd == SPEC_STORE_BYPASS_CMD_AUTO))
- return mode;
+ return;
switch (cmd) {
case SPEC_STORE_BYPASS_CMD_SECCOMP:
@@ -2240,28 +2241,35 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
* enabled.
*/
if (IS_ENABLED(CONFIG_SECCOMP))
- mode = SPEC_STORE_BYPASS_SECCOMP;
+ ssb_mode = SPEC_STORE_BYPASS_SECCOMP;
else
- mode = SPEC_STORE_BYPASS_PRCTL;
+ ssb_mode = SPEC_STORE_BYPASS_PRCTL;
break;
case SPEC_STORE_BYPASS_CMD_ON:
- mode = SPEC_STORE_BYPASS_DISABLE;
+ ssb_mode = SPEC_STORE_BYPASS_DISABLE;
break;
case SPEC_STORE_BYPASS_CMD_AUTO:
case SPEC_STORE_BYPASS_CMD_PRCTL:
- mode = SPEC_STORE_BYPASS_PRCTL;
+ ssb_mode = SPEC_STORE_BYPASS_PRCTL;
break;
case SPEC_STORE_BYPASS_CMD_NONE:
break;
}
+out:
+ if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+ pr_info("%s\n", ssb_strings[ssb_mode]);
+}
+
+static void __init ssb_apply_mitigation(void)
+{
/*
* We have three CPU feature flags that are in play here:
* - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
* - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
* - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
*/
- if (mode == SPEC_STORE_BYPASS_DISABLE) {
+ if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) {
setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
/*
* Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
@@ -2275,16 +2283,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
update_spec_ctrl(x86_spec_ctrl_base);
}
}
-
- return mode;
-}
-
-static void ssb_select_mitigation(void)
-{
- ssb_mode = __ssb_select_mitigation();
-
- if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
- pr_info("%s\n", ssb_strings[ssb_mode]);
}
#undef pr_fmt
^ permalink raw reply related [flat|nested] 65+ messages in thread
* [tip: x86/bugs] x86/bugs: Restructure spectre_v2 mitigation
2025-04-18 16:17 ` [PATCH v5 13/16] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
2025-04-29 10:46 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 480e803dacf8be92b1104ca65f2be4cb0e191375
Gitweb: https://git.kernel.org/tip/480e803dacf8be92b1104ca65f2be4cb0e191375
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:18 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 29 Apr 2025 18:53:35 +02:00
x86/bugs: Restructure spectre_v2 mitigation
Restructure spectre_v2 to use select/update/apply functions to create
consistent vulnerability handling.
The spectre_v2 mitigation may be updated based on the selected retbleed
mitigation.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-14-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 101 +++++++++++++++++++-----------------
1 file changed, 56 insertions(+), 45 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 100a320..93d0743 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,8 @@
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
+static void __init spectre_v2_update_mitigation(void);
+static void __init spectre_v2_apply_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init retbleed_update_mitigation(void);
static void __init retbleed_apply_mitigation(void);
@@ -217,6 +219,12 @@ void __init cpu_select_mitigations(void)
* After mitigations are selected, some may need to update their
* choices.
*/
+ spectre_v2_update_mitigation();
+ /*
+ * retbleed_update_mitigation() relies on the state set by
+ * spectre_v2_update_mitigation(); specifically it wants to know about
+ * spectre_v2=ibrs.
+ */
retbleed_update_mitigation();
/*
@@ -232,6 +240,7 @@ void __init cpu_select_mitigations(void)
bhi_update_mitigation();
spectre_v1_apply_mitigation();
+ spectre_v2_apply_mitigation();
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
mds_apply_mitigation();
@@ -1878,75 +1887,80 @@ static void __init bhi_apply_mitigation(void)
static void __init spectre_v2_select_mitigation(void)
{
- enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
- enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+ spectre_v2_cmd = spectre_v2_parse_cmdline();
- /*
- * If the CPU is not affected and the command line mode is NONE or AUTO
- * then nothing to do.
- */
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
- (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
+ (spectre_v2_cmd == SPECTRE_V2_CMD_NONE || spectre_v2_cmd == SPECTRE_V2_CMD_AUTO))
return;
- switch (cmd) {
+ switch (spectre_v2_cmd) {
case SPECTRE_V2_CMD_NONE:
return;
case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO:
if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
- mode = SPECTRE_V2_EIBRS;
- break;
- }
-
- if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
- boot_cpu_has_bug(X86_BUG_RETBLEED) &&
- retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
- retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
- boot_cpu_has(X86_FEATURE_IBRS) &&
- boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
- mode = SPECTRE_V2_IBRS;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS;
break;
}
- mode = spectre_v2_select_retpoline();
+ spectre_v2_enabled = spectre_v2_select_retpoline();
break;
case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
pr_err(SPECTRE_V2_LFENCE_MSG);
- mode = SPECTRE_V2_LFENCE;
+ spectre_v2_enabled = SPECTRE_V2_LFENCE;
break;
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
- mode = SPECTRE_V2_RETPOLINE;
+ spectre_v2_enabled = SPECTRE_V2_RETPOLINE;
break;
case SPECTRE_V2_CMD_RETPOLINE:
- mode = spectre_v2_select_retpoline();
+ spectre_v2_enabled = spectre_v2_select_retpoline();
break;
case SPECTRE_V2_CMD_IBRS:
- mode = SPECTRE_V2_IBRS;
+ spectre_v2_enabled = SPECTRE_V2_IBRS;
break;
case SPECTRE_V2_CMD_EIBRS:
- mode = SPECTRE_V2_EIBRS;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS;
break;
case SPECTRE_V2_CMD_EIBRS_LFENCE:
- mode = SPECTRE_V2_EIBRS_LFENCE;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS_LFENCE;
break;
case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
- mode = SPECTRE_V2_EIBRS_RETPOLINE;
+ spectre_v2_enabled = SPECTRE_V2_EIBRS_RETPOLINE;
break;
}
+}
- if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+static void __init spectre_v2_update_mitigation(void)
+{
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_AUTO) {
+ if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
+ boot_cpu_has_bug(X86_BUG_RETBLEED) &&
+ retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
+ retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
+ boot_cpu_has(X86_FEATURE_IBRS) &&
+ boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+ spectre_v2_enabled = SPECTRE_V2_IBRS;
+ }
+ }
+
+ if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) && !cpu_mitigations_off())
+ pr_info("%s\n", spectre_v2_strings[spectre_v2_enabled]);
+}
+
+static void __init spectre_v2_apply_mitigation(void)
+{
+ if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
- if (spectre_v2_in_ibrs_mode(mode)) {
+ if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
} else {
@@ -1955,8 +1969,10 @@ static void __init spectre_v2_select_mitigation(void)
}
}
- switch (mode) {
+ switch (spectre_v2_enabled) {
case SPECTRE_V2_NONE:
+ return;
+
case SPECTRE_V2_EIBRS:
break;
@@ -1982,15 +1998,12 @@ static void __init spectre_v2_select_mitigation(void)
* JMPs gets protection against BHI and Intramode-BTI, but RET
* prediction from a non-RSB predictor is still a risk.
*/
- if (mode == SPECTRE_V2_EIBRS_LFENCE ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_RETPOLINE)
+ if (spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE ||
+ spectre_v2_enabled == SPECTRE_V2_EIBRS_RETPOLINE ||
+ spectre_v2_enabled == SPECTRE_V2_RETPOLINE)
spec_ctrl_disable_kernel_rrsba();
- spectre_v2_enabled = mode;
- pr_info("%s\n", spectre_v2_strings[mode]);
-
- spectre_v2_select_rsb_mitigation(mode);
+ spectre_v2_select_rsb_mitigation(spectre_v2_enabled);
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
@@ -1998,10 +2011,10 @@ static void __init spectre_v2_select_mitigation(void)
* firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
* otherwise enabled.
*
- * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
- * the user might select retpoline on the kernel command line and if
- * the CPU supports Enhanced IBRS, kernel might un-intentionally not
- * enable IBRS around firmware calls.
+ * Use "spectre_v2_enabled" to check Enhanced IBRS instead of
+ * boot_cpu_has(), because the user might select retpoline on the kernel
+ * command line and if the CPU supports Enhanced IBRS, kernel might
+ * un-intentionally not enable IBRS around firmware calls.
*/
if (boot_cpu_has_bug(X86_BUG_RETBLEED) &&
boot_cpu_has(X86_FEATURE_IBPB) &&
@@ -2013,13 +2026,11 @@ static void __init spectre_v2_select_mitigation(void)
pr_info("Enabling Speculation Barrier for firmware calls\n");
}
- } else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+ } else if (boot_cpu_has(X86_FEATURE_IBRS) &&
+ !spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
-
- /* Set up IBPB and STIBP depending on the general spectre V2 command */
- spectre_v2_cmd = cmd;
}
static void update_stibp_msr(void * __unused)
^ permalink raw reply related [flat|nested] 65+ messages in thread
* [tip: x86/bugs] x86/bugs: Restructure BHI mitigation
2025-04-18 16:17 ` [PATCH v5 12/16] x86/bugs: Restructure BHI mitigation David Kaplan
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: efe313827c98c81156dea9b004877db4ca728b1a
Gitweb: https://git.kernel.org/tip/efe313827c98c81156dea9b004877db4ca728b1a
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:17 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 29 Apr 2025 18:51:29 +02:00
x86/bugs: Restructure BHI mitigation
Restructure BHI mitigation to use select/update/apply functions to create
consistent vulnerability handling. BHI mitigation was previously selected
from within spectre_v2_select_mitigation() and now is selected from
cpu_select_mitigations() like all the others.
Define new AUTO mitigation for BHI.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-13-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 31 +++++++++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index dc75195..100a320 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -82,6 +82,9 @@ static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
static void __init gds_apply_mitigation(void);
+static void __init bhi_select_mitigation(void);
+static void __init bhi_update_mitigation(void);
+static void __init bhi_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@@ -208,6 +211,7 @@ void __init cpu_select_mitigations(void)
*/
srso_select_mitigation();
gds_select_mitigation();
+ bhi_select_mitigation();
/*
* After mitigations are selected, some may need to update their
@@ -225,6 +229,7 @@ void __init cpu_select_mitigations(void)
taa_update_mitigation();
mmio_update_mitigation();
rfds_update_mitigation();
+ bhi_update_mitigation();
spectre_v1_apply_mitigation();
retbleed_apply_mitigation();
@@ -235,6 +240,7 @@ void __init cpu_select_mitigations(void)
rfds_apply_mitigation();
srbds_apply_mitigation();
gds_apply_mitigation();
+ bhi_apply_mitigation();
}
/*
@@ -1794,12 +1800,13 @@ static bool __init spec_ctrl_bhi_dis(void)
enum bhi_mitigations {
BHI_MITIGATION_OFF,
+ BHI_MITIGATION_AUTO,
BHI_MITIGATION_ON,
BHI_MITIGATION_VMEXIT_ONLY,
};
static enum bhi_mitigations bhi_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_ON : BHI_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_AUTO : BHI_MITIGATION_OFF;
static int __init spectre_bhi_parse_cmdline(char *str)
{
@@ -1821,6 +1828,25 @@ early_param("spectre_bhi", spectre_bhi_parse_cmdline);
static void __init bhi_select_mitigation(void)
{
+ if (!boot_cpu_has(X86_BUG_BHI) || cpu_mitigations_off())
+ bhi_mitigation = BHI_MITIGATION_OFF;
+
+ if (bhi_mitigation == BHI_MITIGATION_AUTO)
+ bhi_mitigation = BHI_MITIGATION_ON;
+}
+
+static void __init bhi_update_mitigation(void)
+{
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_NONE)
+ bhi_mitigation = BHI_MITIGATION_OFF;
+
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
+ spectre_v2_cmd == SPECTRE_V2_CMD_AUTO)
+ bhi_mitigation = BHI_MITIGATION_OFF;
+}
+
+static void __init bhi_apply_mitigation(void)
+{
if (bhi_mitigation == BHI_MITIGATION_OFF)
return;
@@ -1961,9 +1987,6 @@ static void __init spectre_v2_select_mitigation(void)
mode == SPECTRE_V2_RETPOLINE)
spec_ctrl_disable_kernel_rrsba();
- if (boot_cpu_has(X86_BUG_BHI))
- bhi_select_mitigation();
-
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
* [tip: x86/bugs] x86/bugs: Restructure spectre_v2_user mitigation
2025-04-18 16:17 ` [PATCH v5 11/16] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
2025-04-29 8:47 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: ddfca9430a617780c8ad9691bf44660ae49e2a35
Gitweb: https://git.kernel.org/tip/ddfca9430a617780c8ad9691bf44660ae49e2a35
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:16 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 29 Apr 2025 18:51:21 +02:00
x86/bugs: Restructure spectre_v2_user mitigation
Restructure spectre_v2_user to use select/update/apply functions to
create consistent vulnerability handling.
The IBPB/STIBP choices are first decided based on the spectre_v2_user
command line but can be modified by the spectre_v2 command line option
as well.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-12-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 168 ++++++++++++++++++++----------------
1 file changed, 94 insertions(+), 74 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 207a472..dc75195 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -60,6 +60,8 @@ static void __init retbleed_select_mitigation(void);
static void __init retbleed_update_mitigation(void);
static void __init retbleed_apply_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
+static void __init spectre_v2_user_update_mitigation(void);
+static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
@@ -190,11 +192,6 @@ void __init cpu_select_mitigations(void)
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
retbleed_select_mitigation();
- /*
- * spectre_v2_user_select_mitigation() relies on the state set by
- * retbleed_select_mitigation(); specifically the STIBP selection is
- * forced for UNRET or IBPB.
- */
spectre_v2_user_select_mitigation();
ssb_select_mitigation();
l1tf_select_mitigation();
@@ -217,6 +214,13 @@ void __init cpu_select_mitigations(void)
* choices.
*/
retbleed_update_mitigation();
+
+ /*
+ * spectre_v2_user_update_mitigation() depends on
+ * retbleed_update_mitigation(), specifically the STIBP
+ * selection is forced for UNRET or IBPB.
+ */
+ spectre_v2_user_update_mitigation();
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
@@ -224,6 +228,7 @@ void __init cpu_select_mitigations(void)
spectre_v1_apply_mitigation();
retbleed_apply_mitigation();
+ spectre_v2_user_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1379,6 +1384,8 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_IBRS,
};
+static enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
+
enum spectre_v2_user_cmd {
SPECTRE_V2_USER_CMD_NONE,
SPECTRE_V2_USER_CMD_AUTO,
@@ -1417,31 +1424,18 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
pr_info("spectre_v2_user=%s forced on command line.\n", reason);
}
-static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
-
-static enum spectre_v2_user_cmd __init
-spectre_v2_parse_user_cmdline(void)
+static enum spectre_v2_user_cmd __init spectre_v2_parse_user_cmdline(void)
{
- enum spectre_v2_user_cmd mode;
char arg[20];
int ret, i;
- mode = IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2) ?
- SPECTRE_V2_USER_CMD_AUTO : SPECTRE_V2_USER_CMD_NONE;
-
- switch (spectre_v2_cmd) {
- case SPECTRE_V2_CMD_NONE:
+ if (cpu_mitigations_off() || !IS_ENABLED(CONFIG_MITIGATION_SPECTRE_V2))
return SPECTRE_V2_USER_CMD_NONE;
- case SPECTRE_V2_CMD_FORCE:
- return SPECTRE_V2_USER_CMD_FORCE;
- default:
- break;
- }
ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
arg, sizeof(arg));
if (ret < 0)
- return mode;
+ return SPECTRE_V2_USER_CMD_AUTO;
for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
if (match_option(arg, ret, v2_user_options[i].option)) {
@@ -1452,7 +1446,7 @@ spectre_v2_parse_user_cmdline(void)
}
pr_err("Unknown user space protection option (%s). Switching to default\n", arg);
- return mode;
+ return SPECTRE_V2_USER_CMD_AUTO;
}
static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
@@ -1460,60 +1454,72 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
}
-static void __init
-spectre_v2_user_select_mitigation(void)
+static void __init spectre_v2_user_select_mitigation(void)
{
- enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
- enum spectre_v2_user_cmd cmd;
-
if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
return;
- cmd = spectre_v2_parse_user_cmdline();
- switch (cmd) {
+ switch (spectre_v2_parse_user_cmdline()) {
case SPECTRE_V2_USER_CMD_NONE:
- goto set_mode;
+ return;
case SPECTRE_V2_USER_CMD_FORCE:
- mode = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
break;
case SPECTRE_V2_USER_CMD_AUTO:
case SPECTRE_V2_USER_CMD_PRCTL:
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+ break;
case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
- mode = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
case SPECTRE_V2_USER_CMD_SECCOMP:
+ if (IS_ENABLED(CONFIG_SECCOMP))
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_SECCOMP;
+ else
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = spectre_v2_user_ibpb;
+ break;
case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
if (IS_ENABLED(CONFIG_SECCOMP))
- mode = SPECTRE_V2_USER_SECCOMP;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_SECCOMP;
else
- mode = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
}
- /* Initialize Indirect Branch Prediction Barrier */
- if (boot_cpu_has(X86_FEATURE_IBPB)) {
- static_branch_enable(&switch_vcpu_ibpb);
+ /*
+ * At this point, an STIBP mode other than "off" has been set.
+ * If STIBP support is not being forced, check if STIBP always-on
+ * is preferred.
+ */
+ if ((spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
+ spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) &&
+ boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
- spectre_v2_user_ibpb = mode;
- switch (cmd) {
- case SPECTRE_V2_USER_CMD_NONE:
- break;
- case SPECTRE_V2_USER_CMD_FORCE:
- case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
- case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
- static_branch_enable(&switch_mm_always_ibpb);
- spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
- break;
- case SPECTRE_V2_USER_CMD_PRCTL:
- case SPECTRE_V2_USER_CMD_AUTO:
- case SPECTRE_V2_USER_CMD_SECCOMP:
- static_branch_enable(&switch_mm_cond_ibpb);
- break;
- }
+ if (!boot_cpu_has(X86_FEATURE_IBPB))
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_NONE;
- pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
- static_key_enabled(&switch_mm_always_ibpb) ?
- "always-on" : "conditional");
+ if (!boot_cpu_has(X86_FEATURE_STIBP))
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
+}
+
+static void __init spectre_v2_user_update_mitigation(void)
+{
+ if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
+ return;
+
+ /* The spectre_v2 cmd line can override spectre_v2_user options */
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_NONE) {
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_NONE;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
+ } else if (spectre_v2_cmd == SPECTRE_V2_CMD_FORCE) {
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
}
/*
@@ -1531,30 +1537,44 @@ spectre_v2_user_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_STIBP) ||
!cpu_smt_possible() ||
(spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
- !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
+ !boot_cpu_has(X86_FEATURE_AUTOIBRS))) {
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
return;
+ }
- /*
- * At this point, an STIBP mode other than "off" has been set.
- * If STIBP support is not being forced, check if STIBP always-on
- * is preferred.
- */
- if (mode != SPECTRE_V2_USER_STRICT &&
- boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
- mode = SPECTRE_V2_USER_STRICT_PREFERRED;
-
- if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
- retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
- if (mode != SPECTRE_V2_USER_STRICT &&
- mode != SPECTRE_V2_USER_STRICT_PREFERRED)
+ if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
+ (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
+ retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
+ if (spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+ spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT_PREFERRED)
pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
- mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
}
+ pr_info("%s\n", spectre_v2_user_strings[spectre_v2_user_stibp]);
+}
- spectre_v2_user_stibp = mode;
+static void __init spectre_v2_user_apply_mitigation(void)
+{
+ /* Initialize Indirect Branch Prediction Barrier */
+ if (spectre_v2_user_ibpb != SPECTRE_V2_USER_NONE) {
+ static_branch_enable(&switch_vcpu_ibpb);
+
+ switch (spectre_v2_user_ibpb) {
+ case SPECTRE_V2_USER_STRICT:
+ static_branch_enable(&switch_mm_always_ibpb);
+ break;
+ case SPECTRE_V2_USER_PRCTL:
+ case SPECTRE_V2_USER_SECCOMP:
+ static_branch_enable(&switch_mm_cond_ibpb);
+ break;
+ default:
+ break;
+ }
-set_mode:
- pr_info("%s\n", spectre_v2_user_strings[mode]);
+ pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+ static_key_enabled(&switch_mm_always_ibpb) ?
+ "always-on" : "conditional");
+ }
}
static const char * const spectre_v2_strings[] = {
* [tip: x86/bugs] x86/bugs: Allow retbleed=stuff only on Intel
2025-04-18 16:17 ` [PATCH v5 09/16] x86/bugs: Allow retbleed=stuff only on Intel David Kaplan
2025-04-27 15:38 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 83d4b19331f3a5d5829d338a0a64b69c9c28b36e
Gitweb: https://git.kernel.org/tip/83d4b19331f3a5d5829d338a0a64b69c9c28b36e
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:14 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 19:55:50 +02:00
x86/bugs: Allow retbleed=stuff only on Intel
The retbleed=stuff mitigation is only applicable for Intel CPUs affected
by retbleed. If this option is selected for another vendor, print a
warning and fall back to the AUTO option.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-10-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 1a42abb..7edf429 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1191,6 +1191,10 @@ static void __init retbleed_select_mitigation(void)
case RETBLEED_CMD_STUFF:
if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
+ pr_err("WARNING: retbleed=stuff only supported for Intel CPUs.\n");
+ goto do_cmd_auto;
+ }
retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
} else {
* [tip: x86/bugs] x86/bugs: Restructure retbleed mitigation
2025-04-18 16:17 ` [PATCH v5 10/16] x86/bugs: Restructure retbleed mitigation David Kaplan
2025-04-28 18:59 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: e3b78a7ad5ea718ea1dbaeb02ba9a6aa2aee9324
Gitweb: https://git.kernel.org/tip/e3b78a7ad5ea718ea1dbaeb02ba9a6aa2aee9324
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:15 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Tue, 29 Apr 2025 10:22:08 +02:00
x86/bugs: Restructure retbleed mitigation
Restructure retbleed mitigation to use select/update/apply functions to create
consistent vulnerability handling. The new retbleed_update_mitigation()
function simplifies the handling of the dependency between spectre_v2 and
retbleed.
The command line options now directly select a preferred mitigation
which simplifies the logic.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-11-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 195 ++++++++++++++++++------------------
1 file changed, 98 insertions(+), 97 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7edf429..207a472 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -57,6 +57,8 @@ static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
+static void __init retbleed_update_mitigation(void);
+static void __init retbleed_apply_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
@@ -187,11 +189,6 @@ void __init cpu_select_mitigations(void)
/* Select the proper CPU mitigations before patching alternatives: */
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
- /*
- * retbleed_select_mitigation() relies on the state set by
- * spectre_v2_select_mitigation(); specifically it wants to know about
- * spectre_v2=ibrs.
- */
retbleed_select_mitigation();
/*
* spectre_v2_user_select_mitigation() relies on the state set by
@@ -219,12 +216,14 @@ void __init cpu_select_mitigations(void)
* After mitigations are selected, some may need to update their
* choices.
*/
+ retbleed_update_mitigation();
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
rfds_update_mitigation();
spectre_v1_apply_mitigation();
+ retbleed_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1085,6 +1084,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
enum retbleed_mitigation {
RETBLEED_MITIGATION_NONE,
+ RETBLEED_MITIGATION_AUTO,
RETBLEED_MITIGATION_UNRET,
RETBLEED_MITIGATION_IBPB,
RETBLEED_MITIGATION_IBRS,
@@ -1092,14 +1092,6 @@ enum retbleed_mitigation {
RETBLEED_MITIGATION_STUFF,
};
-enum retbleed_mitigation_cmd {
- RETBLEED_CMD_OFF,
- RETBLEED_CMD_AUTO,
- RETBLEED_CMD_UNRET,
- RETBLEED_CMD_IBPB,
- RETBLEED_CMD_STUFF,
-};
-
static const char * const retbleed_strings[] = {
[RETBLEED_MITIGATION_NONE] = "Vulnerable",
[RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
@@ -1110,9 +1102,7 @@ static const char * const retbleed_strings[] = {
};
static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
- RETBLEED_MITIGATION_NONE;
-static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_CMD_AUTO : RETBLEED_CMD_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_MITIGATION_AUTO : RETBLEED_MITIGATION_NONE;
static int __ro_after_init retbleed_nosmt = false;
@@ -1129,15 +1119,15 @@ static int __init retbleed_parse_cmdline(char *str)
}
if (!strcmp(str, "off")) {
- retbleed_cmd = RETBLEED_CMD_OFF;
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
} else if (!strcmp(str, "auto")) {
- retbleed_cmd = RETBLEED_CMD_AUTO;
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
} else if (!strcmp(str, "unret")) {
- retbleed_cmd = RETBLEED_CMD_UNRET;
+ retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
} else if (!strcmp(str, "ibpb")) {
- retbleed_cmd = RETBLEED_CMD_IBPB;
+ retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
} else if (!strcmp(str, "stuff")) {
- retbleed_cmd = RETBLEED_CMD_STUFF;
+ retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
} else if (!strcmp(str, "nosmt")) {
retbleed_nosmt = true;
} else if (!strcmp(str, "force")) {
@@ -1158,76 +1148,109 @@ early_param("retbleed", retbleed_parse_cmdline);
static void __init retbleed_select_mitigation(void)
{
- bool mitigate_smt = false;
-
- if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
- return;
-
- switch (retbleed_cmd) {
- case RETBLEED_CMD_OFF:
+ if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) {
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
return;
+ }
- case RETBLEED_CMD_UNRET:
- if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
- retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
- } else {
+ switch (retbleed_mitigation) {
+ case RETBLEED_MITIGATION_UNRET:
+ if (!IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
- goto do_cmd_auto;
}
break;
-
- case RETBLEED_CMD_IBPB:
+ case RETBLEED_MITIGATION_IBPB:
if (!boot_cpu_has(X86_FEATURE_IBPB)) {
pr_err("WARNING: CPU does not support IBPB.\n");
- goto do_cmd_auto;
- } else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
- } else {
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ } else if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
- goto do_cmd_auto;
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ }
+ break;
+ case RETBLEED_MITIGATION_STUFF:
+ if (!IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING)) {
+ pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ } else if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
+ pr_err("WARNING: retbleed=stuff only supported for Intel CPUs.\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
}
break;
+ default:
+ break;
+ }
- case RETBLEED_CMD_STUFF:
- if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
- spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) {
- pr_err("WARNING: retbleed=stuff only supported for Intel CPUs.\n");
- goto do_cmd_auto;
- }
- retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
+ if (retbleed_mitigation != RETBLEED_MITIGATION_AUTO)
+ return;
- } else {
- if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING))
- pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
- else
- pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
+ /* Intel mitigation selected in retbleed_update_mitigation() */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+ if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
+ retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+ else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
+ boot_cpu_has(X86_FEATURE_IBPB))
+ retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+ else
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+ }
+}
- goto do_cmd_auto;
- }
- break;
+static void __init retbleed_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
+ return;
+
+ if (retbleed_mitigation == RETBLEED_MITIGATION_NONE)
+ goto out;
-do_cmd_auto:
- case RETBLEED_CMD_AUTO:
- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
- if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
- retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
- else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
- boot_cpu_has(X86_FEATURE_IBPB))
- retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+ /*
+ * retbleed=stuff is only allowed on Intel. If stuffing can't be used
+ * then a different mitigation will be selected below.
+ */
+ if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF) {
+ if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
+ pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
}
+ }
+ /*
+ * Let IBRS trump all on Intel without affecting the effects of the
+ * retbleed= cmdline option except for call depth based stuffing
+ */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+ switch (spectre_v2_enabled) {
+ case SPECTRE_V2_IBRS:
+ retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
+ break;
+ case SPECTRE_V2_EIBRS:
+ case SPECTRE_V2_EIBRS_RETPOLINE:
+ case SPECTRE_V2_EIBRS_LFENCE:
+ retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
+ break;
+ default:
+ if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+ pr_err(RETBLEED_INTEL_MSG);
+ }
+ /* If nothing has set the mitigation yet, default to NONE. */
+ if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO)
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+ }
+out:
+ pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+}
- /*
- * The Intel mitigation (IBRS or eIBRS) was already selected in
- * spectre_v2_select_mitigation(). 'retbleed_mitigation' will
- * be set accordingly below.
- */
- break;
- }
+static void __init retbleed_apply_mitigation(void)
+{
+ bool mitigate_smt = false;
switch (retbleed_mitigation) {
+ case RETBLEED_MITIGATION_NONE:
+ return;
+
case RETBLEED_MITIGATION_UNRET:
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
setup_force_cpu_cap(X86_FEATURE_UNRET);
@@ -1277,28 +1300,6 @@ do_cmd_auto:
if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) &&
(retbleed_nosmt || cpu_mitigations_auto_nosmt()))
cpu_smt_disable(false);
-
- /*
- * Let IBRS trump all on Intel without affecting the effects of the
- * retbleed= cmdline option except for call depth based stuffing
- */
- if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
- switch (spectre_v2_enabled) {
- case SPECTRE_V2_IBRS:
- retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
- break;
- case SPECTRE_V2_EIBRS:
- case SPECTRE_V2_EIBRS_RETPOLINE:
- case SPECTRE_V2_EIBRS_LFENCE:
- retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
- break;
- default:
- if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
- pr_err(RETBLEED_INTEL_MSG);
- }
- }
-
- pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
}
#undef pr_fmt
@@ -1855,8 +1856,8 @@ static void __init spectre_v2_select_mitigation(void)
if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
boot_cpu_has_bug(X86_BUG_RETBLEED) &&
- retbleed_cmd != RETBLEED_CMD_OFF &&
- retbleed_cmd != RETBLEED_CMD_STUFF &&
+ retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
+ retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
boot_cpu_has(X86_FEATURE_IBRS) &&
boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
mode = SPECTRE_V2_IBRS;
@@ -1964,7 +1965,7 @@ static void __init spectre_v2_select_mitigation(void)
(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
- if (retbleed_cmd != RETBLEED_CMD_IBPB) {
+ if (retbleed_mitigation != RETBLEED_MITIGATION_IBPB) {
setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
pr_info("Enabling Speculation Barrier for firmware calls\n");
}
^ permalink raw reply related [flat|nested] 65+ messages in thread
* [tip: x86/bugs] x86/bugs: Restructure spectre_v1 mitigation
2025-04-18 16:17 ` [PATCH v5 08/16] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 46d5925b8eb8c7a8d634147c23db24669cfc2f76
Gitweb: https://git.kernel.org/tip/46d5925b8eb8c7a8d634147c23db24669cfc2f76
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:13 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 19:40:10 +02:00
x86/bugs: Restructure spectre_v1 mitigation
Restructure spectre_v1 to use select/apply functions to create
consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-9-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b070ad7..1a42abb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,7 @@
*/
static void __init spectre_v1_select_mitigation(void);
+static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
@@ -223,6 +224,7 @@ void __init cpu_select_mitigations(void)
mmio_update_mitigation();
rfds_update_mitigation();
+ spectre_v1_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1021,10 +1023,14 @@ static bool smap_works_speculatively(void)
static void __init spectre_v1_select_mitigation(void)
{
- if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+}
+
+static void __init spectre_v1_apply_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
return;
- }
if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
/*
* [tip: x86/bugs] x86/bugs: Restructure GDS mitigation
2025-04-18 16:17 ` [PATCH v5 07/16] x86/bugs: Restructure GDS mitigation David Kaplan
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 9dcad2fb31bd9b9b860c515859625a065dd6e656
Gitweb: https://git.kernel.org/tip/9dcad2fb31bd9b9b860c515859625a065dd6e656
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:12 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 15:19:30 +02:00
x86/bugs: Restructure GDS mitigation
Restructure GDS mitigation to use select/apply functions to create
consistent vulnerability handling.
Define new AUTO mitigation for GDS.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-8-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 43 ++++++++++++++++++++++++-------------
1 file changed, 29 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 25b74a7..b070ad7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -76,6 +76,7 @@ static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
+static void __init gds_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@@ -227,6 +228,7 @@ void __init cpu_select_mitigations(void)
mmio_apply_mitigation();
rfds_apply_mitigation();
srbds_apply_mitigation();
+ gds_apply_mitigation();
}
/*
@@ -831,6 +833,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
enum gds_mitigations {
GDS_MITIGATION_OFF,
+ GDS_MITIGATION_AUTO,
GDS_MITIGATION_UCODE_NEEDED,
GDS_MITIGATION_FORCE,
GDS_MITIGATION_FULL,
@@ -839,7 +842,7 @@ enum gds_mitigations {
};
static enum gds_mitigations gds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL : GDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_AUTO : GDS_MITIGATION_OFF;
static const char * const gds_strings[] = {
[GDS_MITIGATION_OFF] = "Vulnerable",
@@ -880,6 +883,7 @@ void update_gds_msr(void)
case GDS_MITIGATION_FORCE:
case GDS_MITIGATION_UCODE_NEEDED:
case GDS_MITIGATION_HYPERVISOR:
+ case GDS_MITIGATION_AUTO:
return;
}
@@ -903,26 +907,21 @@ static void __init gds_select_mitigation(void)
if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
gds_mitigation = GDS_MITIGATION_HYPERVISOR;
- goto out;
+ return;
}
if (cpu_mitigations_off())
gds_mitigation = GDS_MITIGATION_OFF;
/* Will verify below that mitigation _can_ be disabled */
+ if (gds_mitigation == GDS_MITIGATION_AUTO)
+ gds_mitigation = GDS_MITIGATION_FULL;
+
/* No microcode */
if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
- if (gds_mitigation == GDS_MITIGATION_FORCE) {
- /*
- * This only needs to be done on the boot CPU so do it
- * here rather than in update_gds_msr()
- */
- setup_clear_cpu_cap(X86_FEATURE_AVX);
- pr_warn("Microcode update needed! Disabling AVX as mitigation.\n");
- } else {
+ if (gds_mitigation != GDS_MITIGATION_FORCE)
gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
- }
- goto out;
+ return;
}
/* Microcode has mitigation, use it */
@@ -943,9 +942,25 @@ static void __init gds_select_mitigation(void)
*/
gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
}
+}
+
+static void __init gds_apply_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_GDS))
+ return;
+
+ /* Microcode is present */
+ if (x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)
+ update_gds_msr();
+ else if (gds_mitigation == GDS_MITIGATION_FORCE) {
+ /*
+ * This only needs to be done on the boot CPU so do it
+ * here rather than in update_gds_msr()
+ */
+ setup_clear_cpu_cap(X86_FEATURE_AVX);
+ pr_warn("Microcode update needed! Disabling AVX as mitigation.\n");
+ }
- update_gds_msr();
-out:
pr_info("%s\n", gds_strings[gds_mitigation]);
}
* [tip: x86/bugs] x86/bugs: Remove md_clear_*_mitigation()
2025-04-18 16:17 ` [PATCH v5 05/16] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 6f0960a760eb926d8a2b9fe6fc7a1086cba14dd1
Gitweb: https://git.kernel.org/tip/6f0960a760eb926d8a2b9fe6fc7a1086cba14dd1
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Sun, 27 Apr 2025 17:12:41 +02:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 14:50:33 +02:00
x86/bugs: Remove md_clear_*_mitigation()
The functionality in md_clear_update_mitigation() and
md_clear_select_mitigation() is now integrated into the select/update
functions for the MDS, TAA, MMIO, and RFDS vulnerabilities.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-6-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 65 +-------------------------------------
1 file changed, 65 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2705105..98476b8 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -62,8 +62,6 @@ static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
static void __init mds_apply_mitigation(void);
-static void __init md_clear_update_mitigation(void);
-static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
static void __init taa_update_mitigation(void);
static void __init taa_apply_mitigation(void);
@@ -204,7 +202,6 @@ void __init cpu_select_mitigations(void)
taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
- md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -692,68 +689,6 @@ static __init int rfds_parse_cmdline(char *str)
early_param("reg_file_data_sampling", rfds_parse_cmdline);
#undef pr_fmt
-#define pr_fmt(fmt) "" fmt
-
-static void __init md_clear_update_mitigation(void)
-{
- if (cpu_mitigations_off())
- return;
-
- if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
- goto out;
-
- /*
- * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
- * Stale Data mitigation, if necessary.
- */
- if (mds_mitigation == MDS_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_MDS)) {
- mds_mitigation = MDS_MITIGATION_FULL;
- mds_select_mitigation();
- }
- if (taa_mitigation == TAA_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_TAA)) {
- taa_mitigation = TAA_MITIGATION_VERW;
- taa_select_mitigation();
- }
- /*
- * MMIO_MITIGATION_OFF is not checked here so that cpu_buf_vm_clear
- * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
- */
- if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
- mmio_mitigation = MMIO_MITIGATION_VERW;
- mmio_select_mitigation();
- }
- if (rfds_mitigation == RFDS_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_RFDS)) {
- rfds_mitigation = RFDS_MITIGATION_VERW;
- rfds_select_mitigation();
- }
-out:
- if (boot_cpu_has_bug(X86_BUG_MDS))
- pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
- if (boot_cpu_has_bug(X86_BUG_TAA))
- pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
- if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
- pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
- else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
- pr_info("MMIO Stale Data: Unknown: No mitigations\n");
- if (boot_cpu_has_bug(X86_BUG_RFDS))
- pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
-}
-
-static void __init md_clear_select_mitigation(void)
-{
-
- /*
- * As these mitigations are inter-related and rely on VERW instruction
- * to clear the microarchitural buffers, update and print their status
- * after mitigation selection is done for each of these vulnerabilities.
- */
- md_clear_update_mitigation();
-}
-
-#undef pr_fmt
#define pr_fmt(fmt) "SRBDS: " fmt
enum srbds_mitigations {
* [tip: x86/bugs] x86/bugs: Restructure SRBDS mitigation
2025-04-18 16:17 ` [PATCH v5 06/16] x86/bugs: Restructure SRBDS mitigation David Kaplan
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 2178ac58e176d8e1e4529b02647f5e549bb88405
Gitweb: https://git.kernel.org/tip/2178ac58e176d8e1e4529b02647f5e549bb88405
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:11 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 15:05:41 +02:00
x86/bugs: Restructure SRBDS mitigation
Restructure SRBDS to use select/apply functions to create consistent
vulnerability handling.
Define new AUTO mitigation for SRBDS.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-7-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 98476b8..25b74a7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -72,6 +72,7 @@ static void __init rfds_select_mitigation(void);
static void __init rfds_update_mitigation(void);
static void __init rfds_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
+static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
@@ -225,6 +226,7 @@ void __init cpu_select_mitigations(void)
taa_apply_mitigation();
mmio_apply_mitigation();
rfds_apply_mitigation();
+ srbds_apply_mitigation();
}
/*
@@ -693,6 +695,7 @@ early_param("reg_file_data_sampling", rfds_parse_cmdline);
enum srbds_mitigations {
SRBDS_MITIGATION_OFF,
+ SRBDS_MITIGATION_AUTO,
SRBDS_MITIGATION_UCODE_NEEDED,
SRBDS_MITIGATION_FULL,
SRBDS_MITIGATION_TSX_OFF,
@@ -700,7 +703,7 @@ enum srbds_mitigations {
};
static enum srbds_mitigations srbds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_FULL : SRBDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_AUTO : SRBDS_MITIGATION_OFF;
static const char * const srbds_strings[] = {
[SRBDS_MITIGATION_OFF] = "Vulnerable",
@@ -751,8 +754,13 @@ void update_srbds_msr(void)
static void __init srbds_select_mitigation(void)
{
- if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+ if (!boot_cpu_has_bug(X86_BUG_SRBDS) || cpu_mitigations_off()) {
+ srbds_mitigation = SRBDS_MITIGATION_OFF;
return;
+ }
+
+ if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
+ srbds_mitigation = SRBDS_MITIGATION_FULL;
/*
* Check to see if this is one of the MDS_NO systems supporting TSX that
@@ -766,13 +774,17 @@ static void __init srbds_select_mitigation(void)
srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
- else if (cpu_mitigations_off() || srbds_off)
+ else if (srbds_off)
srbds_mitigation = SRBDS_MITIGATION_OFF;
- update_srbds_msr();
pr_info("%s\n", srbds_strings[srbds_mitigation]);
}
+static void __init srbds_apply_mitigation(void)
+{
+ update_srbds_msr();
+}
+
static int __init srbds_parse_cmdline(char *str)
{
if (!str)
* [tip: x86/bugs] x86/bugs: Restructure RFDS mitigation
2025-04-18 16:17 ` [PATCH v5 04/16] x86/bugs: Restructure RFDS mitigation David Kaplan
2025-04-27 15:09 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 203d81f8e167a9e82747a14dace40e0abbd5c791
Gitweb: https://git.kernel.org/tip/203d81f8e167a9e82747a14dace40e0abbd5c791
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:09 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 13:46:11 +02:00
x86/bugs: Restructure RFDS mitigation
Restructure RFDS mitigation to use select/update/apply functions to
create consistent vulnerability handling.
[ bp: Rename the oneline helper to what it checks. ]
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-5-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 41 ++++++++++++++++++++++++++++++++-----
1 file changed, 36 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bc74c22..2705105 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -70,6 +70,9 @@ static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init mmio_update_mitigation(void);
static void __init mmio_apply_mitigation(void);
+static void __init rfds_select_mitigation(void);
+static void __init rfds_update_mitigation(void);
+static void __init rfds_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
@@ -200,6 +203,7 @@ void __init cpu_select_mitigations(void)
mds_select_mitigation();
taa_select_mitigation();
mmio_select_mitigation();
+ rfds_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -218,10 +222,12 @@ void __init cpu_select_mitigations(void)
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
+ rfds_update_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
+ rfds_apply_mitigation();
}
/*
@@ -624,22 +630,48 @@ static const char * const rfds_strings[] = {
[RFDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
};
+static inline bool __init verw_clears_cpu_reg_file(void)
+{
+ return (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR);
+}
+
static void __init rfds_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off()) {
rfds_mitigation = RFDS_MITIGATION_OFF;
return;
}
+
+ if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+ rfds_mitigation = RFDS_MITIGATION_VERW;
+
if (rfds_mitigation == RFDS_MITIGATION_OFF)
return;
- if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+ if (verw_clears_cpu_reg_file())
+ verw_clear_cpu_buf_mitigation_selected = true;
+}
+
+static void __init rfds_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off())
+ return;
+
+ if (verw_clear_cpu_buf_mitigation_selected)
rfds_mitigation = RFDS_MITIGATION_VERW;
- if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
+ if (rfds_mitigation == RFDS_MITIGATION_VERW) {
+ if (!verw_clears_cpu_reg_file())
+ rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
+ }
+
+ pr_info("%s\n", rfds_strings[rfds_mitigation]);
+}
+
+static void __init rfds_apply_mitigation(void)
+{
+ if (rfds_mitigation == RFDS_MITIGATION_VERW)
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
- else
- rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
}
static __init int rfds_parse_cmdline(char *str)
@@ -712,7 +744,6 @@ out:
static void __init md_clear_select_mitigation(void)
{
- rfds_select_mitigation();
/*
* As these mitigations are inter-related and rely on VERW instruction
* [tip: x86/bugs] x86/bugs: Restructure MMIO mitigation
2025-04-18 16:17 ` [PATCH v5 03/16] x86/bugs: Restructure MMIO mitigation David Kaplan
2025-04-24 20:19 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 4a5a04e61d7f8f26472f93287f6dcb669f0cf22f
Gitweb: https://git.kernel.org/tip/4a5a04e61d7f8f26472f93287f6dcb669f0cf22f
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:08 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 13:22:24 +02:00
x86/bugs: Restructure MMIO mitigation
Restructure MMIO mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-4-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 74 +++++++++++++++++++++++++------------
1 file changed, 50 insertions(+), 24 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5db21d2..bc74c22 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -68,6 +68,8 @@ static void __init taa_select_mitigation(void);
static void __init taa_update_mitigation(void);
static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
+static void __init mmio_update_mitigation(void);
+static void __init mmio_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
@@ -197,6 +199,7 @@ void __init cpu_select_mitigations(void)
l1tf_select_mitigation();
mds_select_mitigation();
taa_select_mitigation();
+ mmio_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -214,9 +217,11 @@ void __init cpu_select_mitigations(void)
*/
mds_update_mitigation();
taa_update_mitigation();
+ mmio_update_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
+ mmio_apply_mitigation();
}
/*
@@ -520,25 +525,62 @@ static void __init mmio_select_mitigation(void)
return;
}
+ /* Microcode will be checked in mmio_update_mitigation(). */
+ if (mmio_mitigation == MMIO_MITIGATION_AUTO)
+ mmio_mitigation = MMIO_MITIGATION_VERW;
+
if (mmio_mitigation == MMIO_MITIGATION_OFF)
return;
/*
* Enable CPU buffer clear mitigation for host and VMM, if also affected
- * by MDS or TAA. Otherwise, enable mitigation for VMM only.
+ * by MDS or TAA.
*/
- if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
- boot_cpu_has(X86_FEATURE_RTM)))
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ if (boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable())
+ verw_clear_cpu_buf_mitigation_selected = true;
+}
+
+static void __init mmio_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) || cpu_mitigations_off())
+ return;
+
+ if (verw_clear_cpu_buf_mitigation_selected)
+ mmio_mitigation = MMIO_MITIGATION_VERW;
+
+ if (mmio_mitigation == MMIO_MITIGATION_VERW) {
+ /*
+ * Check if the system has the right microcode.
+ *
+ * CPU Fill buffer clear mitigation is enumerated by either an explicit
+ * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+ * affected systems.
+ */
+ if (!((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
+ (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+ boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+ !(x86_arch_cap_msr & ARCH_CAP_MDS_NO))))
+ mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+ }
+
+ pr_info("%s\n", mmio_strings[mmio_mitigation]);
+}
+
+static void __init mmio_apply_mitigation(void)
+{
+ if (mmio_mitigation == MMIO_MITIGATION_OFF)
+ return;
/*
- * X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
- * mitigations, disable KVM-only mitigation in that case.
+ * Only enable the VMM mitigation if the CPU buffer clear mitigation is
+ * not being used.
*/
- if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
+ if (verw_clear_cpu_buf_mitigation_selected) {
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
static_branch_disable(&cpu_buf_vm_clear);
- else
+ } else {
static_branch_enable(&cpu_buf_vm_clear);
+ }
/*
* If Processor-MMIO-Stale-Data bug is present and Fill Buffer data can
@@ -548,21 +590,6 @@ static void __init mmio_select_mitigation(void)
if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
static_branch_enable(&mds_idle_clear);
- /*
- * Check if the system has the right microcode.
- *
- * CPU Fill buffer clear mitigation is enumerated by either an explicit
- * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
- * affected systems.
- */
- if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
- (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
- boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
- !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
- mmio_mitigation = MMIO_MITIGATION_VERW;
- else
- mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
-
if (mmio_nosmt || cpu_mitigations_auto_nosmt())
cpu_smt_disable(false);
}
@@ -685,7 +712,6 @@ out:
static void __init md_clear_select_mitigation(void)
{
- mmio_select_mitigation();
rfds_select_mitigation();
/*
* [tip: x86/bugs] x86/bugs: Restructure TAA mitigation
2025-04-18 16:17 ` [PATCH v5 02/16] x86/bugs: Restructure TAA mitigation David Kaplan
2025-04-19 12:36 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: bdd7fce7a8168cebd400926d6352d2fbc1ac9f79
Gitweb: https://git.kernel.org/tip/bdd7fce7a8168cebd400926d6352d2fbc1ac9f79
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:07 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 13:02:04 +02:00
x86/bugs: Restructure TAA mitigation
Restructure TAA mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-3-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 94 +++++++++++++++++++++++--------------
1 file changed, 59 insertions(+), 35 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f697e6b..5db21d2 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,8 @@ static void __init mds_apply_mitigation(void);
static void __init md_clear_update_mitigation(void);
static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
+static void __init taa_update_mitigation(void);
+static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
@@ -194,6 +196,7 @@ void __init cpu_select_mitigations(void)
ssb_select_mitigation();
l1tf_select_mitigation();
mds_select_mitigation();
+ taa_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -210,8 +213,10 @@ void __init cpu_select_mitigations(void)
* choices.
*/
mds_update_mitigation();
+ taa_update_mitigation();
mds_apply_mitigation();
+ taa_apply_mitigation();
}
/*
@@ -397,6 +402,11 @@ static const char * const taa_strings[] = {
[TAA_MITIGATION_TSX_DISABLED] = "Mitigation: TSX disabled",
};
+static bool __init taa_vulnerable(void)
+{
+ return boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM);
+}
+
static void __init taa_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_TAA)) {
@@ -410,48 +420,63 @@ static void __init taa_select_mitigation(void)
return;
}
- if (cpu_mitigations_off()) {
+ if (cpu_mitigations_off())
taa_mitigation = TAA_MITIGATION_OFF;
- return;
- }
- /*
- * TAA mitigation via VERW is turned off if both
- * tsx_async_abort=off and mds=off are specified.
- */
- if (taa_mitigation == TAA_MITIGATION_OFF &&
- mds_mitigation == MDS_MITIGATION_OFF)
+ /* Microcode will be checked in taa_update_mitigation(). */
+ if (taa_mitigation == TAA_MITIGATION_AUTO)
+ taa_mitigation = TAA_MITIGATION_VERW;
+
+ if (taa_mitigation != TAA_MITIGATION_OFF)
+ verw_clear_cpu_buf_mitigation_selected = true;
+}
+
+static void __init taa_update_mitigation(void)
+{
+ if (!taa_vulnerable() || cpu_mitigations_off())
return;
- if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ if (verw_clear_cpu_buf_mitigation_selected)
taa_mitigation = TAA_MITIGATION_VERW;
- else
- taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
- /*
- * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
- * A microcode update fixes this behavior to clear CPU buffers. It also
- * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
- * ARCH_CAP_TSX_CTRL_MSR bit.
- *
- * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
- * update is required.
- */
- if ( (x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
- !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
- taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+ if (taa_mitigation == TAA_MITIGATION_VERW) {
+ /* Check if the requisite ucode is available. */
+ if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
- /*
- * TSX is enabled, select alternate mitigation for TAA which is
- * the same as MDS. Enable MDS static branch to clear CPU buffers.
- *
- * For guests that can't determine whether the correct microcode is
- * present on host, enable the mitigation for UCODE_NEEDED as well.
- */
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ /*
+ * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+ * A microcode update fixes this behavior to clear CPU buffers. It also
+ * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
+ * ARCH_CAP_TSX_CTRL_MSR bit.
+ *
+ * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+ * update is required.
+ */
+ if ((x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
+ !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+ }
- if (taa_nosmt || cpu_mitigations_auto_nosmt())
- cpu_smt_disable(false);
+ pr_info("%s\n", taa_strings[taa_mitigation]);
+}
+
+static void __init taa_apply_mitigation(void)
+{
+ if (taa_mitigation == TAA_MITIGATION_VERW ||
+ taa_mitigation == TAA_MITIGATION_UCODE_NEEDED) {
+ /*
+ * TSX is enabled, select alternate mitigation for TAA which is
+ * the same as MDS. Enable MDS static branch to clear CPU buffers.
+ *
+ * For guests that can't determine whether the correct microcode is
+ * present on host, enable the mitigation for UCODE_NEEDED as well.
+ */
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+
+ if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ cpu_smt_disable(false);
+ }
}
static int __init tsx_async_abort_parse_cmdline(char *str)
@@ -660,7 +685,6 @@ out:
static void __init md_clear_select_mitigation(void)
{
- taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
* [tip: x86/bugs] x86/bugs: Restructure MDS mitigation
2025-04-18 16:17 ` [PATCH v5 01/16] x86/bugs: Restructure MDS mitigation David Kaplan
2025-04-18 20:42 ` Borislav Petkov
@ 2025-05-02 10:33 ` tip-bot2 for David Kaplan
1 sibling, 0 replies; 65+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-05-02 10:33 UTC (permalink / raw)
To: linux-tip-commits
Cc: David Kaplan, Borislav Petkov (AMD), Josh Poimboeuf, x86,
linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 559c758bc722ca0630d2b7f433f490cb76fe6128
Gitweb: https://git.kernel.org/tip/559c758bc722ca0630d2b7f433f490cb76fe6128
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Fri, 18 Apr 2025 11:17:06 -05:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Mon, 28 Apr 2025 12:53:33 +02:00
x86/bugs: Restructure MDS mitigation
Restructure MDS mitigation selection to use select/update/apply
functions to create consistent vulnerability handling.
[ bp: rename and beef up comment over VERW mitigation selected var for
maximum clarity. ]
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lore.kernel.org/20250418161721.1855190-2-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 61 +++++++++++++++++++++++++++++++++++--
1 file changed, 59 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9131e61..f697e6b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -34,6 +34,25 @@
#include "cpu.h"
+/*
+ * Speculation Vulnerability Handling
+ *
+ * Each vulnerability is handled with the following functions:
+ * <vuln>_select_mitigation() -- Selects a mitigation to use. This should
+ * take into account all relevant command line
+ * options.
+ * <vuln>_update_mitigation() -- This is called after all vulnerabilities have
+ * selected a mitigation, in case the selection
+ * may want to change based on other choices
+ * made. This function is optional.
+ * <vuln>_apply_mitigation() -- Enable the selected mitigation.
+ *
+ * The compile-time mitigation in all cases should be AUTO. An explicit
+ * command-line option can override AUTO. If no such option is
+ * provided, <vuln>_select_mitigation() will override AUTO to the best
+ * mitigation option.
+ */
+
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
@@ -41,6 +60,8 @@ static void __init spectre_v2_user_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
+static void __init mds_update_mitigation(void);
+static void __init mds_apply_mitigation(void);
static void __init md_clear_update_mitigation(void);
static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
@@ -172,6 +193,7 @@ void __init cpu_select_mitigations(void)
spectre_v2_user_select_mitigation();
ssb_select_mitigation();
l1tf_select_mitigation();
+ mds_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -182,6 +204,14 @@ void __init cpu_select_mitigations(void)
*/
srso_select_mitigation();
gds_select_mitigation();
+
+ /*
+ * After mitigations are selected, some may need to update their
+ * choices.
+ */
+ mds_update_mitigation();
+
+ mds_apply_mitigation();
}
/*
@@ -284,6 +314,12 @@ enum rfds_mitigations {
static enum rfds_mitigations rfds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
+/*
+ * Set if any of MDS/TAA/MMIO/RFDS are going to enable VERW clearing
+ * through X86_FEATURE_CLEAR_CPU_BUF on kernel and guest entry.
+ */
+static bool verw_clear_cpu_buf_mitigation_selected __ro_after_init;
+
static void __init mds_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
@@ -294,12 +330,34 @@ static void __init mds_select_mitigation(void)
if (mds_mitigation == MDS_MITIGATION_AUTO)
mds_mitigation = MDS_MITIGATION_FULL;
+ if (mds_mitigation == MDS_MITIGATION_OFF)
+ return;
+
+ verw_clear_cpu_buf_mitigation_selected = true;
+}
+
+static void __init mds_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
+ return;
+
+ /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
+ if (verw_clear_cpu_buf_mitigation_selected)
+ mds_mitigation = MDS_MITIGATION_FULL;
+
if (mds_mitigation == MDS_MITIGATION_FULL) {
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;
+ }
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+ pr_info("%s\n", mds_strings[mds_mitigation]);
+}
+static void __init mds_apply_mitigation(void)
+{
+ if (mds_mitigation == MDS_MITIGATION_FULL ||
+ mds_mitigation == MDS_MITIGATION_VMWERV) {
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
(mds_nosmt || cpu_mitigations_auto_nosmt()))
cpu_smt_disable(false);
@@ -602,7 +660,6 @@ out:
static void __init md_clear_select_mitigation(void)
{
- mds_select_mitigation();
taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();