* [PATCH v3 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-28 11:53 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-01-08 20:24 ` [PATCH v3 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
` (34 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
All CPU vulnerabilities with command line options map to a single
X86_BUG bit except for Spectre V2 where both the spectre_v2 and
spectre_v2_user command line options are related to the same bug. The
spectre_v2 command line options mostly relate to user->kernel and
guest->host mitigations, while the spectre_v2_user command line options
relate to user->user or guest->guest protections.
Define a new X86_BUG bit for spectre_v2_user so each
*_select_mitigation() function in bugs.c is related to a unique X86_BUG
bit.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/common.c | 4 +++-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 508c0dad116b..f77073507647 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -534,4 +534,5 @@
#define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
#define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
#define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
+#define X86_BUG_SPECTRE_V2_USER X86_BUG(1*32 + 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
#endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 7cce91b19fb2..1e80d76dc9c1 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1331,8 +1331,10 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
- if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
+ if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2)) {
setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V2_USER);
+ }
if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
!(x86_arch_cap_msr & ARCH_CAP_SSB_NO) &&
--
2.34.1
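[Editor's illustration, not part of the patch mail: the whitelist-gated bug-bit forcing in the hunk above can be modeled as a small userspace C sketch. The `bug_set[]` array, the enum values, and `set_spectre_v2_bugs()` are hypothetical stand-ins for the kernel's cpu_set_bug_bits() logic; only the shape of the change is real: a CPU not on the NO_SPECTRE_V2 whitelist now gets both bug bits forced together.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's X86_BUG bits. */
enum { BUG_SPECTRE_V2, BUG_SPECTRE_V2_USER, NR_BUGS };

static bool bug_set[NR_BUGS];

/* Models the kernel's setup_force_cpu_bug() helper. */
static void setup_force_cpu_bug(int bug)
{
	bug_set[bug] = true;
}

/*
 * Models the !cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2) check:
 * after this patch, a non-whitelisted CPU is marked affected by both
 * the kernel-facing and the user-facing Spectre v2 bug bits.
 */
static void set_spectre_v2_bugs(bool cpu_is_whitelisted)
{
	if (!cpu_is_whitelisted) {
		setup_force_cpu_bug(BUG_SPECTRE_V2);
		setup_force_cpu_bug(BUG_SPECTRE_V2_USER);
	}
}
```

The point of the separate bit is bookkeeping: each *_select_mitigation() function in bugs.c can then key off exactly one bug bit.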
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [tip: x86/bugs] x86/bugs: Add X86_BUG_SPECTRE_V2_USER
2025-01-08 20:24 ` [PATCH v3 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER David Kaplan
@ 2025-02-28 11:53 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 138+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-02-28 11:53 UTC (permalink / raw)
To: linux-tip-commits; +Cc: David Kaplan, Borislav Petkov (AMD), x86, linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 98c7a713db91c5a9a7ffc47cd85e7158e0963cb8
Gitweb: https://git.kernel.org/tip/98c7a713db91c5a9a7ffc47cd85e7158e0963cb8
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Wed, 08 Jan 2025 14:24:41 -06:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Fri, 28 Feb 2025 12:34:30 +01:00
x86/bugs: Add X86_BUG_SPECTRE_V2_USER
All CPU vulnerabilities with command line options map to a single X86_BUG bit
except for Spectre V2 where both the spectre_v2 and spectre_v2_user command
line options are related to the same bug.
The spectre_v2 command line options mostly relate to user->kernel and
guest->host mitigations, while the spectre_v2_user command line options relate
to user->user or guest->guest protections.
Define a new X86_BUG bit for spectre_v2_user so each *_select_mitigation()
function in bugs.c is related to a unique X86_BUG bit.
No functional changes.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250108202515.385902-2-david.kaplan@amd.com
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/common.c | 4 +++-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index c8701ab..0bc4203 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -537,4 +537,5 @@
#define X86_BUG_RFDS X86_BUG(1*32 + 2) /* "rfds" CPU is vulnerable to Register File Data Sampling */
#define X86_BUG_BHI X86_BUG(1*32 + 3) /* "bhi" CPU is affected by Branch History Injection */
#define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
+#define X86_BUG_SPECTRE_V2_USER X86_BUG(1*32 + 5) /* "spectre_v2_user" CPU is affected by Spectre variant 2 attack between user processes */
#endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 7cce91b..1e80d76 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1331,8 +1331,10 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
- if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
+ if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2)) {
setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V2_USER);
+ }
if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
!(x86_arch_cap_msr & ARCH_CAP_SSB_NO) &&
* [PATCH v3 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
2025-01-08 20:24 ` [PATCH v3 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-28 11:53 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-01-08 20:24 ` [PATCH v3 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
` (33 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Move the mds, taa, mmio, and rfds mitigation enums earlier in the file
to prepare for restructuring of these mitigations as they are all
inter-related.
No functional change.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 60 ++++++++++++++++++++------------------
1 file changed, 31 insertions(+), 29 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5a505aa65489..bbe4c772e557 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -243,6 +243,37 @@ static const char * const mds_strings[] = {
[MDS_MITIGATION_VMWERV] = "Vulnerable: Clear CPU buffers attempted, no microcode",
};
+enum taa_mitigations {
+ TAA_MITIGATION_OFF,
+ TAA_MITIGATION_UCODE_NEEDED,
+ TAA_MITIGATION_VERW,
+ TAA_MITIGATION_TSX_DISABLED,
+};
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+
+enum mmio_mitigations {
+ MMIO_MITIGATION_OFF,
+ MMIO_MITIGATION_UCODE_NEEDED,
+ MMIO_MITIGATION_VERW,
+};
+
+/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
+static enum mmio_mitigations mmio_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+
+enum rfds_mitigations {
+ RFDS_MITIGATION_OFF,
+ RFDS_MITIGATION_VERW,
+ RFDS_MITIGATION_UCODE_NEEDED,
+};
+
+/* Default mitigation for Register File Data Sampling */
+static enum rfds_mitigations rfds_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+
static void __init mds_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
@@ -286,16 +317,6 @@ early_param("mds", mds_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "TAA: " fmt
-enum taa_mitigations {
- TAA_MITIGATION_OFF,
- TAA_MITIGATION_UCODE_NEEDED,
- TAA_MITIGATION_VERW,
- TAA_MITIGATION_TSX_DISABLED,
-};
-
-/* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
static bool taa_nosmt __ro_after_init;
static const char * const taa_strings[] = {
@@ -386,15 +407,6 @@ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "MMIO Stale Data: " fmt
-enum mmio_mitigations {
- MMIO_MITIGATION_OFF,
- MMIO_MITIGATION_UCODE_NEEDED,
- MMIO_MITIGATION_VERW,
-};
-
-/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
-static enum mmio_mitigations mmio_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
static bool mmio_nosmt __ro_after_init = false;
static const char * const mmio_strings[] = {
@@ -483,16 +495,6 @@ early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "Register File Data Sampling: " fmt
-enum rfds_mitigations {
- RFDS_MITIGATION_OFF,
- RFDS_MITIGATION_VERW,
- RFDS_MITIGATION_UCODE_NEEDED,
-};
-
-/* Default mitigation for Register File Data Sampling */
-static enum rfds_mitigations rfds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
-
static const char * const rfds_strings[] = {
[RFDS_MITIGATION_OFF] = "Vulnerable",
[RFDS_MITIGATION_VERW] = "Mitigation: Clear Register File",
--
2.34.1
* [tip: x86/bugs] x86/bugs: Relocate mds/taa/mmio/rfds defines
2025-01-08 20:24 ` [PATCH v3 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
@ 2025-02-28 11:53 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 138+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-02-28 11:53 UTC (permalink / raw)
To: linux-tip-commits; +Cc: David Kaplan, Borislav Petkov (AMD), x86, linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: 2c93762ec4b3ab66980ee7aaffde1e050bfbf291
Gitweb: https://git.kernel.org/tip/2c93762ec4b3ab66980ee7aaffde1e050bfbf291
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Wed, 08 Jan 2025 14:24:42 -06:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Fri, 28 Feb 2025 12:39:17 +01:00
x86/bugs: Relocate mds/taa/mmio/rfds defines
Move the mds, taa, mmio, and rfds mitigation enums earlier in the file to
prepare for restructuring of these mitigations as they are all inter-related.
No functional change.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250108202515.385902-3-david.kaplan@amd.com
---
arch/x86/kernel/cpu/bugs.c | 60 +++++++++++++++++++------------------
1 file changed, 31 insertions(+), 29 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5397d0a..4269ed1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -247,6 +247,37 @@ static const char * const mds_strings[] = {
[MDS_MITIGATION_VMWERV] = "Vulnerable: Clear CPU buffers attempted, no microcode",
};
+enum taa_mitigations {
+ TAA_MITIGATION_OFF,
+ TAA_MITIGATION_UCODE_NEEDED,
+ TAA_MITIGATION_VERW,
+ TAA_MITIGATION_TSX_DISABLED,
+};
+
+/* Default mitigation for TAA-affected CPUs */
+static enum taa_mitigations taa_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+
+enum mmio_mitigations {
+ MMIO_MITIGATION_OFF,
+ MMIO_MITIGATION_UCODE_NEEDED,
+ MMIO_MITIGATION_VERW,
+};
+
+/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
+static enum mmio_mitigations mmio_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+
+enum rfds_mitigations {
+ RFDS_MITIGATION_OFF,
+ RFDS_MITIGATION_VERW,
+ RFDS_MITIGATION_UCODE_NEEDED,
+};
+
+/* Default mitigation for Register File Data Sampling */
+static enum rfds_mitigations rfds_mitigation __ro_after_init =
+ IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+
static void __init mds_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
@@ -290,16 +321,6 @@ early_param("mds", mds_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "TAA: " fmt
-enum taa_mitigations {
- TAA_MITIGATION_OFF,
- TAA_MITIGATION_UCODE_NEEDED,
- TAA_MITIGATION_VERW,
- TAA_MITIGATION_TSX_DISABLED,
-};
-
-/* Default mitigation for TAA-affected CPUs */
-static enum taa_mitigations taa_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
static bool taa_nosmt __ro_after_init;
static const char * const taa_strings[] = {
@@ -390,15 +411,6 @@ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "MMIO Stale Data: " fmt
-enum mmio_mitigations {
- MMIO_MITIGATION_OFF,
- MMIO_MITIGATION_UCODE_NEEDED,
- MMIO_MITIGATION_VERW,
-};
-
-/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
-static enum mmio_mitigations mmio_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
static bool mmio_nosmt __ro_after_init = false;
static const char * const mmio_strings[] = {
@@ -487,16 +499,6 @@ early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "Register File Data Sampling: " fmt
-enum rfds_mitigations {
- RFDS_MITIGATION_OFF,
- RFDS_MITIGATION_VERW,
- RFDS_MITIGATION_UCODE_NEEDED,
-};
-
-/* Default mitigation for Register File Data Sampling */
-static enum rfds_mitigations rfds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
-
static const char * const rfds_strings[] = {
[RFDS_MITIGATION_OFF] = "Vulnerable",
[RFDS_MITIGATION_VERW] = "Mitigation: Clear Register File",
* [PATCH v3 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
2025-01-08 20:24 ` [PATCH v3 01/35] x86/bugs: Add X86_BUG_SPECTRE_V2_USER David Kaplan
2025-01-08 20:24 ` [PATCH v3 02/35] x86/bugs: Relocate mds/taa/mmio/rfds defines David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-28 11:53 ` [tip: x86/bugs] " tip-bot2 for David Kaplan
2025-01-08 20:24 ` [PATCH v3 04/35] x86/bugs: Restructure mds mitigation David Kaplan
` (32 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Add AUTO mitigations for mds/taa/mmio/rfds to create consistent
vulnerability handling. These AUTO mitigations will be turned into the
appropriate default mitigations in the <vuln>_select_mitigation()
functions. In a later patch, these will be used with the new attack
vector controls to help select appropriate mitigations.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/include/asm/processor.h | 1 +
arch/x86/kernel/cpu/bugs.c | 20 ++++++++++++++++----
2 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index c0cd10182e90..90278d0c071b 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -757,6 +757,7 @@ extern enum l1tf_mitigations l1tf_mitigation;
enum mds_mitigations {
MDS_MITIGATION_OFF,
+ MDS_MITIGATION_AUTO,
MDS_MITIGATION_FULL,
MDS_MITIGATION_VMWERV,
};
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index bbe4c772e557..592d40551432 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -234,7 +234,7 @@ static void x86_amd_ssb_disable(void)
/* Default mitigation for MDS-affected CPUs */
static enum mds_mitigations mds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_FULL : MDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
static bool mds_nosmt __ro_after_init = false;
static const char * const mds_strings[] = {
@@ -245,6 +245,7 @@ static const char * const mds_strings[] = {
enum taa_mitigations {
TAA_MITIGATION_OFF,
+ TAA_MITIGATION_AUTO,
TAA_MITIGATION_UCODE_NEEDED,
TAA_MITIGATION_VERW,
TAA_MITIGATION_TSX_DISABLED,
@@ -252,27 +253,29 @@ enum taa_mitigations {
/* Default mitigation for TAA-affected CPUs */
static enum taa_mitigations taa_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_AUTO : TAA_MITIGATION_OFF;
enum mmio_mitigations {
MMIO_MITIGATION_OFF,
+ MMIO_MITIGATION_AUTO,
MMIO_MITIGATION_UCODE_NEEDED,
MMIO_MITIGATION_VERW,
};
/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
static enum mmio_mitigations mmio_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_AUTO : MMIO_MITIGATION_OFF;
enum rfds_mitigations {
RFDS_MITIGATION_OFF,
+ RFDS_MITIGATION_AUTO,
RFDS_MITIGATION_VERW,
RFDS_MITIGATION_UCODE_NEEDED,
};
/* Default mitigation for Register File Data Sampling */
static enum rfds_mitigations rfds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
static void __init mds_select_mitigation(void)
{
@@ -281,6 +284,9 @@ static void __init mds_select_mitigation(void)
return;
}
+ if (mds_mitigation == MDS_MITIGATION_AUTO)
+ mds_mitigation = MDS_MITIGATION_FULL;
+
if (mds_mitigation == MDS_MITIGATION_FULL) {
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;
@@ -510,6 +516,9 @@ static void __init rfds_select_mitigation(void)
if (rfds_mitigation == RFDS_MITIGATION_OFF)
return;
+ if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+ rfds_mitigation = RFDS_MITIGATION_VERW;
+
if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
else
@@ -1976,6 +1985,7 @@ void cpu_bugs_smt_update(void)
switch (mds_mitigation) {
case MDS_MITIGATION_FULL:
+ case MDS_MITIGATION_AUTO:
case MDS_MITIGATION_VMWERV:
if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
pr_warn_once(MDS_MSG_SMT);
@@ -1987,6 +1997,7 @@ void cpu_bugs_smt_update(void)
switch (taa_mitigation) {
case TAA_MITIGATION_VERW:
+ case TAA_MITIGATION_AUTO:
case TAA_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(TAA_MSG_SMT);
@@ -1998,6 +2009,7 @@ void cpu_bugs_smt_update(void)
switch (mmio_mitigation) {
case MMIO_MITIGATION_VERW:
+ case MMIO_MITIGATION_AUTO:
case MMIO_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(MMIO_MSG_SMT);
--
2.34.1
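[Editor's illustration, not part of the patch mail: the AUTO-resolution step this patch adds can be sketched as a standalone C model. The enum mirrors the patched mds_mitigations enum, but resolve_mds() is a hypothetical simplification of mds_select_mitigation(): the compile-time default is now AUTO, and selection time turns AUTO into FULL, falling back to VMWERV when the CPU lacks MD_CLEAR microcode support.]

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the mds_mitigations enum after this patch. */
enum mds_mitigations {
	MDS_MITIGATION_OFF,
	MDS_MITIGATION_AUTO,
	MDS_MITIGATION_FULL,
	MDS_MITIGATION_VMWERV,
};

/*
 * Simplified model of the AUTO handling in mds_select_mitigation():
 * AUTO is a placeholder that select time converts to a concrete choice.
 */
static enum mds_mitigations resolve_mds(enum mds_mitigations m,
					bool has_md_clear)
{
	if (m == MDS_MITIGATION_AUTO)
		m = MDS_MITIGATION_FULL;

	if (m == MDS_MITIGATION_FULL && !has_md_clear)
		m = MDS_MITIGATION_VMWERV;

	return m;
}
```

Keeping AUTO as a distinct enumerator (rather than defaulting straight to VERW/FULL) is what later lets the attack vector controls decide whether AUTO should mean "mitigate" or "off" per vulnerability.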
* [tip: x86/bugs] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
2025-01-08 20:24 ` [PATCH v3 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
@ 2025-02-28 11:53 ` tip-bot2 for David Kaplan
0 siblings, 0 replies; 138+ messages in thread
From: tip-bot2 for David Kaplan @ 2025-02-28 11:53 UTC (permalink / raw)
To: linux-tip-commits; +Cc: David Kaplan, Borislav Petkov (AMD), x86, linux-kernel
The following commit has been merged into the x86/bugs branch of tip:
Commit-ID: b8ce25df2999ac6a135ce1bd14b7243030a1338a
Gitweb: https://git.kernel.org/tip/b8ce25df2999ac6a135ce1bd14b7243030a1338a
Author: David Kaplan <david.kaplan@amd.com>
AuthorDate: Wed, 08 Jan 2025 14:24:43 -06:00
Committer: Borislav Petkov (AMD) <bp@alien8.de>
CommitterDate: Fri, 28 Feb 2025 12:40:21 +01:00
x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds
Add AUTO mitigations for mds/taa/mmio/rfds to create consistent vulnerability
handling. These AUTO mitigations will be turned into the appropriate default
mitigations in the <vuln>_select_mitigation() functions. Later, these will be
used with the new attack vector controls to help select appropriate
mitigations.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250108202515.385902-4-david.kaplan@amd.com
---
arch/x86/include/asm/processor.h | 1 +
arch/x86/kernel/cpu/bugs.c | 20 ++++++++++++++++----
2 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index c0cd101..90278d0 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -757,6 +757,7 @@ extern enum l1tf_mitigations l1tf_mitigation;
enum mds_mitigations {
MDS_MITIGATION_OFF,
+ MDS_MITIGATION_AUTO,
MDS_MITIGATION_FULL,
MDS_MITIGATION_VMWERV,
};
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4269ed1..93c437f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -238,7 +238,7 @@ static void x86_amd_ssb_disable(void)
/* Default mitigation for MDS-affected CPUs */
static enum mds_mitigations mds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_FULL : MDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
static bool mds_nosmt __ro_after_init = false;
static const char * const mds_strings[] = {
@@ -249,6 +249,7 @@ static const char * const mds_strings[] = {
enum taa_mitigations {
TAA_MITIGATION_OFF,
+ TAA_MITIGATION_AUTO,
TAA_MITIGATION_UCODE_NEEDED,
TAA_MITIGATION_VERW,
TAA_MITIGATION_TSX_DISABLED,
@@ -256,27 +257,29 @@ enum taa_mitigations {
/* Default mitigation for TAA-affected CPUs */
static enum taa_mitigations taa_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_VERW : TAA_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_TAA) ? TAA_MITIGATION_AUTO : TAA_MITIGATION_OFF;
enum mmio_mitigations {
MMIO_MITIGATION_OFF,
+ MMIO_MITIGATION_AUTO,
MMIO_MITIGATION_UCODE_NEEDED,
MMIO_MITIGATION_VERW,
};
/* Default mitigation for Processor MMIO Stale Data vulnerabilities */
static enum mmio_mitigations mmio_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_VERW : MMIO_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_MMIO_STALE_DATA) ? MMIO_MITIGATION_AUTO : MMIO_MITIGATION_OFF;
enum rfds_mitigations {
RFDS_MITIGATION_OFF,
+ RFDS_MITIGATION_AUTO,
RFDS_MITIGATION_VERW,
RFDS_MITIGATION_UCODE_NEEDED,
};
/* Default mitigation for Register File Data Sampling */
static enum rfds_mitigations rfds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_VERW : RFDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
static void __init mds_select_mitigation(void)
{
@@ -285,6 +288,9 @@ static void __init mds_select_mitigation(void)
return;
}
+ if (mds_mitigation == MDS_MITIGATION_AUTO)
+ mds_mitigation = MDS_MITIGATION_FULL;
+
if (mds_mitigation == MDS_MITIGATION_FULL) {
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;
@@ -514,6 +520,9 @@ static void __init rfds_select_mitigation(void)
if (rfds_mitigation == RFDS_MITIGATION_OFF)
return;
+ if (rfds_mitigation == RFDS_MITIGATION_AUTO)
+ rfds_mitigation = RFDS_MITIGATION_VERW;
+
if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
else
@@ -1979,6 +1988,7 @@ void cpu_bugs_smt_update(void)
switch (mds_mitigation) {
case MDS_MITIGATION_FULL:
+ case MDS_MITIGATION_AUTO:
case MDS_MITIGATION_VMWERV:
if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
pr_warn_once(MDS_MSG_SMT);
@@ -1990,6 +2000,7 @@ void cpu_bugs_smt_update(void)
switch (taa_mitigation) {
case TAA_MITIGATION_VERW:
+ case TAA_MITIGATION_AUTO:
case TAA_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(TAA_MSG_SMT);
@@ -2001,6 +2012,7 @@ void cpu_bugs_smt_update(void)
switch (mmio_mitigation) {
case MMIO_MITIGATION_VERW:
+ case MMIO_MITIGATION_AUTO:
case MMIO_MITIGATION_UCODE_NEEDED:
if (sched_smt_active())
pr_warn_once(MMIO_MSG_SMT);
* [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (2 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 03/35] x86/bugs: Add AUTO mitigations for mds/taa/mmio/rfds David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-10 16:13 ` Brendan Jackman
2025-02-10 22:25 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 05/35] x86/bugs: Restructure taa mitigation David Kaplan
` (31 subsequent siblings)
35 siblings, 2 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure mds mitigation selection to use select/update/apply
functions to create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 70 +++++++++++++++++++++++++++++++++-----
1 file changed, 62 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 592d40551432..ff2d6f2e01f4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -34,6 +34,25 @@
#include "cpu.h"
+/*
+ * Speculation Vulnerability Handling
+ *
+ * Each vulnerability is handled with the following functions:
+ * <vuln>_select_mitigation() -- Selects a mitigation to use. This should
+ * take into account all relevant command line
+ * options.
+ * <vuln>_update_mitigation() -- This is called after all vulnerabilities have
+ * selected a mitigation, in case the selection
+ * may want to change based on other choices
+ * made. This function is optional.
+ * <vuln>_apply_mitigation() -- Enable the selected mitigation.
+ *
+ * The compile-time mitigation in all cases should be AUTO. An explicit
+ * command-line option can override AUTO. If no such option is
+ * provided, <vuln>_select_mitigation() will override AUTO to the best
+ * mitigation option.
+ */
+
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
@@ -41,6 +60,8 @@ static void __init spectre_v2_user_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
+static void __init mds_update_mitigation(void);
+static void __init mds_apply_mitigation(void);
static void __init md_clear_update_mitigation(void);
static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
@@ -165,6 +186,7 @@ void __init cpu_select_mitigations(void)
spectre_v2_user_select_mitigation();
ssb_select_mitigation();
l1tf_select_mitigation();
+ mds_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -175,6 +197,14 @@ void __init cpu_select_mitigations(void)
*/
srso_select_mitigation();
gds_select_mitigation();
+
+ /*
+ * After mitigations are selected, some may need to update their
+ * choices.
+ */
+ mds_update_mitigation();
+
+ mds_apply_mitigation();
}
/*
@@ -229,9 +259,6 @@ static void x86_amd_ssb_disable(void)
wrmsrl(MSR_AMD64_LS_CFG, msrval);
}
-#undef pr_fmt
-#define pr_fmt(fmt) "MDS: " fmt
-
/* Default mitigation for MDS-affected CPUs */
static enum mds_mitigations mds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
@@ -277,12 +304,20 @@ enum rfds_mitigations {
static enum rfds_mitigations rfds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_RFDS) ? RFDS_MITIGATION_AUTO : RFDS_MITIGATION_OFF;
+/* Return TRUE if any VERW-based mitigation is enabled. */
+static bool __init verw_mitigation_enabled(void)
+{
+ return (mds_mitigation != MDS_MITIGATION_OFF ||
+ (taa_mitigation != TAA_MITIGATION_OFF &&
+ taa_mitigation != TAA_MITIGATION_TSX_DISABLED) ||
+ mmio_mitigation != MMIO_MITIGATION_OFF ||
+ rfds_mitigation != RFDS_MITIGATION_OFF);
+}
+
static void __init mds_select_mitigation(void)
{
- if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
+ if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
mds_mitigation = MDS_MITIGATION_OFF;
- return;
- }
if (mds_mitigation == MDS_MITIGATION_AUTO)
mds_mitigation = MDS_MITIGATION_FULL;
@@ -290,9 +325,29 @@ static void __init mds_select_mitigation(void)
if (mds_mitigation == MDS_MITIGATION_FULL) {
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;
+ }
+}
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+static void __init mds_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
+ return;
+
+ /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
+ if (verw_mitigation_enabled()) {
+ if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ mds_mitigation = MDS_MITIGATION_FULL;
+ else
+ mds_mitigation = MDS_MITIGATION_VMWERV;
+ }
+
+ pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
+}
+static void __init mds_apply_mitigation(void)
+{
+ if (mds_mitigation == MDS_MITIGATION_FULL) {
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
(mds_nosmt || cpu_mitigations_auto_nosmt()))
cpu_smt_disable(false);
@@ -595,7 +650,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- mds_select_mitigation();
taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
--
2.34.1
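[Editor's illustration, not part of the patch mail: the select/update/apply phasing documented in the comment block above can be modeled in a short userspace C sketch. The function and variable names below are hypothetical simplifications of bugs.c, kept only to show the two-phase interaction: each bug first selects its own mitigation, then mds_update() lets MDS inherit a VERW mitigation chosen by TAA, MMIO, or RFDS.]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified shared mitigation states (hypothetical, not the kernel enums). */
enum mit { OFF, AUTO, FULL, VMWERV };

static enum mit mds, taa, mmio, rfds;
static bool has_md_clear;

/* Phase 1: per-bug selection, resolving the compile-time AUTO default. */
static void mds_select(bool bug_mds)
{
	if (!bug_mds)
		mds = OFF;

	if (mds == AUTO)
		mds = FULL;
}

/* Cross-bug helper, modeling verw_mitigation_enabled() from the patch. */
static bool verw_mitigation_enabled(void)
{
	return mds != OFF || taa != OFF || mmio != OFF || rfds != OFF;
}

/*
 * Phase 2: runs after every bug has selected. Because MDS, TAA, MMIO and
 * RFDS share the VERW-based mitigation, mitigating any of them effectively
 * mitigates MDS too, so the MDS state is updated to reflect that.
 */
static void mds_update(bool bug_mds)
{
	if (!bug_mds)
		return;

	if (verw_mitigation_enabled())
		mds = has_md_clear ? FULL : VMWERV;
}
```

Phase 3 (apply) would then act only on the final state, e.g. forcing X86_FEATURE_CLEAR_CPU_BUF, which is what lets selection and enforcement be cleanly separated.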
* Re: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
2025-01-08 20:24 ` [PATCH v3 04/35] x86/bugs: Restructure mds mitigation David Kaplan
@ 2025-02-10 16:13 ` Brendan Jackman
2025-02-10 17:17 ` Kaplan, David
2025-02-10 22:25 ` Josh Poimboeuf
1 sibling, 1 reply; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 16:13 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Wed, 8 Jan 2025 at 21:27, David Kaplan <david.kaplan@amd.com> wrote:
> +/*
> + * Speculation Vulnerability Handling
> + *
> + * Each vulnerability is handled with the following functions:
> + * <vuln>_select_mitigation() -- Selects a mitigation to use. This should
> + * take into account all relevant command line
> + * options.
> + * <vuln>_update_mitigation() -- This is called after all vulnerabilities have
> + * selected a mitigation, in case the selection
> + * may want to change based on other choices
> + * made. This function is optional.
> + * <vuln>_apply_mitigation() -- Enable the selected mitigation.
Maybe also worth calling out cpu_bugs_smt_update() here?
> +/* Return TRUE if any VERW-based mitigation is enabled. */
> +static bool __init verw_mitigation_enabled(void)
> +{
> + return (mds_mitigation != MDS_MITIGATION_OFF ||
> + (taa_mitigation != TAA_MITIGATION_OFF &&
> + taa_mitigation != TAA_MITIGATION_TSX_DISABLED) ||
> + mmio_mitigation != MMIO_MITIGATION_OFF ||
> + rfds_mitigation != RFDS_MITIGATION_OFF);
> +}
Since you defined such nice terminology above, why not use it here and
say verw_mitigation_selected()?
(Obviously if the alternative was a respin for this trivial issue
alone I would prefer to merge with the current name!)
> +static void __init mds_update_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> + return;
> +
> + /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
> + if (verw_mitigation_enabled()) {
> + if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
> + mds_mitigation = MDS_MITIGATION_FULL;
> + else
> + mds_mitigation = MDS_MITIGATION_VMWERV;
> + }
This is changing what the user will see in sysfs. This seems good to
me, but it would be worth calling it out in the commit log I think.
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
2025-02-10 16:13 ` Brendan Jackman
@ 2025-02-10 17:17 ` Kaplan, David
2025-02-10 17:28 ` Brendan Jackman
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-10 17:17 UTC (permalink / raw)
To: Brendan Jackman
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Monday, February 10, 2025 10:14 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>;
> Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, 8 Jan 2025 at 21:27, David Kaplan <david.kaplan@amd.com> wrote:
> > +/*
> > + * Speculation Vulnerability Handling
> > + *
> > + * Each vulnerability is handled with the following functions:
> > + * <vuln>_select_mitigation() -- Selects a mitigation to use. This should
> > + * take into account all relevant command line
> > + * options.
> > + * <vuln>_update_mitigation() -- This is called after all vulnerabilities have
> > + * selected a mitigation, in case the selection
> > + * may want to change based on other choices
> > + * made. This function is optional.
> > + * <vuln>_apply_mitigation() -- Enable the selected mitigation.
>
> Maybe also worth calling out cpu_bugs_smt_update() here?
Hmm, how were you thinking? The 3 functions above are defined for each vulnerability. So this is more intended as a guide where if adding a new vulnerability, you should define the functions above as needed.
>
> > +/* Return TRUE if any VERW-based mitigation is enabled. */
> > +static bool __init verw_mitigation_enabled(void)
> > +{
> > + return (mds_mitigation != MDS_MITIGATION_OFF ||
> > + (taa_mitigation != TAA_MITIGATION_OFF &&
> > + taa_mitigation != TAA_MITIGATION_TSX_DISABLED) ||
> > + mmio_mitigation != MMIO_MITIGATION_OFF ||
> > + rfds_mitigation != RFDS_MITIGATION_OFF);
> > +}
>
> Since you defined such nice terminology above, why not use it here and say
> verw_mitigation_selected()?
>
> (Obviously if the alternative was a respin for this trivial issue alone I would prefer to
> merge with the current name!)
I do like that name better, I'll use that unless anyone else objects.
>
> > +static void __init mds_update_mitigation(void)
> > +{
> > + if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> > + return;
> > +
> > + /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
> > + if (verw_mitigation_enabled()) {
> > + if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
> > + mds_mitigation = MDS_MITIGATION_FULL;
> > + else
> > + mds_mitigation = MDS_MITIGATION_VMWERV;
> > + }
>
> This is changing what the user will see in sysfs. This seems good to me, but it
> would be worth calling it out in the commit log I think.
Does it? What is the case you're thinking of where it is different vs tip?
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
2025-02-10 17:17 ` Kaplan, David
@ 2025-02-10 17:28 ` Brendan Jackman
0 siblings, 0 replies; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 17:28 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
On Mon, 10 Feb 2025 at 18:17, Kaplan, David <David.Kaplan@amd.com> wrote:
>
>
> > -----Original Message-----
> > From: Brendan Jackman <jackmanb@google.com>
> > Sent: Monday, February 10, 2025 10:14 AM
> > To: Kaplan, David <David.Kaplan@amd.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> > Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> > Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>;
> > Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> > <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
> >
> >
> >
> > On Wed, 8 Jan 2025 at 21:27, David Kaplan <david.kaplan@amd.com> wrote:
> > > +/*
> > > + * Speculation Vulnerability Handling
> > > + *
> > > + * Each vulnerability is handled with the following functions:
> > > + * <vuln>_select_mitigation() -- Selects a mitigation to use. This should
> > > + * take into account all relevant command line
> > > + * options.
> > > + * <vuln>_update_mitigation() -- This is called after all vulnerabilities have
> > > + * selected a mitigation, in case the selection
> > > + * may want to change based on other choices
> > > + * made. This function is optional.
> > > + * <vuln>_apply_mitigation() -- Enable the selected mitigation.
> >
> > Maybe also worth calling out cpu_bugs_smt_update() here?
>
> Hmm, how were you thinking? The 3 functions above are defined for each vulnerability. So this is more intended as a guide where if adding a new vulnerability, you should define the functions above as needed.
Yeah it's not really needed for people adding new mitigations but it's
still helpful for readers IMO.
Just something like "see also cpu_bugs_smt_update()" to highlight
that just coz all the *_apply_mitigation()s are done, it doesn't mean
we've finished setting up the mitigations yet.
(Like with the naming bikeshed though, this is very much a nonblocking
suggestion!)
> > > +/* Return TRUE if any VERW-based mitigation is enabled. */
> > > +static bool __init verw_mitigation_enabled(void)
> > > +{
> > > + return (mds_mitigation != MDS_MITIGATION_OFF ||
> > > + (taa_mitigation != TAA_MITIGATION_OFF &&
> > > + taa_mitigation != TAA_MITIGATION_TSX_DISABLED) ||
> > > + mmio_mitigation != MMIO_MITIGATION_OFF ||
> > > + rfds_mitigation != RFDS_MITIGATION_OFF);
> > > +}
> >
> > Since you defined such nice terminology above, why not use it here and say
> > verw_mitigation_selected()?
> >
> > (Obviously if the alternative was a respin for this trivial issue alone I would prefer to
> > merge with the current name!)
>
> I do like that name better, I'll use that unless anyone else objects.
>
> >
> > > +static void __init mds_update_mitigation(void)
> > > +{
> > > + if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> > > + return;
> > > +
> > > + /* If TAA, MMIO, or RFDS are being mitigated, MDS gets mitigated too. */
> > > + if (verw_mitigation_enabled()) {
> > > + if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
> > > + mds_mitigation = MDS_MITIGATION_FULL;
> > > + else
> > > + mds_mitigation = MDS_MITIGATION_VMWERV;
> > > + }
> >
> > This is changing what the user will see in sysfs. This seems good to me, but it
> > would be worth calling it out in the commit log I think.
>
> Does it? What is the case you're thinking of where it is different vs tip?
Oh, no it doesn't - I forgot about md_clear_update_mitigation(). This
error is a good justification for this refactoring :)
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
2025-01-08 20:24 ` [PATCH v3 04/35] x86/bugs: Restructure mds mitigation David Kaplan
2025-02-10 16:13 ` Brendan Jackman
@ 2025-02-10 22:25 ` Josh Poimboeuf
2025-02-10 22:33 ` Kaplan, David
1 sibling, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-10 22:25 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:44PM -0600, David Kaplan wrote:
> @@ -229,9 +259,6 @@ static void x86_amd_ssb_disable(void)
> wrmsrl(MSR_AMD64_LS_CFG, msrval);
> }
>
> -#undef pr_fmt
> -#define pr_fmt(fmt) "MDS: " fmt
> -
Why? For consistency with the rest of the file it's best to leave the
correct pr_fmt() in place for mds_*(), taa_*(), rfds_*(), etc.
> static void __init mds_select_mitigation(void)
> {
> - if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
> + if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> mds_mitigation = MDS_MITIGATION_OFF;
> - return;
> - }
For clarity it should still return here, that makes it obvious none of
the subsequent conditions apply.
> +static void __init mds_apply_mitigation(void)
> +{
> + if (mds_mitigation == MDS_MITIGATION_FULL) {
> + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
The mitigation still needs to be attempted for the MDS_MITIGATION_VMWERV
case.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
2025-02-10 22:25 ` Josh Poimboeuf
@ 2025-02-10 22:33 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-10 22:33 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 4:25 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 04/35] x86/bugs: Restructure mds mitigation
>
>
>
> On Wed, Jan 08, 2025 at 02:24:44PM -0600, David Kaplan wrote:
> > @@ -229,9 +259,6 @@ static void x86_amd_ssb_disable(void)
> > wrmsrl(MSR_AMD64_LS_CFG, msrval);
> > }
> >
> > -#undef pr_fmt
> > -#define pr_fmt(fmt) "MDS: " fmt
> > -
>
> Why? For consistency with the rest of the file it's best to leave the correct pr_fmt()
> in place for mds_*(), taa_*(), rfds_*(), etc.
I had removed it because it wasn't used, but it actually can be used for the pr_info in mds_update_mitigation so I'll put it back.
>
> > static void __init mds_select_mitigation(void)
> > {
> > - if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
> > + if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
> > mds_mitigation = MDS_MITIGATION_OFF;
> > - return;
> > - }
>
> For clarity it should still return here, that makes it obvious none of the subsequent
> conditions apply.
Ok
>
> > +static void __init mds_apply_mitigation(void)
> > +{
> > + if (mds_mitigation == MDS_MITIGATION_FULL) {
> > + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
>
> The mitigation still needs to be attempted for the MDS_MITIGATION_VMWERV
> case.
>
Good catch, will fix.
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (3 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 04/35] x86/bugs: Restructure mds mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-10 16:24 ` Brendan Jackman
2025-02-10 22:50 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
` (30 subsequent siblings)
35 siblings, 2 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure taa mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 92 ++++++++++++++++++++++++--------------
1 file changed, 58 insertions(+), 34 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ff2d6f2e01f4..7beb2d6c43bb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,8 @@ static void __init mds_apply_mitigation(void);
static void __init md_clear_update_mitigation(void);
static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
+static void __init taa_update_mitigation(void);
+static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
@@ -187,6 +189,7 @@ void __init cpu_select_mitigations(void)
ssb_select_mitigation();
l1tf_select_mitigation();
mds_select_mitigation();
+ taa_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -203,8 +206,10 @@ void __init cpu_select_mitigations(void)
* choices.
*/
mds_update_mitigation();
+ taa_update_mitigation();
mds_apply_mitigation();
+ taa_apply_mitigation();
}
/*
@@ -375,9 +380,6 @@ static int __init mds_cmdline(char *str)
}
early_param("mds", mds_cmdline);
-#undef pr_fmt
-#define pr_fmt(fmt) "TAA: " fmt
-
static bool taa_nosmt __ro_after_init;
static const char * const taa_strings[] = {
@@ -400,48 +402,71 @@ static void __init taa_select_mitigation(void)
return;
}
- if (cpu_mitigations_off()) {
+ if (cpu_mitigations_off())
taa_mitigation = TAA_MITIGATION_OFF;
- return;
- }
/*
* TAA mitigation via VERW is turned off if both
* tsx_async_abort=off and mds=off are specified.
+ *
+ * MDS mitigation will be checked in taa_update_mitigation().
*/
- if (taa_mitigation == TAA_MITIGATION_OFF &&
- mds_mitigation == MDS_MITIGATION_OFF)
+ if (taa_mitigation == TAA_MITIGATION_OFF)
return;
- if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ /* Microcode will be checked in taa_update_mitigation(). */
+ if (taa_mitigation == TAA_MITIGATION_AUTO)
taa_mitigation = TAA_MITIGATION_VERW;
- else
- taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
- /*
- * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
- * A microcode update fixes this behavior to clear CPU buffers. It also
- * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
- * ARCH_CAP_TSX_CTRL_MSR bit.
- *
- * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
- * update is required.
- */
- if ( (x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
- !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
- taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+}
- /*
- * TSX is enabled, select alternate mitigation for TAA which is
- * the same as MDS. Enable MDS static branch to clear CPU buffers.
- *
- * For guests that can't determine whether the correct microcode is
- * present on host, enable the mitigation for UCODE_NEEDED as well.
- */
- setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+static void __init taa_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off())
+ return;
+
+ if (verw_mitigation_enabled())
+ taa_mitigation = TAA_MITIGATION_VERW;
+
+ if (taa_mitigation == TAA_MITIGATION_VERW) {
+ /* Check if the requisite ucode is available. */
+ if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+
+ /*
+ * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
+ * A microcode update fixes this behavior to clear CPU buffers. It also
+ * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
+ * ARCH_CAP_TSX_CTRL_MSR bit.
+ *
+ * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
+ * update is required.
+ */
+ if ((x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
+ !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
+ taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
+ }
+
+ pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
+}
+
+static void __init taa_apply_mitigation(void)
+{
+ if (taa_mitigation == TAA_MITIGATION_VERW ||
+ taa_mitigation == TAA_MITIGATION_UCODE_NEEDED) {
+ /*
+ * TSX is enabled, select alternate mitigation for TAA which is
+ * the same as MDS. Enable MDS static branch to clear CPU buffers.
+ *
+ * For guests that can't determine whether the correct microcode is
+ * present on host, enable the mitigation for UCODE_NEEDED as well.
+ */
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+
+ if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ cpu_smt_disable(false);
+ }
- if (taa_nosmt || cpu_mitigations_auto_nosmt())
- cpu_smt_disable(false);
}
static int __init tsx_async_abort_parse_cmdline(char *str)
@@ -650,7 +675,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
2025-01-08 20:24 ` [PATCH v3 05/35] x86/bugs: Restructure taa mitigation David Kaplan
@ 2025-02-10 16:24 ` Brendan Jackman
2025-02-10 17:19 ` Kaplan, David
2025-02-10 22:50 ` Josh Poimboeuf
1 sibling, 1 reply; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 16:24 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Wed, 8 Jan 2025 at 21:27, David Kaplan <david.kaplan@amd.com> wrote:
> @@ -400,48 +402,71 @@ static void __init taa_select_mitigation(void)
> return;
> }
>
> - if (cpu_mitigations_off()) {
> + if (cpu_mitigations_off())
> taa_mitigation = TAA_MITIGATION_OFF;
> - return;
> - }
>
> /*
> * TAA mitigation via VERW is turned off if both
> * tsx_async_abort=off and mds=off are specified.
> + *
> + * MDS mitigation will be checked in taa_update_mitigation().
What we are actually talking about here is the new
verw_mitigation_enabled(), right? I don't think this block/commentary
adds any clarity any more. Maybe just delete it?
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
2025-02-10 16:24 ` Brendan Jackman
@ 2025-02-10 17:19 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-10 17:19 UTC (permalink / raw)
To: Brendan Jackman
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Monday, February 10, 2025 10:25 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>;
> Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
>
>
>
> On Wed, 8 Jan 2025 at 21:27, David Kaplan <david.kaplan@amd.com> wrote:
> > @@ -400,48 +402,71 @@ static void __init taa_select_mitigation(void)
> > return;
> > }
> >
> > - if (cpu_mitigations_off()) {
> > + if (cpu_mitigations_off())
> > taa_mitigation = TAA_MITIGATION_OFF;
> > - return;
> > - }
> >
> > /*
> > * TAA mitigation via VERW is turned off if both
> > * tsx_async_abort=off and mds=off are specified.
> > + *
> > + * MDS mitigation will be checked in taa_update_mitigation().
>
> What we are actually talking about here is the new verw_mitigation_enabled(), right?
> I don't think this block/commentary adds any clarity any more. Maybe just delete it?
Yeah, I think that's fair.
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
2025-01-08 20:24 ` [PATCH v3 05/35] x86/bugs: Restructure taa mitigation David Kaplan
2025-02-10 16:24 ` Brendan Jackman
@ 2025-02-10 22:50 ` Josh Poimboeuf
2025-02-11 17:17 ` Kaplan, David
1 sibling, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-10 22:50 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:45PM -0600, David Kaplan wrote:
> @@ -400,48 +402,71 @@ static void __init taa_select_mitigation(void)
> return;
> }
>
> - if (cpu_mitigations_off()) {
> + if (cpu_mitigations_off())
> taa_mitigation = TAA_MITIGATION_OFF;
> - return;
> - }
>
> /*
> * TAA mitigation via VERW is turned off if both
> * tsx_async_abort=off and mds=off are specified.
> + *
> + * MDS mitigation will be checked in taa_update_mitigation().
> */
> - if (taa_mitigation == TAA_MITIGATION_OFF &&
> - mds_mitigation == MDS_MITIGATION_OFF)
> + if (taa_mitigation == TAA_MITIGATION_OFF)
> return;
This check seems rather pointless, the only thing after this is the
TAA_MITIGATION_AUTO check.
>
> - if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
> + /* Microcode will be checked in taa_update_mitigation(). */
> + if (taa_mitigation == TAA_MITIGATION_AUTO)
> taa_mitigation = TAA_MITIGATION_VERW;
> - else
> - taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
In the previous patch, MDS checks for ucode in both select and update,
which is overkill. That should probably be done only in
mds_update_mitigation() to be consistent with how TAA does it here?
>
> - /*
> - * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
> - * A microcode update fixes this behavior to clear CPU buffers. It also
> - * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
> - * ARCH_CAP_TSX_CTRL_MSR bit.
> - *
> - * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
> - * update is required.
> - */
> - if ( (x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
> - !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
> - taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
> +}
Extra whitespace here at the end of the function.
> +static void __init taa_update_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off())
> + return;
> +
> + if (verw_mitigation_enabled())
> + taa_mitigation = TAA_MITIGATION_VERW;
This overwrites TAA_MITIGATION_TSX_DISABLED?
I think reporting TSX disabled here is more accurate than reporting
VERW, since the VERW is only done to mitigate the other vulns.
> +static void __init taa_apply_mitigation(void)
> +{
> + if (taa_mitigation == TAA_MITIGATION_VERW ||
> + taa_mitigation == TAA_MITIGATION_UCODE_NEEDED) {
> + /*
> + * TSX is enabled, select alternate mitigation for TAA which is
> + * the same as MDS. Enable MDS static branch to clear CPU buffers.
> + *
> + * For guests that can't determine whether the correct microcode is
> + * present on host, enable the mitigation for UCODE_NEEDED as well.
> + */
> + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> +
> + if (taa_nosmt || cpu_mitigations_auto_nosmt())
> + cpu_smt_disable(false);
> + }
>
> - if (taa_nosmt || cpu_mitigations_auto_nosmt())
> - cpu_smt_disable(false);
> }
Another extra whitespace here at the end of the function.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
2025-02-10 22:50 ` Josh Poimboeuf
@ 2025-02-11 17:17 ` Kaplan, David
2025-02-11 19:17 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 17:17 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 4:50 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
>
>
>
> On Wed, Jan 08, 2025 at 02:24:45PM -0600, David Kaplan wrote:
> > @@ -400,48 +402,71 @@ static void __init taa_select_mitigation(void)
> > return;
> > }
> >
> > - if (cpu_mitigations_off()) {
> > + if (cpu_mitigations_off())
> > taa_mitigation = TAA_MITIGATION_OFF;
> > - return;
> > - }
> >
> > /*
> > * TAA mitigation via VERW is turned off if both
> > * tsx_async_abort=off and mds=off are specified.
> > + *
> > + * MDS mitigation will be checked in taa_update_mitigation().
> > */
> > - if (taa_mitigation == TAA_MITIGATION_OFF &&
> > - mds_mitigation == MDS_MITIGATION_OFF)
> > + if (taa_mitigation == TAA_MITIGATION_OFF)
> > return;
>
> This check seems rather pointless, the only thing after this is the
> TAA_MITIGATION_AUTO check.
True, and it can be removed. But in patch 4 in the mds logic you did suggest having an explicit return to make it clear that none of the later conditions applied. I'm not sure I feel strongly either way, but I'd like to be consistent.
>
> >
> > - if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
> > + /* Microcode will be checked in taa_update_mitigation(). */
> > + if (taa_mitigation == TAA_MITIGATION_AUTO)
> > taa_mitigation = TAA_MITIGATION_VERW;
> > - else
> > - taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
>
> In the previous patch, MDS checks for ucode in both select and update, which is
> overkill. That should probably be done only in
> mds_update_mitigation() to be consistent with how TAA does it here?
I assume you're referring to the X86_FEATURE_MD_CLEAR check. I'll fix this in the mds patch to be similar, good idea.
>
> >
> > - /*
> > - * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
> > - * A microcode update fixes this behavior to clear CPU buffers. It also
> > - * adds support for MSR_IA32_TSX_CTRL which is enumerated by the
> > - * ARCH_CAP_TSX_CTRL_MSR bit.
> > - *
> > - * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
> > - * update is required.
> > - */
> > - if ( (x86_arch_cap_msr & ARCH_CAP_MDS_NO) &&
> > - !(x86_arch_cap_msr & ARCH_CAP_TSX_CTRL_MSR))
> > - taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
> > +}
>
> Extra whitespace here at the end of the function.
Ack
>
> > +static void __init taa_update_mitigation(void)
> > +{
> > + if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off())
> > + return;
> > +
> > + if (verw_mitigation_enabled())
> > + taa_mitigation = TAA_MITIGATION_VERW;
>
> This overwrites TAA_MITIGATION_TSX_DISABLED?
>
> I think reporting TSX disabled here is more accurate than reporting VERW, since
> the VERW is only done to mitigate the other vulns.
Agreed, will fix.
>
> > +static void __init taa_apply_mitigation(void)
> > +{
> > + if (taa_mitigation == TAA_MITIGATION_VERW ||
> > + taa_mitigation == TAA_MITIGATION_UCODE_NEEDED) {
> > + /*
> > + * TSX is enabled, select alternate mitigation for TAA which is
> > + * the same as MDS. Enable MDS static branch to clear CPU buffers.
> > + *
> > + * For guests that can't determine whether the correct microcode is
> > + * present on host, enable the mitigation for UCODE_NEEDED as well.
> > + */
> > + setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> > +
> > + if (taa_nosmt || cpu_mitigations_auto_nosmt())
> > + cpu_smt_disable(false);
> > + }
> >
> > - if (taa_nosmt || cpu_mitigations_auto_nosmt())
> > - cpu_smt_disable(false);
> > }
>
> Another extra whitespace here at the end of the function.
Ack
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 05/35] x86/bugs: Restructure taa mitigation
2025-02-11 17:17 ` Kaplan, David
@ 2025-02-11 19:17 ` Josh Poimboeuf
0 siblings, 0 replies; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 19:17 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Feb 11, 2025 at 05:17:15PM +0000, Kaplan, David wrote:
> > On Wed, Jan 08, 2025 at 02:24:45PM -0600, David Kaplan wrote:
> > > @@ -400,48 +402,71 @@ static void __init taa_select_mitigation(void)
> > > return;
> > > }
> > >
> > > - if (cpu_mitigations_off()) {
> > > + if (cpu_mitigations_off())
> > > taa_mitigation = TAA_MITIGATION_OFF;
> > > - return;
> > > - }
> > >
> > > /*
> > > * TAA mitigation via VERW is turned off if both
> > > * tsx_async_abort=off and mds=off are specified.
> > > + *
> > > + * MDS mitigation will be checked in taa_update_mitigation().
> > > */
> > > - if (taa_mitigation == TAA_MITIGATION_OFF &&
> > > - mds_mitigation == MDS_MITIGATION_OFF)
> > > + if (taa_mitigation == TAA_MITIGATION_OFF)
> > > return;
> >
> > This check seems rather pointless, the only thing after this is the
> > TAA_MITIGATION_AUTO check.
>
> True, and it can be removed. But in patch 4 in the mds logic you did suggest having an explicit return to make it clear that none of the later conditions applied. I'm not sure I feel strongly either way, but I'd like to be consistent.
Let me try to clarify:
- If it's already doing the conditional for another reason, then
adding in the return makes sense:
if (condition) {
do_something;
/* all actions related to condition are done */
return;
}
- Or, if it's adding a condition+return to avoid having to explicitly
check for !condition later, that also makes sense.
if (condition)
return;
/* assume !condition */
...
- But adding condition+return, when there's only one condition
remaining in the function, which already implicitly excludes the
original condition, that just adds code for no reason.
/* this has no purpose */
if (condition)
return;
if (!condition && condition2)
do_something;
return;
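(For illustration only: the third case can be made concrete. The `resolve_*` helpers and states below are hypothetical stand-ins; the guarded and unguarded versions behave identically for every input, showing the early return adds code for no reason.)

```c
#include <assert.h>

/* Hypothetical states standing in for a mitigation variable. */
enum state { STATE_OFF, STATE_AUTO, STATE_ON };

/* Version with the pointless guard: the early return changes nothing,
 * because the only remaining check already excludes STATE_OFF. */
static enum state resolve_with_guard(enum state s)
{
	if (s == STATE_OFF)
		return s;
	if (s == STATE_AUTO)
		s = STATE_ON;
	return s;
}

/* Equivalent version without the guard. */
static enum state resolve_without_guard(enum state s)
{
	if (s == STATE_AUTO)
		s = STATE_ON;
	return s;
}
```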
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (4 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 05/35] x86/bugs: Restructure taa mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-10 16:42 ` Brendan Jackman
2025-02-10 23:29 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 07/35] x86/bugs: Restructure rfds mitigation David Kaplan
` (29 subsequent siblings)
35 siblings, 2 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure mmio mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 60 ++++++++++++++++++++++++++++----------
1 file changed, 44 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 7beb2d6c43bb..a8da097ab2d5 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -68,6 +68,8 @@ static void __init taa_select_mitigation(void);
static void __init taa_update_mitigation(void);
static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
+static void __init mmio_update_mitigation(void);
+static void __init mmio_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
@@ -190,6 +192,7 @@ void __init cpu_select_mitigations(void)
l1tf_select_mitigation();
mds_select_mitigation();
taa_select_mitigation();
+ mmio_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -207,9 +210,11 @@ void __init cpu_select_mitigations(void)
*/
mds_update_mitigation();
taa_update_mitigation();
+ mmio_update_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
+ mmio_apply_mitigation();
}
/*
@@ -510,6 +515,45 @@ static void __init mmio_select_mitigation(void)
return;
}
+ if (mmio_mitigation == MMIO_MITIGATION_OFF)
+ return;
+
+ /* Microcode will be checked in mmio_update_mitigation(). */
+ if (mmio_mitigation == MMIO_MITIGATION_AUTO)
+ mmio_mitigation = MMIO_MITIGATION_VERW;
+
+}
+
+static void __init mmio_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) || cpu_mitigations_off())
+ return;
+
+ if (verw_mitigation_enabled())
+ mmio_mitigation = MMIO_MITIGATION_VERW;
+
+ if (mmio_mitigation == MMIO_MITIGATION_VERW) {
+ /*
+ * Check if the system has the right microcode.
+ *
+ * CPU Fill buffer clear mitigation is enumerated by either an explicit
+ * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
+ * affected systems.
+ */
+ if (!((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
+ (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
+ boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
+ !(x86_arch_cap_msr & ARCH_CAP_MDS_NO))))
+ mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
+ }
+
+ pr_info("%s\n", mmio_strings[mmio_mitigation]);
+ if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
+ pr_info("Unknown: No mitigations\n");
+}
+
+static void __init mmio_apply_mitigation(void)
+{
if (mmio_mitigation == MMIO_MITIGATION_OFF)
return;
@@ -538,21 +582,6 @@ static void __init mmio_select_mitigation(void)
if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
static_branch_enable(&mds_idle_clear);
- /*
- * Check if the system has the right microcode.
- *
- * CPU Fill buffer clear mitigation is enumerated by either an explicit
- * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
- * affected systems.
- */
- if ((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
- (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
- boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
- !(x86_arch_cap_msr & ARCH_CAP_MDS_NO)))
- mmio_mitigation = MMIO_MITIGATION_VERW;
- else
- mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
-
if (mmio_nosmt || cpu_mitigations_auto_nosmt())
cpu_smt_disable(false);
}
@@ -675,7 +704,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- mmio_select_mitigation();
rfds_select_mitigation();
/*
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-01-08 20:24 ` [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
@ 2025-02-10 16:42 ` Brendan Jackman
2025-02-10 17:22 ` Kaplan, David
2025-02-10 23:29 ` Josh Poimboeuf
1 sibling, 1 reply; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 16:42 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Wed, 8 Jan 2025 at 21:27, David Kaplan <david.kaplan@amd.com> wrote:
> +static void __init mmio_apply_mitigation(void)
> +{
> if (mmio_mitigation == MMIO_MITIGATION_OFF)
> return;
> /*
> * Enable CPU buffer clear mitigation for host and VMM, if also affected
> * by MDS or TAA. Otherwise, enable mitigation for VMM only.
> */
> if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
> boot_cpu_has(X86_FEATURE_RTM)))
> setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
This is still peeking at other mitigations in _apply_mitigation.
Shouldn't we shunt that logic into _update_mitigation?
I guess this would need a new enum value but that doesn't seem too
bad. Worth it to have all the inter-mitigation dependencies localised
into *_update_mitigation IMO.
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-10 16:42 ` Brendan Jackman
@ 2025-02-10 17:22 ` Kaplan, David
2025-02-10 17:35 ` Brendan Jackman
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-10 17:22 UTC (permalink / raw)
To: Brendan Jackman
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Monday, February 10, 2025 10:42 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>;
> Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, 8 Jan 2025 at 21:27, David Kaplan <david.kaplan@amd.com> wrote:
> > +static void __init mmio_apply_mitigation(void) {
> > if (mmio_mitigation == MMIO_MITIGATION_OFF)
> > return;
>
> > /*
> > * Enable CPU buffer clear mitigation for host and VMM, if also
> > affected
> > * by MDS or TAA. Otherwise, enable mitigation for VMM only.
> > */
> > if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA)
> &&
> > boot_cpu_has(X86_FEATURE_RTM)))
> > setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
>
> This is still peeking at other mitigations in _apply_mitigation.
> Shouldn't we shunt that logic into _update_mitigation?
>
> I guess this would need a new enum value but that doesn't seem too bad. Worth it
> to have all the inter-mitigation dependencies localised into *_update_mitigation IMO.
I don't think it is peeking at other mitigations; it's only looking at what other bugs the CPU has (which is static). Looking at the mds/taa/etc. mitigation values is done in mmio_update_mitigation.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-10 17:22 ` Kaplan, David
@ 2025-02-10 17:35 ` Brendan Jackman
0 siblings, 0 replies; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 17:35 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
On Mon, 10 Feb 2025 at 18:22, Kaplan, David <David.Kaplan@amd.com> wrote:
> > This is still peeking at other mitigations in _apply_mitigation.
> > Shouldn't we shunt that logic into _update_mitigation?
> >
> > I guess this would need a new enum value but that doesn't seem too bad. Worth it
> > to have all the inter-mitigation dependencies localised into *_udpate_mitigation IMO.
>
> I don't think it is peeking at other mitigations, it's only looking at what other bugs the CPU has (which is static). Looking at the mds/taa/etc. mitigation values is done in mmio_update_mitigation.
Hmm, that's true but it doesn't quite shake my underlying feeling that
we're leaving isolation of logic on the table here. I know I said
"inter-mitigation dependencies" but if we could even keep all the
inter-_vuln_ dependencies in one place that would be really nice.
But, I will come back to this once I've looked at the rest of the
series. Maybe it doesn't really make sense to try and fully isolate
these things.
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-01-08 20:24 ` [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
2025-02-10 16:42 ` Brendan Jackman
@ 2025-02-10 23:29 ` Josh Poimboeuf
2025-02-11 20:35 ` Kaplan, David
1 sibling, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-10 23:29 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:46PM -0600, David Kaplan wrote:
> Restructure mmio mitigation to use select/update/apply functions to
> create consistent vulnerability handling.
>
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
> arch/x86/kernel/cpu/bugs.c | 60 ++++++++++++++++++++++++++++----------
> 1 file changed, 44 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 7beb2d6c43bb..a8da097ab2d5 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -68,6 +68,8 @@ static void __init taa_select_mitigation(void);
> static void __init taa_update_mitigation(void);
> static void __init taa_apply_mitigation(void);
> static void __init mmio_select_mitigation(void);
> +static void __init mmio_update_mitigation(void);
> +static void __init mmio_apply_mitigation(void);
> static void __init srbds_select_mitigation(void);
> static void __init l1d_flush_select_mitigation(void);
> static void __init srso_select_mitigation(void);
> @@ -190,6 +192,7 @@ void __init cpu_select_mitigations(void)
> l1tf_select_mitigation();
> mds_select_mitigation();
> taa_select_mitigation();
> + mmio_select_mitigation();
> md_clear_select_mitigation();
> srbds_select_mitigation();
> l1d_flush_select_mitigation();
> @@ -207,9 +210,11 @@ void __init cpu_select_mitigations(void)
> */
> mds_update_mitigation();
> taa_update_mitigation();
> + mmio_update_mitigation();
>
> mds_apply_mitigation();
> taa_apply_mitigation();
> + mmio_apply_mitigation();
> }
>
> /*
> @@ -510,6 +515,45 @@ static void __init mmio_select_mitigation(void)
> return;
> }
>
> + if (mmio_mitigation == MMIO_MITIGATION_OFF)
> + return;
Another seemingly pointless return, the only thing after this is the
MMIO_MITIGATION_AUTO check.
> + /* Microcode will be checked in mmio_update_mitigation(). */
> + if (mmio_mitigation == MMIO_MITIGATION_AUTO)
> + mmio_mitigation = MMIO_MITIGATION_VERW;
> +
> +}
Extra whitespace.
> +
> +static void __init mmio_update_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) || cpu_mitigations_off())
> + return;
> +
> + if (verw_mitigation_enabled())
> + mmio_mitigation = MMIO_MITIGATION_VERW;
> +
> + if (mmio_mitigation == MMIO_MITIGATION_VERW) {
> + /*
> + * Check if the system has the right microcode.
> + *
> + * CPU Fill buffer clear mitigation is enumerated by either an explicit
> + * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH on MDS
> + * affected systems.
> + */
> + if (!((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
> + (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
> + boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
> + !(x86_arch_cap_msr & ARCH_CAP_MDS_NO))))
> + mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
> + }
> +
> + pr_info("%s\n", mmio_strings[mmio_mitigation]);
> + if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
> + pr_info("Unknown: No mitigations\n");
Seems weird to print two messages for the X86_BUG_MMIO_UNKNOWN case?
And note that if it gets enabled by verw_mitigation_enabled() it prints:
MMIO Stale Data: Mitigation: Clear CPU buffers
MMIO Stale Data: Unknown: No mitigations
which is confusing at best :-)
It should probably just print either one or the other, like it did
before (and like mmio_stale_data_show_state() does).
> @@ -538,21 +582,6 @@ static void __init mmio_select_mitigation(void)
> if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
> static_branch_enable(&mds_idle_clear);
Right here it does the following:
/*
* Enable CPU buffer clear mitigation for host and VMM, if also affected
* by MDS or TAA. Otherwise, enable mitigation for VMM only.
*/
if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
boot_cpu_has(X86_FEATURE_RTM)))
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
Isn't that a cross-mitigation dependency? i.e. if
X86_FEATURE_CLEAR_CPU_BUF gets enabled here then the other mitigations
would need to update their mitigation reporting?
Maybe that check can be done in mmio_select_mitigation()?
Then, after that there's:
/*
* X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW based
* mitigations, disable KVM-only mitigation in that case.
*/
if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
static_branch_disable(&mmio_stale_data_clear);
else
static_branch_enable(&mmio_stale_data_clear);
which assumes this is called after the other VERW-enabling
*_apply_mitigation() functions. It feels like this decision should be
made in mmio_update_mitigation().
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-10 23:29 ` Josh Poimboeuf
@ 2025-02-11 20:35 ` Kaplan, David
2025-02-11 23:18 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 20:35 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 5:30 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
>
>
>
> On Wed, Jan 08, 2025 at 02:24:46PM -0600, David Kaplan wrote:
> > Restructure mmio mitigation to use select/update/apply functions to
> > create consistent vulnerability handling.
> >
> > Signed-off-by: David Kaplan <david.kaplan@amd.com>
> > ---
> > arch/x86/kernel/cpu/bugs.c | 60
> > ++++++++++++++++++++++++++++----------
> > 1 file changed, 44 insertions(+), 16 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index 7beb2d6c43bb..a8da097ab2d5 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -68,6 +68,8 @@ static void __init taa_select_mitigation(void);
> > static void __init taa_update_mitigation(void); static void __init
> > taa_apply_mitigation(void); static void __init
> > mmio_select_mitigation(void);
> > +static void __init mmio_update_mitigation(void); static void __init
> > +mmio_apply_mitigation(void);
> > static void __init srbds_select_mitigation(void); static void __init
> > l1d_flush_select_mitigation(void);
> > static void __init srso_select_mitigation(void); @@ -190,6 +192,7 @@
> > void __init cpu_select_mitigations(void)
> > l1tf_select_mitigation();
> > mds_select_mitigation();
> > taa_select_mitigation();
> > + mmio_select_mitigation();
> > md_clear_select_mitigation();
> > srbds_select_mitigation();
> > l1d_flush_select_mitigation();
> > @@ -207,9 +210,11 @@ void __init cpu_select_mitigations(void)
> > */
> > mds_update_mitigation();
> > taa_update_mitigation();
> > + mmio_update_mitigation();
> >
> > mds_apply_mitigation();
> > taa_apply_mitigation();
> > + mmio_apply_mitigation();
> > }
> >
> > /*
> > @@ -510,6 +515,45 @@ static void __init mmio_select_mitigation(void)
> > return;
> > }
> >
> > + if (mmio_mitigation == MMIO_MITIGATION_OFF)
> > + return;
>
> Another seemingly pointless return, the only thing after this is the
> MMIO_MITIGATION_AUTO check.
Ack
>
> > + /* Microcode will be checked in mmio_update_mitigation(). */
> > + if (mmio_mitigation == MMIO_MITIGATION_AUTO)
> > + mmio_mitigation = MMIO_MITIGATION_VERW;
> > +
> > +}
>
> Extra whitespace.
Ack
>
> > +
> > +static void __init mmio_update_mitigation(void) {
> > + if (!boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA) ||
> cpu_mitigations_off())
> > + return;
> > +
> > + if (verw_mitigation_enabled())
> > + mmio_mitigation = MMIO_MITIGATION_VERW;
> > +
> > + if (mmio_mitigation == MMIO_MITIGATION_VERW) {
> > + /*
> > + * Check if the system has the right microcode.
> > + *
> > + * CPU Fill buffer clear mitigation is enumerated by either an explicit
> > + * FB_CLEAR or by the presence of both MD_CLEAR and L1D_FLUSH
> on MDS
> > + * affected systems.
> > + */
> > + if (!((x86_arch_cap_msr & ARCH_CAP_FB_CLEAR) ||
> > + (boot_cpu_has(X86_FEATURE_MD_CLEAR) &&
> > + boot_cpu_has(X86_FEATURE_FLUSH_L1D) &&
> > + !(x86_arch_cap_msr & ARCH_CAP_MDS_NO))))
> > + mmio_mitigation = MMIO_MITIGATION_UCODE_NEEDED;
> > + }
> > +
> > + pr_info("%s\n", mmio_strings[mmio_mitigation]);
> > + if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
> > + pr_info("Unknown: No mitigations\n");
>
> Seems weird to print two messages for the X86_BUG_MMIO_UNKNOWN case?
>
> And note that if it gets enabled by verw_mitigation_enabled() it prints:
>
> MMIO Stale Data: Mitigation: Clear CPU buffers
> MMIO Stale Data: Unknown: No mitigations
>
> which is confusing at best :-)
>
> It should probably just print either one or the other, like it did before (and like
> mmio_stale_data_show_state() does).
Good catch, will fix.
>
> > @@ -538,21 +582,6 @@ static void __init mmio_select_mitigation(void)
> > if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
> > static_branch_enable(&mds_idle_clear);
>
> Right here it does the following:
>
> /*
> * Enable CPU buffer clear mitigation for host and VMM, if also affected
> * by MDS or TAA. Otherwise, enable mitigation for VMM only.
> */
> if (boot_cpu_has_bug(X86_BUG_MDS) ||
> (boot_cpu_has_bug(X86_BUG_TAA) &&
> boot_cpu_has(X86_FEATURE_RTM)))
> setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
>
> Isn't that a cross-mitigation dependency? i.e. if
> X86_FEATURE_CLEAR_CPU_BUF gets enabled here then the other mitigations
> would need to update their mitigation reporting?
I don't think so, nobody should be looking at X86_FEATURE_CLEAR_CPU_BUF to determine their mitigation selection, they should only be looking at the other variables like taa_mitigation as done in the verw_mitigation_enabled() function.
>
> Maybe that check can be done in mmio_select_mitigation()?
>
>
> Then, after that there's:
>
> /*
> * X86_FEATURE_CLEAR_CPU_BUF could be enabled by other VERW
> based
> * mitigations, disable KVM-only mitigation in that case.
> */
> if (boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
> static_branch_disable(&mmio_stale_data_clear);
> else
> static_branch_enable(&mmio_stale_data_clear);
>
> which assumes this is called after the other VERW-enabling
> *_apply_mitigation() functions. It feels like this decision should be
> made in mmio_update_mitigation().
>
Hmm. So I think the only case that is relevant here is if the CPU is immune to MDS and TAA, but is vulnerable to MMIO and RFDS. rfds_apply_mitigation will force X86_FEATURE_CLEAR_CPU_BUF and in that case, we want to disable the static branch. If the CPU was vulnerable to MDS or TAA then MMIO would force X86_FEATURE_CLEAR_CPU_BUF on its own.
So I think that mmio_apply_mitigation could better handle this by checking rfds_mitigation to decide whether to disable the static branch.
Like:
/*
* Enable CPU buffer clear mitigation for host and VMM, if also affected
* by MDS or TAA.
*
* Only enable the VMM mitigation if the CPU buffer clear mitigation is
* not being used.
*/
if (rfds_mitigation == RFDS_MITIGATION_VERW ||
boot_cpu_has_bug(X86_BUG_MDS) ||
(boot_cpu_has_bug(X86_BUG_TAA) &&
boot_cpu_has(X86_FEATURE_RTM))) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
static_branch_disable(&mmio_stale_data_clear);
} else
static_branch_enable(&mmio_stale_data_clear);
Does that sound right?
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-11 20:35 ` Kaplan, David
@ 2025-02-11 23:18 ` Josh Poimboeuf
2025-02-12 17:28 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 23:18 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Feb 11, 2025 at 08:35:27PM +0000, Kaplan, David wrote:
> > > @@ -538,21 +582,6 @@ static void __init mmio_select_mitigation(void)
> > > if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
> > > static_branch_enable(&mds_idle_clear);
> >
> > Right here it does the following:
> >
> > /*
> > * Enable CPU buffer clear mitigation for host and VMM, if also affected
> > * by MDS or TAA. Otherwise, enable mitigation for VMM only.
> > */
> > if (boot_cpu_has_bug(X86_BUG_MDS) ||
> > (boot_cpu_has_bug(X86_BUG_TAA) &&
> > boot_cpu_has(X86_FEATURE_RTM)))
> > setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> >
> > Isn't that a cross-mitigation dependency? i.e. if
> > X86_FEATURE_CLEAR_CPU_BUF gets enabled here then the other mitigations
> > would need to update their mitigation reporting?
>
> I don't think so, nobody should be looking at
> X86_FEATURE_CLEAR_CPU_BUF to determine their mitigation selection,
> they should only be looking at the other variables like taa_mitigation
> as done in the verw_mitigation_enabled() function.
But isn't that a bug in the reporting? AFAICT one of the main
motivations for the cross dependencies (and the *_update_mitigation()
functions) is to fix the reporting if something actually ends up getting
mitigated by something else.
For example, "mds=off tsx_async_abort=full" results in both MDS and TAA
being reported "Mitigated", because they share the same VERW mitigation.
But in the above case, with X86_BUG_MDS, "mds=off mmio_stale_data=full"
shows MDS as vulnerable despite it actually being mitigated by VERW.
> /*
> * Enable CPU buffer clear mitigation for host and VMM, if also affected
> * by MDS or TAA.
> *
> * Only enable the VMM mitigation if the CPU buffer clear mitigation is
> * not being used.
> */
> if (rfds_mitigation == RFDS_MITIGATION_VERW ||
> boot_cpu_has_bug(X86_BUG_MDS) ||
> (boot_cpu_has_bug(X86_BUG_TAA) &&
> boot_cpu_has(X86_FEATURE_RTM))) {
> setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> static_branch_disable(&mmio_stale_data_clear);
> } else
> static_branch_enable(&mmio_stale_data_clear);
>
> Does that sound right?
I *think* that's correct, but this still has the same issue that MDS/TAA
are now getting mitigated but not reported as such.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-11 23:18 ` Josh Poimboeuf
@ 2025-02-12 17:28 ` Kaplan, David
2025-02-12 23:16 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 17:28 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 5:19 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
>
>
>
> On Tue, Feb 11, 2025 at 08:35:27PM +0000, Kaplan, David wrote:
> > > > @@ -538,21 +582,6 @@ static void __init mmio_select_mitigation(void)
> > > > if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
> > > > static_branch_enable(&mds_idle_clear);
> > >
> > > Right here it does the following:
> > >
> > > /*
> > > * Enable CPU buffer clear mitigation for host and VMM, if also affected
> > > * by MDS or TAA. Otherwise, enable mitigation for VMM only.
> > > */
> > > if (boot_cpu_has_bug(X86_BUG_MDS) ||
> > > (boot_cpu_has_bug(X86_BUG_TAA) &&
> > > boot_cpu_has(X86_FEATURE_RTM)))
> > > setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> > >
> > > Isn't that a cross-mitigation dependency? i.e. if
> > > X86_FEATURE_CLEAR_CPU_BUF gets enabled here then the other
> > > mitigations would need to update their mitigation reporting?
> >
> > I don't think so, nobody should be looking at
> > X86_FEATURE_CLEAR_CPU_BUF to determine their mitigation selection,
> > they should only be looking at the other variables like taa_mitigation
> > as done in the verw_mitigation_enabled() function.
>
> But isn't that a bug in the reporting? AFAICT one of the main motivations for the
> cross dependencies (and the *_update_mitigation()
> functions) is to fix the reporting if something actually ends up getting mitigated by
> something else.
>
> For example, "mds=off tsx_async_abort=full" results in both MDS and TAA being
> reported "Mitigated", because they share the same VERW mitigation.
>
> But in the above case, with X86_BUG_MDS, "mds=off mmio_stale_data=full"
> shows MDS as vulnerable despite it actually being mitigated by VERW.
Does it? In that case, mds_update_mitigation() will see that verw_mitigation_enabled() is true (because mmio_mitigation!=MMIO_MITIGATION_OFF) and then enable the mds mitigation.
>
> > /*
> > * Enable CPU buffer clear mitigation for host and VMM, if also affected
> > * by MDS or TAA.
> > *
> > * Only enable the VMM mitigation if the CPU buffer clear mitigation is
> > * not being used.
> > */
> > if (rfds_mitigation == RFDS_MITIGATION_VERW ||
> > boot_cpu_has_bug(X86_BUG_MDS) ||
> > (boot_cpu_has_bug(X86_BUG_TAA) &&
> > boot_cpu_has(X86_FEATURE_RTM))) {
> > setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> > static_branch_disable(&mmio_stale_data_clear);
> > } else
> > static_branch_enable(&mmio_stale_data_clear);
> >
> > Does that sound right?
>
> I *think* that's correct, but this still has the same issue that MDS/TAA are now
> getting mitigated but not reported as such.
>
I think they are getting reported as mitigated because the mmio mitigation was enabled.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-12 17:28 ` Kaplan, David
@ 2025-02-12 23:16 ` Josh Poimboeuf
2025-02-19 18:20 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-12 23:16 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 12, 2025 at 05:28:23PM +0000, Kaplan, David wrote:
> > > > Right here it does the following:
> > > >
> > > > /*
> > > > * Enable CPU buffer clear mitigation for host and VMM, if also affected
> > > > * by MDS or TAA. Otherwise, enable mitigation for VMM only.
> > > > */
> > > > if (boot_cpu_has_bug(X86_BUG_MDS) ||
> > > > (boot_cpu_has_bug(X86_BUG_TAA) &&
> > > > boot_cpu_has(X86_FEATURE_RTM)))
> > > > setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> > > >
> > > > Isn't that a cross-mitigation dependency? i.e. if
> > > > X86_FEATURE_CLEAR_CPU_BUF gets enabled here then the other
> > > > mitigations would need to update their mitigation reporting?
> > >
> > > I don't think so, nobody should be looking at
> > > X86_FEATURE_CLEAR_CPU_BUF to determine their mitigation selection,
> > > they should only be looking at the other variables like taa_mitigation
> > > as done in the verw_mitigation_enabled() function.
> >
> > But isn't that a bug in the reporting? AFAICT one of the main motivations for the
> > cross dependencies (and the *_update_mitigation()
> > functions) is to fix the reporting if something actually ends up getting mitigated by
> > something else.
> >
> > For example, "mds=off tsx_async_abort=full" results in both MDS and TAA being
> > reported "Mitigated", because they share the same VERW mitigation.
> >
> > But in the above case, with X86_BUG_MDS, "mds=off mmio_stale_data=full"
> > shows MDS as vulnerable despite it actually being mitigated by VERW.
>
> Does it? In that case, mds_update_mitigation() will see that
> verw_mitigation_enabled() is true (because
> mmio_mitigation!=MMIO_MITIGATION_OFF) and then enable the mds
> mitigation.
Hrmmm, that's a bit of a maze.
static bool __init verw_mitigation_enabled(void)
{
return (mds_mitigation != MDS_MITIGATION_OFF ||
(taa_mitigation != TAA_MITIGATION_OFF &&
taa_mitigation != TAA_MITIGATION_TSX_DISABLED) ||
mmio_mitigation != MMIO_MITIGATION_OFF ||
rfds_mitigation != RFDS_MITIGATION_OFF);
}
That seems to work by accident. And I haven't managed to convince
myself it works for all edge cases.
Technically, checking !MMIO_MITIGATION_OFF alone isn't enough: MMIO only
needs global VERW if the MDS or TAA bugs are present.
Also, without ucode, the RFDS mitigation doesn't even attempt VERW for
some unknown reason, so just checking !RFDS_MITIGATION_OFF isn't
sufficient.
I'm thinking it should be something like
static bool __init taa_vulnerable(void)
{
return boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM);
}
static bool __init mmio_needs_verw(void)
{
return boot_cpu_has_bug(X86_BUG_MDS) || taa_vulnerable();
}
static bool __init rfds_needs_ucode(void)
{
return !(x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR);
}
static bool __init verw_mitigation_enabled(void)
{
return mds_mitigation != MDS_MITIGATION_OFF ||
(taa_mitigation != TAA_MITIGATION_OFF && taa_vulnerable()) ||
(mmio_mitigation != MMIO_MITIGATION_OFF && mmio_needs_verw()) ||
(rfds_mitigation != RFDS_MITIGATION_OFF && !rfds_needs_ucode());
}
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-12 23:16 ` Josh Poimboeuf
@ 2025-02-19 18:20 ` Borislav Petkov
2025-02-21 21:48 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-02-19 18:20 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Kaplan, David, Thomas Gleixner, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 12, 2025 at 03:16:46PM -0800, Josh Poimboeuf wrote:
> static bool __init verw_mitigation_enabled(void)
> {
> 	return mds_mitigation != MDS_MITIGATION_OFF ||
> 	       (taa_mitigation != TAA_MITIGATION_OFF && taa_vulnerable()) ||
> 	       (mmio_mitigation != MMIO_MITIGATION_OFF && mmio_needs_verw()) ||
> 	       (rfds_mitigation != RFDS_MITIGATION_OFF && !rfds_needs_ucode());
> }
Instead of turning it into head-scratching madness, it might be a lot easier
if all the places which enable the VERW mitigation would do
verw_mitigation_enabled = true;
and then the code can simply check that static var...
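[Editor's note: a minimal userspace sketch of the flag-based approach Boris suggests. All names are illustrative stand-ins, not the actual kernel code; the real functions are __init and consult the real mitigation state.]

```c
#include <assert.h>
#include <stdbool.h>

/* One shared flag: every select function that picks a VERW-based
 * mitigation sets it, so later code checks one variable instead of
 * re-deriving the answer from four per-vuln mitigation enums. */
static bool verw_mitigation_selected;

/* Stand-in for mds_select_mitigation(): wants_verw models the
 * outcome of the real cmdline/hardware checks. */
static void mds_select_mitigation(bool wants_verw)
{
	if (wants_verw)
		verw_mitigation_selected = true;
}

/* Stand-in for rfds_select_mitigation(): same flag, no cross-checks. */
static void rfds_select_mitigation(bool wants_verw)
{
	if (wants_verw)
		verw_mitigation_selected = true;
}

/* The update functions then reduce to a single read. */
static bool verw_mitigation_enabled(void)
{
	return verw_mitigation_selected;
}
```

The design choice here is that each vulnerability reports its own decision at the point it makes it, rather than every consumer reconstructing that decision from everyone else's enums.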
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
2025-02-19 18:20 ` Borislav Petkov
@ 2025-02-21 21:48 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-21 21:48 UTC (permalink / raw)
To: Borislav Petkov, Josh Poimboeuf
Cc: Thomas Gleixner, Peter Zijlstra, Pawan Gupta, Ingo Molnar,
Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Wednesday, February 19, 2025 12:21 PM
> To: Josh Poimboeuf <jpoimboe@kernel.org>
> Cc: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, Feb 12, 2025 at 03:16:46PM -0800, Josh Poimboeuf wrote:
> > static bool __init verw_mitigation_enabled(void)
> > {
> > 	return mds_mitigation != MDS_MITIGATION_OFF ||
> > 	       (taa_mitigation != TAA_MITIGATION_OFF && taa_vulnerable()) ||
> > 	       (mmio_mitigation != MMIO_MITIGATION_OFF && mmio_needs_verw()) ||
> > 	       (rfds_mitigation != RFDS_MITIGATION_OFF && !rfds_needs_ucode());
> > }
>
> Instead of turning it into a head-scratching madness, it might be a lot easier if all the
> places which enable VERW mitigation, would do
>
> verw_mitigation_enabled = true;
>
> and then the code can simply check that static var...
>
Yeah, just implemented this and it does keep it pretty clean.
Thanks! --David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 07/35] x86/bugs: Restructure rfds mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (5 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 06/35] x86/bugs: Restructure mmio mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-10 23:36 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 08/35] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
` (28 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure rfds mitigation to use select/update/apply functions to
create consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 33 ++++++++++++++++++++++++++-------
1 file changed, 26 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index a8da097ab2d5..871b9f93b714 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -70,6 +70,9 @@ static void __init taa_apply_mitigation(void);
static void __init mmio_select_mitigation(void);
static void __init mmio_update_mitigation(void);
static void __init mmio_apply_mitigation(void);
+static void __init rfds_select_mitigation(void);
+static void __init rfds_update_mitigation(void);
+static void __init rfds_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
@@ -193,6 +196,7 @@ void __init cpu_select_mitigations(void)
mds_select_mitigation();
taa_select_mitigation();
mmio_select_mitigation();
+ rfds_select_mitigation();
md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -211,10 +215,12 @@ void __init cpu_select_mitigations(void)
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
+ rfds_update_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
+ rfds_apply_mitigation();
}
/*
@@ -607,9 +613,6 @@ static int __init mmio_stale_data_parse_cmdline(char *str)
}
early_param("mmio_stale_data", mmio_stale_data_parse_cmdline);
-#undef pr_fmt
-#define pr_fmt(fmt) "Register File Data Sampling: " fmt
-
static const char * const rfds_strings[] = {
[RFDS_MITIGATION_OFF] = "Vulnerable",
[RFDS_MITIGATION_VERW] = "Mitigation: Clear Register File",
@@ -627,11 +630,28 @@ static void __init rfds_select_mitigation(void)
if (rfds_mitigation == RFDS_MITIGATION_AUTO)
rfds_mitigation = RFDS_MITIGATION_VERW;
+}
- if (x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR)
+static void __init rfds_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_RFDS) || cpu_mitigations_off())
+ return;
+
+ if (verw_mitigation_enabled())
+ rfds_mitigation = RFDS_MITIGATION_VERW;
+
+ if (rfds_mitigation == RFDS_MITIGATION_VERW) {
+ if (!(x86_arch_cap_msr & ARCH_CAP_RFDS_CLEAR))
+ rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
+ }
+
+ pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
+}
+
+static void __init rfds_apply_mitigation(void)
+{
+ if (rfds_mitigation == RFDS_MITIGATION_VERW)
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
- else
- rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
}
static __init int rfds_parse_cmdline(char *str)
@@ -704,7 +724,6 @@ static void __init md_clear_update_mitigation(void)
static void __init md_clear_select_mitigation(void)
{
- rfds_select_mitigation();
/*
* As these mitigations are inter-related and rely on VERW instruction
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 07/35] x86/bugs: Restructure rfds mitigation
2025-01-08 20:24 ` [PATCH v3 07/35] x86/bugs: Restructure rfds mitigation David Kaplan
@ 2025-02-10 23:36 ` Josh Poimboeuf
2025-02-11 22:49 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-10 23:36 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:47PM -0600, David Kaplan wrote:
> @@ -627,11 +630,28 @@ static void __init rfds_select_mitigation(void)
>
> if (rfds_mitigation == RFDS_MITIGATION_AUTO)
> rfds_mitigation = RFDS_MITIGATION_VERW;
Another superfluous return above this one.
> +static void __init rfds_apply_mitigation(void)
> +{
> + if (rfds_mitigation == RFDS_MITIGATION_VERW)
> setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
> - else
> - rfds_mitigation = RFDS_MITIGATION_UCODE_NEEDED;
Hm, unlike the other VERW mitigations, this doesn't even attempt to do
VERW on missing ucode.
Ah well, it was already like that and I doubt anybody cares at this
point.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 07/35] x86/bugs: Restructure rfds mitigation
2025-02-10 23:36 ` Josh Poimboeuf
@ 2025-02-11 22:49 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 22:49 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 5:36 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 07/35] x86/bugs: Restructure rfds mitigation
>
> On Wed, Jan 08, 2025 at 02:24:47PM -0600, David Kaplan wrote:
> > @@ -627,11 +630,28 @@ static void __init rfds_select_mitigation(void)
> >
> > if (rfds_mitigation == RFDS_MITIGATION_AUTO)
> > rfds_mitigation = RFDS_MITIGATION_VERW;
>
> Another superfluous return above this one.
Will fix
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 08/35] x86/bugs: Remove md_clear_*_mitigation()
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (6 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 07/35] x86/bugs: Restructure rfds mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-01-08 20:24 ` [PATCH v3 09/35] x86/bugs: Restructure srbds mitigation David Kaplan
` (27 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
The functionality in md_clear_update_mitigation() and
md_clear_select_mitigation() is now integrated into the select/update
functions for the MDS, TAA, MMIO, and RFDS vulnerabilities.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 65 --------------------------------------
1 file changed, 65 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 871b9f93b714..6c6a42b2dfe9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -62,8 +62,6 @@ static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
static void __init mds_apply_mitigation(void);
-static void __init md_clear_update_mitigation(void);
-static void __init md_clear_select_mitigation(void);
static void __init taa_select_mitigation(void);
static void __init taa_update_mitigation(void);
static void __init taa_apply_mitigation(void);
@@ -197,7 +195,6 @@ void __init cpu_select_mitigations(void)
taa_select_mitigation();
mmio_select_mitigation();
rfds_select_mitigation();
- md_clear_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
@@ -671,68 +668,6 @@ static __init int rfds_parse_cmdline(char *str)
}
early_param("reg_file_data_sampling", rfds_parse_cmdline);
-#undef pr_fmt
-#define pr_fmt(fmt) "" fmt
-
-static void __init md_clear_update_mitigation(void)
-{
- if (cpu_mitigations_off())
- return;
-
- if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
- goto out;
-
- /*
- * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
- * Stale Data mitigation, if necessary.
- */
- if (mds_mitigation == MDS_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_MDS)) {
- mds_mitigation = MDS_MITIGATION_FULL;
- mds_select_mitigation();
- }
- if (taa_mitigation == TAA_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_TAA)) {
- taa_mitigation = TAA_MITIGATION_VERW;
- taa_select_mitigation();
- }
- /*
- * MMIO_MITIGATION_OFF is not checked here so that mmio_stale_data_clear
- * gets updated correctly as per X86_FEATURE_CLEAR_CPU_BUF state.
- */
- if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA)) {
- mmio_mitigation = MMIO_MITIGATION_VERW;
- mmio_select_mitigation();
- }
- if (rfds_mitigation == RFDS_MITIGATION_OFF &&
- boot_cpu_has_bug(X86_BUG_RFDS)) {
- rfds_mitigation = RFDS_MITIGATION_VERW;
- rfds_select_mitigation();
- }
-out:
- if (boot_cpu_has_bug(X86_BUG_MDS))
- pr_info("MDS: %s\n", mds_strings[mds_mitigation]);
- if (boot_cpu_has_bug(X86_BUG_TAA))
- pr_info("TAA: %s\n", taa_strings[taa_mitigation]);
- if (boot_cpu_has_bug(X86_BUG_MMIO_STALE_DATA))
- pr_info("MMIO Stale Data: %s\n", mmio_strings[mmio_mitigation]);
- else if (boot_cpu_has_bug(X86_BUG_MMIO_UNKNOWN))
- pr_info("MMIO Stale Data: Unknown: No mitigations\n");
- if (boot_cpu_has_bug(X86_BUG_RFDS))
- pr_info("Register File Data Sampling: %s\n", rfds_strings[rfds_mitigation]);
-}
-
-static void __init md_clear_select_mitigation(void)
-{
-
- /*
- * As these mitigations are inter-related and rely on VERW instruction
- * to clear the microarchitural buffers, update and print their status
- * after mitigation selection is done for each of these vulnerabilities.
- */
- md_clear_update_mitigation();
-}
-
#undef pr_fmt
#define pr_fmt(fmt) "SRBDS: " fmt
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v3 09/35] x86/bugs: Restructure srbds mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (7 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 08/35] x86/bugs: Remove md_clear_*_mitigation() David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-10 23:44 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 10/35] x86/bugs: Restructure gds mitigation David Kaplan
` (26 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure srbds to use select/apply functions to create consistent
vulnerability handling.
Define new AUTO mitigation for SRBDS.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 6c6a42b2dfe9..fedd693b2218 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -72,6 +72,7 @@ static void __init rfds_select_mitigation(void);
static void __init rfds_update_mitigation(void);
static void __init rfds_apply_mitigation(void);
static void __init srbds_select_mitigation(void);
+static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
@@ -218,6 +219,7 @@ void __init cpu_select_mitigations(void)
taa_apply_mitigation();
mmio_apply_mitigation();
rfds_apply_mitigation();
+ srbds_apply_mitigation();
}
/*
@@ -673,6 +675,7 @@ early_param("reg_file_data_sampling", rfds_parse_cmdline);
enum srbds_mitigations {
SRBDS_MITIGATION_OFF,
+ SRBDS_MITIGATION_AUTO,
SRBDS_MITIGATION_UCODE_NEEDED,
SRBDS_MITIGATION_FULL,
SRBDS_MITIGATION_TSX_OFF,
@@ -680,7 +683,7 @@ enum srbds_mitigations {
};
static enum srbds_mitigations srbds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_FULL : SRBDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_SRBDS) ? SRBDS_MITIGATION_AUTO : SRBDS_MITIGATION_OFF;
static const char * const srbds_strings[] = {
[SRBDS_MITIGATION_OFF] = "Vulnerable",
@@ -734,6 +737,9 @@ static void __init srbds_select_mitigation(void)
if (!boot_cpu_has_bug(X86_BUG_SRBDS))
return;
+ if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
+ srbds_mitigation = SRBDS_MITIGATION_FULL;
+
/*
* Check to see if this is one of the MDS_NO systems supporting TSX that
* are only exposed to SRBDS when TSX is enabled or when CPU is affected
@@ -748,6 +754,12 @@ static void __init srbds_select_mitigation(void)
srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
else if (cpu_mitigations_off() || srbds_off)
srbds_mitigation = SRBDS_MITIGATION_OFF;
+}
+
+static void __init srbds_apply_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_SRBDS))
+ return;
update_srbds_msr();
pr_info("%s\n", srbds_strings[srbds_mitigation]);
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 09/35] x86/bugs: Restructure srbds mitigation
2025-01-08 20:24 ` [PATCH v3 09/35] x86/bugs: Restructure srbds mitigation David Kaplan
@ 2025-02-10 23:44 ` Josh Poimboeuf
2025-02-11 22:59 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-10 23:44 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:49PM -0600, David Kaplan wrote:
> +static void __init srbds_apply_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> + return;
I realize this is just preserving the existing behavior, but for
consistency with the others this should check for cpu_mitigations_off()
so the mitigation doesn't get printed.
> update_srbds_msr();
> pr_info("%s\n", srbds_strings[srbds_mitigation]);
More generally, IMO these should be printed in the select (or update)
functions rather than in the apply functions.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 09/35] x86/bugs: Restructure srbds mitigation
2025-02-10 23:44 ` Josh Poimboeuf
@ 2025-02-11 22:59 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 22:59 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 5:44 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 09/35] x86/bugs: Restructure srbds mitigation
>
> On Wed, Jan 08, 2025 at 02:24:49PM -0600, David Kaplan wrote:
> > +static void __init srbds_apply_mitigation(void) {
> > + if (!boot_cpu_has_bug(X86_BUG_SRBDS))
> > + return;
>
> I realize this is just preserving the existing behavior, but for consistency with the
> others this should check for cpu_mitigations_off() so the mitigation doesn't get
> printed.
Yeah, we discussed this in v2 of the series. I believe your preference was not to print anything if cpu_mitigations_off() but to print if a bug-specific mitigation was disabled (e.g., retbleed=off). I see Boris was ok with that, so I guess we can go with that.
>
> > update_srbds_msr();
> > pr_info("%s\n", srbds_strings[srbds_mitigation]);
>
> More generally, IMO these should be printed in the select (or update) functions
> rather than in the apply functions.
Agreed.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (8 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 09/35] x86/bugs: Restructure srbds mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-10 17:06 ` Brendan Jackman
2025-02-10 23:52 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 11/35] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
` (25 subsequent siblings)
35 siblings, 2 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure gds mitigation to use select/apply functions to create
consistent vulnerability handling.
Define new AUTO mitigation for gds.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fedd693b2218..58ac99b74bd3 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -76,6 +76,7 @@ static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
+static void __init gds_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@@ -220,6 +221,7 @@ void __init cpu_select_mitigations(void)
mmio_apply_mitigation();
rfds_apply_mitigation();
srbds_apply_mitigation();
+ gds_apply_mitigation();
}
/*
@@ -811,6 +813,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
enum gds_mitigations {
GDS_MITIGATION_OFF,
+ GDS_MITIGATION_AUTO,
GDS_MITIGATION_UCODE_NEEDED,
GDS_MITIGATION_FORCE,
GDS_MITIGATION_FULL,
@@ -819,7 +822,7 @@ enum gds_mitigations {
};
static enum gds_mitigations gds_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL : GDS_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_AUTO : GDS_MITIGATION_OFF;
static const char * const gds_strings[] = {
[GDS_MITIGATION_OFF] = "Vulnerable",
@@ -860,6 +863,7 @@ void update_gds_msr(void)
case GDS_MITIGATION_FORCE:
case GDS_MITIGATION_UCODE_NEEDED:
case GDS_MITIGATION_HYPERVISOR:
+ case GDS_MITIGATION_AUTO:
return;
}
@@ -883,13 +887,16 @@ static void __init gds_select_mitigation(void)
if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
gds_mitigation = GDS_MITIGATION_HYPERVISOR;
- goto out;
+ return;
}
if (cpu_mitigations_off())
gds_mitigation = GDS_MITIGATION_OFF;
/* Will verify below that mitigation _can_ be disabled */
+ if (gds_mitigation == GDS_MITIGATION_AUTO)
+ gds_mitigation = GDS_MITIGATION_FULL;
+
/* No microcode */
if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
if (gds_mitigation == GDS_MITIGATION_FORCE) {
@@ -902,7 +909,7 @@ static void __init gds_select_mitigation(void)
} else {
gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
}
- goto out;
+ return;
}
/* Microcode has mitigation, use it */
@@ -923,9 +930,16 @@ static void __init gds_select_mitigation(void)
*/
gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
}
+}
+
+static void __init gds_apply_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_GDS))
+ return;
+
+ if (x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)
+ update_gds_msr();
- update_gds_msr();
-out:
pr_info("%s\n", gds_strings[gds_mitigation]);
}
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
2025-01-08 20:24 ` [PATCH v3 10/35] x86/bugs: Restructure gds mitigation David Kaplan
@ 2025-02-10 17:06 ` Brendan Jackman
2025-02-10 17:27 ` Kaplan, David
2025-02-10 23:52 ` Josh Poimboeuf
1 sibling, 1 reply; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 17:06 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Wed, 8 Jan 2025 at 21:28, David Kaplan <david.kaplan@amd.com> wrote:
>
> Restructure gds mitigation to use select/apply functions to create
> consistent vulnerability handling.
>
> Define new AUTO mitigation for gds.
>
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
> arch/x86/kernel/cpu/bugs.c | 24 +++++++++++++++++++-----
> 1 file changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index fedd693b2218..58ac99b74bd3 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -76,6 +76,7 @@ static void __init srbds_apply_mitigation(void);
> static void __init l1d_flush_select_mitigation(void);
> static void __init srso_select_mitigation(void);
> static void __init gds_select_mitigation(void);
> +static void __init gds_apply_mitigation(void);
>
> /* The base value of the SPEC_CTRL MSR without task-specific bits set */
> u64 x86_spec_ctrl_base;
> @@ -220,6 +221,7 @@ void __init cpu_select_mitigations(void)
> mmio_apply_mitigation();
> rfds_apply_mitigation();
> srbds_apply_mitigation();
> + gds_apply_mitigation();
> }
>
> /*
> @@ -811,6 +813,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
>
> enum gds_mitigations {
> GDS_MITIGATION_OFF,
> + GDS_MITIGATION_AUTO,
> GDS_MITIGATION_UCODE_NEEDED,
> GDS_MITIGATION_FORCE,
> GDS_MITIGATION_FULL,
> @@ -819,7 +822,7 @@ enum gds_mitigations {
> };
>
> static enum gds_mitigations gds_mitigation __ro_after_init =
> - IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL : GDS_MITIGATION_OFF;
> + IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_AUTO : GDS_MITIGATION_OFF;
>
> static const char * const gds_strings[] = {
> [GDS_MITIGATION_OFF] = "Vulnerable",
> @@ -860,6 +863,7 @@ void update_gds_msr(void)
> case GDS_MITIGATION_FORCE:
> case GDS_MITIGATION_UCODE_NEEDED:
> case GDS_MITIGATION_HYPERVISOR:
> + case GDS_MITIGATION_AUTO:
> return;
> }
>
> @@ -883,13 +887,16 @@ static void __init gds_select_mitigation(void)
>
> if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> gds_mitigation = GDS_MITIGATION_HYPERVISOR;
> - goto out;
> + return;
> }
>
> if (cpu_mitigations_off())
> gds_mitigation = GDS_MITIGATION_OFF;
> /* Will verify below that mitigation _can_ be disabled */
>
> + if (gds_mitigation == GDS_MITIGATION_AUTO)
> + gds_mitigation = GDS_MITIGATION_FULL;
> +
> /* No microcode */
> if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
> if (gds_mitigation == GDS_MITIGATION_FORCE) {
> @@ -902,7 +909,7 @@ static void __init gds_select_mitigation(void)
> } else {
> gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
> }
> - goto out;
> + return;
> }
>
> /* Microcode has mitigation, use it */
> @@ -923,9 +930,16 @@ static void __init gds_select_mitigation(void)
> */
> gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
> }
> +}
> +
> +static void __init gds_apply_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_GDS))
> + return;
> +
> + if (x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)
> + update_gds_msr();
IMO it's a shame to be looking at MSR bits in here instead of just
relying on the direct output of the select/update functions.
I think in this case we can just remove the conditional since if
!ARCH_CAP_GDS_CTRL then gds_mitigation must be FORCE or UCODE_NEEDED
in which case update_gds_msr() is a nop.
Now I make these comments I realise maybe my expectation about these
three functions is not actually the same as yours. Here's how I
envisaged your design:
- select: Look around at the hardware and the cmdline and decide what
we think we wanna do in fairly abstract terms. Record that result in
*_mitigation.
- update: Look around at the other mitigations and potentially change
our mind (or perhaps just update *_mitigation to reflect mitigation
that is being done regardless for other vulns, which also mitigate
this vuln).
- apply: Poke the hardware/cap flags/static keys/etc to actuate the
decision we made in the previous steps.
Let me know if that's aligned with your vision or not.
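[Editor's note: the three-phase contract described above can be sketched in a few lines of plain C. The enum, the `other_vuln_mitigated` input, and the "hardware" flag are all invented for illustration; the kernel versions are __init functions that consult real cmdline and CPU state.]

```c
#include <assert.h>
#include <stdbool.h>

enum foo_mitigations { FOO_MITIGATION_OFF, FOO_MITIGATION_AUTO, FOO_MITIGATION_FULL };

static enum foo_mitigations foo_mitigation = FOO_MITIGATION_AUTO;
static bool other_vuln_mitigated;	/* stand-in for e.g. verw_mitigation_enabled() */
static bool hw_mitigation_applied;	/* stand-in for an MSR write or cpu cap */

/* select: decide in isolation, from cmdline/hardware only */
static void foo_select_mitigation(void)
{
	if (foo_mitigation == FOO_MITIGATION_AUTO)
		foo_mitigation = FOO_MITIGATION_FULL;
}

/* update: possibly change our mind based on other vulns' decisions */
static void foo_update_mitigation(void)
{
	if (foo_mitigation == FOO_MITIGATION_OFF && other_vuln_mitigated)
		foo_mitigation = FOO_MITIGATION_FULL;
}

/* apply: only here do we poke the "hardware" */
static void foo_apply_mitigation(void)
{
	if (foo_mitigation == FOO_MITIGATION_FULL)
		hw_mitigation_applied = true;
}
```

The point of the split is that select never reads another vuln's state and apply never makes a decision; all cross-vuln reasoning is confined to update.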
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
2025-02-10 17:06 ` Brendan Jackman
@ 2025-02-10 17:27 ` Kaplan, David
2025-02-10 17:40 ` Brendan Jackman
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-10 17:27 UTC (permalink / raw)
To: Brendan Jackman
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Monday, February 10, 2025 11:06 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>;
> Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
>
> On Wed, 8 Jan 2025 at 21:28, David Kaplan <david.kaplan@amd.com> wrote:
> >
> > Restructure gds mitigation to use select/apply functions to create
> > consistent vulnerability handling.
> >
> > Define new AUTO mitigation for gds.
> >
> > Signed-off-by: David Kaplan <david.kaplan@amd.com>
> > ---
> > arch/x86/kernel/cpu/bugs.c | 24 +++++++++++++++++++-----
> > 1 file changed, 19 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index fedd693b2218..58ac99b74bd3 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -76,6 +76,7 @@ static void __init srbds_apply_mitigation(void);
> > static void __init l1d_flush_select_mitigation(void);
> > static void __init srso_select_mitigation(void); static void __init
> > gds_select_mitigation(void);
> > +static void __init gds_apply_mitigation(void);
> >
> > /* The base value of the SPEC_CTRL MSR without task-specific bits set
> > */
> > u64 x86_spec_ctrl_base;
> > @@ -220,6 +221,7 @@ void __init cpu_select_mitigations(void)
> > mmio_apply_mitigation();
> > rfds_apply_mitigation();
> > srbds_apply_mitigation();
> > + gds_apply_mitigation();
> > }
> >
> > /*
> > @@ -811,6 +813,7 @@ early_param("l1d_flush", l1d_flush_parse_cmdline);
> >
> > enum gds_mitigations {
> > GDS_MITIGATION_OFF,
> > + GDS_MITIGATION_AUTO,
> > GDS_MITIGATION_UCODE_NEEDED,
> > GDS_MITIGATION_FORCE,
> > GDS_MITIGATION_FULL,
> > @@ -819,7 +822,7 @@ enum gds_mitigations { };
> >
> > static enum gds_mitigations gds_mitigation __ro_after_init =
> > - IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_FULL :
> GDS_MITIGATION_OFF;
> > + IS_ENABLED(CONFIG_MITIGATION_GDS) ? GDS_MITIGATION_AUTO
> :
> > + GDS_MITIGATION_OFF;
> >
> > static const char * const gds_strings[] = {
> > [GDS_MITIGATION_OFF] = "Vulnerable",
> > @@ -860,6 +863,7 @@ void update_gds_msr(void)
> > case GDS_MITIGATION_FORCE:
> > case GDS_MITIGATION_UCODE_NEEDED:
> > case GDS_MITIGATION_HYPERVISOR:
> > + case GDS_MITIGATION_AUTO:
> > return;
> > }
> >
> > @@ -883,13 +887,16 @@ static void __init gds_select_mitigation(void)
> >
> > if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
> > gds_mitigation = GDS_MITIGATION_HYPERVISOR;
> > - goto out;
> > + return;
> > }
> >
> > if (cpu_mitigations_off())
> > gds_mitigation = GDS_MITIGATION_OFF;
> > /* Will verify below that mitigation _can_ be disabled */
> >
> > + if (gds_mitigation == GDS_MITIGATION_AUTO)
> > + gds_mitigation = GDS_MITIGATION_FULL;
> > +
> > /* No microcode */
> > if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
> > if (gds_mitigation == GDS_MITIGATION_FORCE) { @@
> > -902,7 +909,7 @@ static void __init gds_select_mitigation(void)
> > } else {
> > gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
> > }
> > - goto out;
> > + return;
> > }
> >
> > /* Microcode has mitigation, use it */ @@ -923,9 +930,16 @@
> > static void __init gds_select_mitigation(void)
> > */
> > gds_mitigation = GDS_MITIGATION_FULL_LOCKED;
> > }
> > +}
> > +
> > +static void __init gds_apply_mitigation(void) {
> > + if (!boot_cpu_has_bug(X86_BUG_GDS))
> > + return;
> > +
> > + if (x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)
> > + update_gds_msr();
>
> IMO it's a shame to be looking at MSR bits in here instead of just relying on the
> direct output of the select/update functions.
>
> I think in this case we can just remove the conditional since if
> !ARCH_CAP_GDS_CTRL then gds_mitigation must be FORCE or
> UCODE_NEEDED in which case update_gds_msr() is a nop.
>
> Now I make these comments I realise maybe my expectation about these three
> functions is not actually the same as yours. Here's how I envisaged your design:
>
> - select: Look around at the hardware and the cmdline and decide what we think we
> wanna do in fairly abstract terms. Record that result in *_mitigation.
>
> - update: Look around at the other mitigations and potentially change our mind (or
> perhaps just update *_mitigation to reflect mitigation that is being done regardless
> for other vulns, which also mitigate this vuln).
>
> - apply: Poke the hardware/cap flags/static keys/etc to actuate the decision we
> made in the previous steps.
>
> Let me know if that's aligned with your vision or not.
Yes, that is well-aligned. Basically select picks a mitigation based on the hardware and cmdline but not the mitigations of any other bugs. Then update re-evaluates that and may change our mind, and apply pokes the hardware.
On ARCH_CAP_GDS_CTRL, I thought that check is really just verifying the MSR is present before we try to read it.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
2025-02-10 17:27 ` Kaplan, David
@ 2025-02-10 17:40 ` Brendan Jackman
0 siblings, 0 replies; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 17:40 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
On Mon, 10 Feb 2025 at 18:27, Kaplan, David <David.Kaplan@amd.com> wrote:
> On the ARCH_CAP_GDS_CTRL, I thought that check is really just to check if the MSR is present, before we try to read it.
Ah, yeah, if you view the conditional as saying "does the mitigation
control MSR exist" rather than as making a mitigation policy decision,
then I agree it makes sense.
#define ARCH_CAP_GDS_CTRL	BIT(25)	/*
					 * CPU is vulnerable to Gather
					 * Data Sampling (GDS) and
					 * has controls for mitigation.
					 */
So... shrug, seems fine to me.
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
2025-01-08 20:24 ` [PATCH v3 10/35] x86/bugs: Restructure gds mitigation David Kaplan
2025-02-10 17:06 ` Brendan Jackman
@ 2025-02-10 23:52 ` Josh Poimboeuf
2025-02-12 15:36 ` Kaplan, David
1 sibling, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-10 23:52 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:50PM -0600, David Kaplan wrote:
> @@ -902,7 +909,7 @@ static void __init gds_select_mitigation(void)
> } else {
> gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
> }
> - goto out;
> + return;
So right above this it clears X86_FEATURE_AVX, should that not be
deferred until gds_apply_mitigation()?
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
2025-02-10 23:52 ` Josh Poimboeuf
@ 2025-02-12 15:36 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 15:36 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 5:53 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 10/35] x86/bugs: Restructure gds mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, Jan 08, 2025 at 02:24:50PM -0600, David Kaplan wrote:
> > @@ -902,7 +909,7 @@ static void __init gds_select_mitigation(void)
> > } else {
> > gds_mitigation = GDS_MITIGATION_UCODE_NEEDED;
> > }
> > - goto out;
> > + return;
>
> So right above this it clears X86_FEATURE_AVX, should that not be deferred until
> gds_apply_mitigation()?
Yes, will move it.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 11/35] x86/bugs: Restructure spectre_v1 mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (9 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 10/35] x86/bugs: Restructure gds mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-01-08 20:24 ` [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
` (24 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure spectre_v1 to use select/apply functions to create
consistent vulnerability handling.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 58ac99b74bd3..3d468bd9573f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -54,6 +54,7 @@
*/
static void __init spectre_v1_select_mitigation(void);
+static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
@@ -216,6 +217,7 @@ void __init cpu_select_mitigations(void)
mmio_update_mitigation();
rfds_update_mitigation();
+ spectre_v1_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1000,10 +1002,14 @@ static bool smap_works_speculatively(void)
static void __init spectre_v1_select_mitigation(void)
{
- if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+}
+
+static void __init spectre_v1_apply_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
return;
- }
if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
/*
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (10 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 11/35] x86/bugs: Restructure spectre_v1 mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-01-09 5:22 ` Pawan Gupta
` (3 more replies)
2025-01-08 20:24 ` [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
` (23 subsequent siblings)
35 siblings, 4 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure retbleed mitigation to use select/update/apply functions to
create consistent vulnerability handling. The retbleed_update_mitigation()
simplifies the dependency between spectre_v2 and retbleed.
The command line options now directly select a preferred mitigation
which simplifies the logic.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 170 +++++++++++++++++--------------------
1 file changed, 77 insertions(+), 93 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 3d468bd9573f..66abc398d5b4 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -57,6 +57,8 @@ static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
static void __init retbleed_select_mitigation(void);
+static void __init retbleed_update_mitigation(void);
+static void __init retbleed_apply_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
@@ -180,11 +182,6 @@ void __init cpu_select_mitigations(void)
/* Select the proper CPU mitigations before patching alternatives: */
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
- /*
- * retbleed_select_mitigation() relies on the state set by
- * spectre_v2_select_mitigation(); specifically it wants to know about
- * spectre_v2=ibrs.
- */
retbleed_select_mitigation();
/*
* spectre_v2_user_select_mitigation() relies on the state set by
@@ -212,12 +209,14 @@ void __init cpu_select_mitigations(void)
* After mitigations are selected, some may need to update their
* choices.
*/
+ retbleed_update_mitigation();
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
rfds_update_mitigation();
spectre_v1_apply_mitigation();
+ retbleed_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1064,6 +1063,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
enum retbleed_mitigation {
RETBLEED_MITIGATION_NONE,
+ RETBLEED_MITIGATION_AUTO,
RETBLEED_MITIGATION_UNRET,
RETBLEED_MITIGATION_IBPB,
RETBLEED_MITIGATION_IBRS,
@@ -1071,14 +1071,6 @@ enum retbleed_mitigation {
RETBLEED_MITIGATION_STUFF,
};
-enum retbleed_mitigation_cmd {
- RETBLEED_CMD_OFF,
- RETBLEED_CMD_AUTO,
- RETBLEED_CMD_UNRET,
- RETBLEED_CMD_IBPB,
- RETBLEED_CMD_STUFF,
-};
-
static const char * const retbleed_strings[] = {
[RETBLEED_MITIGATION_NONE] = "Vulnerable",
[RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
@@ -1089,9 +1081,7 @@ static const char * const retbleed_strings[] = {
};
static enum retbleed_mitigation retbleed_mitigation __ro_after_init =
- RETBLEED_MITIGATION_NONE;
-static enum retbleed_mitigation_cmd retbleed_cmd __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_CMD_AUTO : RETBLEED_CMD_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_RETBLEED) ? RETBLEED_MITIGATION_AUTO : RETBLEED_MITIGATION_NONE;
static int __ro_after_init retbleed_nosmt = false;
@@ -1108,15 +1098,15 @@ static int __init retbleed_parse_cmdline(char *str)
}
if (!strcmp(str, "off")) {
- retbleed_cmd = RETBLEED_CMD_OFF;
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
} else if (!strcmp(str, "auto")) {
- retbleed_cmd = RETBLEED_CMD_AUTO;
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
} else if (!strcmp(str, "unret")) {
- retbleed_cmd = RETBLEED_CMD_UNRET;
+ retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
} else if (!strcmp(str, "ibpb")) {
- retbleed_cmd = RETBLEED_CMD_IBPB;
+ retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
} else if (!strcmp(str, "stuff")) {
- retbleed_cmd = RETBLEED_CMD_STUFF;
+ retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
} else if (!strcmp(str, "nosmt")) {
retbleed_nosmt = true;
} else if (!strcmp(str, "force")) {
@@ -1137,53 +1127,38 @@ early_param("retbleed", retbleed_parse_cmdline);
static void __init retbleed_select_mitigation(void)
{
- bool mitigate_smt = false;
-
- if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
- return;
-
- switch (retbleed_cmd) {
- case RETBLEED_CMD_OFF:
+ if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off()) {
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
return;
+ }
- case RETBLEED_CMD_UNRET:
- if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
- retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
- } else {
+ switch (retbleed_mitigation) {
+ case RETBLEED_MITIGATION_UNRET:
+ if (!IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY)) {
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
pr_err("WARNING: kernel not compiled with MITIGATION_UNRET_ENTRY.\n");
- goto do_cmd_auto;
}
break;
-
- case RETBLEED_CMD_IBPB:
+ case RETBLEED_MITIGATION_IBPB:
if (!boot_cpu_has(X86_FEATURE_IBPB)) {
pr_err("WARNING: CPU does not support IBPB.\n");
- goto do_cmd_auto;
- } else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
- } else {
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ } else if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
- goto do_cmd_auto;
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
}
break;
-
- case RETBLEED_CMD_STUFF:
- if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) &&
- spectre_v2_enabled == SPECTRE_V2_RETPOLINE) {
- retbleed_mitigation = RETBLEED_MITIGATION_STUFF;
-
- } else {
- if (IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING))
- pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
- else
- pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
-
- goto do_cmd_auto;
+ case RETBLEED_MITIGATION_STUFF:
+ if (!IS_ENABLED(CONFIG_MITIGATION_CALL_DEPTH_TRACKING)) {
+ pr_err("WARNING: kernel not compiled with MITIGATION_CALL_DEPTH_TRACKING.\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
}
break;
+ default:
+ break;
+ }
-do_cmd_auto:
- case RETBLEED_CMD_AUTO:
+ if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
@@ -1192,17 +1167,57 @@ static void __init retbleed_select_mitigation(void)
boot_cpu_has(X86_FEATURE_IBPB))
retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
}
+ }
+}
- /*
- * The Intel mitigation (IBRS or eIBRS) was already selected in
- * spectre_v2_select_mitigation(). 'retbleed_mitigation' will
- * be set accordingly below.
- */
+static void __init retbleed_update_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
+ return;
- break;
+ if (retbleed_mitigation == RETBLEED_MITIGATION_NONE)
+ goto out;
+ /*
+ * Let IBRS trump all on Intel without affecting the effects of the
+ * retbleed= cmdline option except for call depth based stuffing
+ */
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+ switch (spectre_v2_enabled) {
+ case SPECTRE_V2_IBRS:
+ retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
+ break;
+ case SPECTRE_V2_EIBRS:
+ case SPECTRE_V2_EIBRS_RETPOLINE:
+ case SPECTRE_V2_EIBRS_LFENCE:
+ retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
+ break;
+ default:
+ if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
+ pr_err(RETBLEED_INTEL_MSG);
+ }
}
+ if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF) {
+ if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
+ pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
+ retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
+ /* Try again */
+ retbleed_select_mitigation();
+ }
+ }
+out:
+ pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+}
+
+
+static void __init retbleed_apply_mitigation(void)
+{
+ bool mitigate_smt = false;
+
switch (retbleed_mitigation) {
+ case RETBLEED_MITIGATION_NONE:
+ return;
+
case RETBLEED_MITIGATION_UNRET:
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
setup_force_cpu_cap(X86_FEATURE_UNRET);
@@ -1254,27 +1269,6 @@ static void __init retbleed_select_mitigation(void)
(retbleed_nosmt || cpu_mitigations_auto_nosmt()))
cpu_smt_disable(false);
- /*
- * Let IBRS trump all on Intel without affecting the effects of the
- * retbleed= cmdline option except for call depth based stuffing
- */
- if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
- switch (spectre_v2_enabled) {
- case SPECTRE_V2_IBRS:
- retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
- break;
- case SPECTRE_V2_EIBRS:
- case SPECTRE_V2_EIBRS_RETPOLINE:
- case SPECTRE_V2_EIBRS_LFENCE:
- retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
- break;
- default:
- if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
- pr_err(RETBLEED_INTEL_MSG);
- }
- }
-
- pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
}
#undef pr_fmt
@@ -1827,16 +1821,6 @@ static void __init spectre_v2_select_mitigation(void)
break;
}
- if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
- boot_cpu_has_bug(X86_BUG_RETBLEED) &&
- retbleed_cmd != RETBLEED_CMD_OFF &&
- retbleed_cmd != RETBLEED_CMD_STUFF &&
- boot_cpu_has(X86_FEATURE_IBRS) &&
- boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
- mode = SPECTRE_V2_IBRS;
- break;
- }
-
mode = spectre_v2_select_retpoline();
break;
@@ -1979,7 +1963,7 @@ static void __init spectre_v2_select_mitigation(void)
(boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)) {
- if (retbleed_cmd != RETBLEED_CMD_IBPB) {
+ if (retbleed_mitigation != RETBLEED_MITIGATION_IBPB) {
setup_force_cpu_cap(X86_FEATURE_USE_IBPB_FW);
pr_info("Enabling Speculation Barrier for firmware calls\n");
}
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-08 20:24 ` [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
@ 2025-01-09 5:22 ` Pawan Gupta
2025-01-09 15:26 ` Kaplan, David
2025-01-10 18:45 ` David Laight
2025-02-10 18:35 ` Brendan Jackman
` (2 subsequent siblings)
3 siblings, 2 replies; 138+ messages in thread
From: Pawan Gupta @ 2025-01-09 5:22 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:52PM -0600, David Kaplan wrote:
[...]
> @@ -1064,6 +1063,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
>
> enum retbleed_mitigation {
> RETBLEED_MITIGATION_NONE,
> + RETBLEED_MITIGATION_AUTO,
This new enum ...
> RETBLEED_MITIGATION_UNRET,
> RETBLEED_MITIGATION_IBPB,
> RETBLEED_MITIGATION_IBRS,
> @@ -1071,14 +1071,6 @@ enum retbleed_mitigation {
> RETBLEED_MITIGATION_STUFF,
> };
>
> -enum retbleed_mitigation_cmd {
> - RETBLEED_CMD_OFF,
> - RETBLEED_CMD_AUTO,
> - RETBLEED_CMD_UNRET,
> - RETBLEED_CMD_IBPB,
> - RETBLEED_CMD_STUFF,
> -};
> -
> static const char * const retbleed_strings[] = {
> [RETBLEED_MITIGATION_NONE] = "Vulnerable",
> [RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
... does not have a corresponding entry in the strings array. AUTO is the
default, and it is possible that mitigation mode can stay AUTO throughout
the retbleed mitigation selection depending on cmdline and CONFIGs. e.g.
retbleed=stuff and spectre_v2=off.
Other issue is below print in retbleed_update_mitigation() will dereference
a NULL pointer:
pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-09 5:22 ` Pawan Gupta
@ 2025-01-09 15:26 ` Kaplan, David
2025-01-09 16:40 ` Pawan Gupta
2025-01-10 18:45 ` David Laight
1 sibling, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-01-09 15:26 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Wednesday, January 8, 2025 11:23 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
>
>
>
> On Wed, Jan 08, 2025 at 02:24:52PM -0600, David Kaplan wrote:
> [...]
> > @@ -1064,6 +1063,7 @@ enum spectre_v2_mitigation spectre_v2_enabled
> > __ro_after_init = SPECTRE_V2_NONE;
> >
> > enum retbleed_mitigation {
> > RETBLEED_MITIGATION_NONE,
> > + RETBLEED_MITIGATION_AUTO,
>
> This new enum ...
>
> > RETBLEED_MITIGATION_UNRET,
> > RETBLEED_MITIGATION_IBPB,
> > RETBLEED_MITIGATION_IBRS,
> > @@ -1071,14 +1071,6 @@ enum retbleed_mitigation {
> > RETBLEED_MITIGATION_STUFF,
> > };
> >
> > -enum retbleed_mitigation_cmd {
> > - RETBLEED_CMD_OFF,
> > - RETBLEED_CMD_AUTO,
> > - RETBLEED_CMD_UNRET,
> > - RETBLEED_CMD_IBPB,
> > - RETBLEED_CMD_STUFF,
> > -};
> > -
> > static const char * const retbleed_strings[] = {
> > [RETBLEED_MITIGATION_NONE] = "Vulnerable",
> > [RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
>
> ... does not have a corresponding entry in the strings array. AUTO is the default,
> and it is possible that mitigation mode can stay AUTO throughout the retbleed
> mitigation selection depending on cmdline and CONFIGs. e.g.
> retbleed=stuff and spectre_v2=off.
The intent was never to allow AUTO to persist; it should always be turned into a real mitigation. However, it looks like I did miss a case: if the mitigation is AUTO when retbleed_select_mitigation() is called, the bug should be mitigated, but the vendor isn't AMD/Hygon, it wasn't being transformed.
I'll figure out how to fix this to match the existing functionality, thanks for pointing this out.
--David Kaplan
>
> Other issue is below print in retbleed_update_mitigation() will dereference a NULL
> pointer:
>
> pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-09 15:26 ` Kaplan, David
@ 2025-01-09 16:40 ` Pawan Gupta
2025-01-09 16:42 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Pawan Gupta @ 2025-01-09 16:40 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Thu, Jan 09, 2025 at 03:26:58PM +0000, Kaplan, David wrote:
> The intent was never to allow AUTO to persist, it should always be turned
> into a real mitigation. However it looks like I did miss a case there,
> where if the mitigation is AUTO when retbleed_select_mitigation() is
> called, the bug should be mitigated but the vendor isn't AMD/Hygon, it
> wasn't being transformed.
>
> I'll figure out how to fix this to match the existing functionality,
> thanks for pointing this out.
Also, adding a guard to ensure AUTO never persists would be good.
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5bc2782f4ce1..ad63b5678250 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1383,6 +1383,9 @@ static void __init retbleed_update_mitigation(void)
}
}
out:
+ if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO)
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
+
pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
}
^ permalink raw reply related [flat|nested] 138+ messages in thread
* RE: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-09 16:40 ` Pawan Gupta
@ 2025-01-09 16:42 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-01-09 16:42 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Thursday, January 9, 2025 10:41 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
>
>
>
> On Thu, Jan 09, 2025 at 03:26:58PM +0000, Kaplan, David wrote:
> > The intent was never to allow AUTO to persist, it should always be
> > turned into a real mitigation. However it looks like I did miss a
> > case there, where if the mitigation is AUTO when
> > retbleed_select_mitigation() is called, the bug should be mitigated
> > but the vendor isn't AMD/Hygon, it wasn't being transformed.
> >
> > I'll figure out how to fix this to match the existing functionality,
> > thanks for pointing this out.
>
> Also, adding a guard to ensure AUTO never persists would be good.
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index
> 5bc2782f4ce1..ad63b5678250 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -1383,6 +1383,9 @@ static void __init retbleed_update_mitigation(void)
> }
> }
> out:
> + if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO)
> + retbleed_mitigation = RETBLEED_MITIGATION_NONE;
> +
> pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
> }
I had the same idea. I think to make this work I will let the mitigation stay as AUTO until the end of the update function, where it will then be turned into NONE, at least for Intel.
For AMD, I can ensure it always is transformed in the select mitigation function.
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-09 5:22 ` Pawan Gupta
2025-01-09 15:26 ` Kaplan, David
@ 2025-01-10 18:45 ` David Laight
2025-01-10 20:30 ` Pawan Gupta
1 sibling, 1 reply; 138+ messages in thread
From: David Laight @ 2025-01-10 18:45 UTC (permalink / raw)
To: Pawan Gupta
Cc: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Wed, 8 Jan 2025 21:22:37 -0800
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> wrote:
> On Wed, Jan 08, 2025 at 02:24:52PM -0600, David Kaplan wrote:
> [...]
> > @@ -1064,6 +1063,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
> >
> > enum retbleed_mitigation {
> > RETBLEED_MITIGATION_NONE,
> > + RETBLEED_MITIGATION_AUTO,
>
> This new enum ...
>
> > RETBLEED_MITIGATION_UNRET,
> > RETBLEED_MITIGATION_IBPB,
> > RETBLEED_MITIGATION_IBRS,
> > @@ -1071,14 +1071,6 @@ enum retbleed_mitigation {
> > RETBLEED_MITIGATION_STUFF,
> > };
...
> > static const char * const retbleed_strings[] = {
> > [RETBLEED_MITIGATION_NONE] = "Vulnerable",
> > [RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
>
> ... does not have a corresponding entry in the strings array. AUTO is the
> default, and it is possible that mitigation mode can stay AUTO throughout
> the retbleed mitigation selection depending on cmdline and CONFIGs. e.g.
> retbleed=stuff and spectre_v2=off.
It is possible to use 'a bit of cpp magic' to put the definitions on one line.
Something like:
#define RETBLEED_MITIGATION(x) \
x(NONE, "Vulnerable") \
x(AUTO, "xxxx") \
x(UNRET, "Mitigation: untrained return thunk") \
...
#define X(NAME, msg) RETBLEED_MITIGATION_##NAME,
enum retbleed_mitigation { RETBLEED_MITIGATION(X) };
#undef X
#define X(NAME, msg) [RETBLEED_MITIGATION_##NAME] = msg,
static const char * const retbleed_strings[] = { RETBLEED_MITIGATION(X) };
#undef X
Then you can't lose message texts even when they are in a different file.
The lower case name (for the strcmp() loop) can also be added.
(and don't let the rust bindgen near it :-)
David
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-10 18:45 ` David Laight
@ 2025-01-10 20:30 ` Pawan Gupta
2025-01-10 20:35 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Pawan Gupta @ 2025-01-10 20:30 UTC (permalink / raw)
To: David Laight
Cc: David Kaplan, Thomas Gleixner, Borislav Petkov, Peter Zijlstra,
Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Fri, Jan 10, 2025 at 06:45:45PM +0000, David Laight wrote:
> On Wed, 8 Jan 2025 21:22:37 -0800
> Pawan Gupta <pawan.kumar.gupta@linux.intel.com> wrote:
>
> > On Wed, Jan 08, 2025 at 02:24:52PM -0600, David Kaplan wrote:
> > [...]
> > > @@ -1064,6 +1063,7 @@ enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = SPECTRE_V2_NONE;
> > >
> > > enum retbleed_mitigation {
> > > RETBLEED_MITIGATION_NONE,
> > > + RETBLEED_MITIGATION_AUTO,
> >
> > This new enum ...
> >
> > > RETBLEED_MITIGATION_UNRET,
> > > RETBLEED_MITIGATION_IBPB,
> > > RETBLEED_MITIGATION_IBRS,
> > > @@ -1071,14 +1071,6 @@ enum retbleed_mitigation {
> > > RETBLEED_MITIGATION_STUFF,
> > > };
> ...
> > > static const char * const retbleed_strings[] = {
> > > [RETBLEED_MITIGATION_NONE] = "Vulnerable",
> > > [RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
> >
> > ... does not have a corresponding entry in the strings array. AUTO is the
> > default, and it is possible that mitigation mode can stay AUTO throughout
> > the retbleed mitigation selection depending on cmdline and CONFIGs. e.g.
> > retbleed=stuff and spectre_v2=off.
>
> It is possible to use 'a bit of cpp magic' to put the definitions on one line.
> Something like:
> #define RETBLEED_MITIGATION(x) \
> x(NONE, "Vulnerable") \
> x(AUTO, "xxxx") \
> x(UNRET, "Mitigation: untrained return thunk") \
> ...
>
> #define X(NAME, msg) RETBLEED_MITIGATION_##NAME,
> enum retbleed_mitigation { RETBLEED_MITIGATION(X) };
> #undef X
>
> #define X(NAME, msg) [RETBLEED_MITIGATION_##NAME] = msg,
> static const char * const retbleed_strings[] = { RETBLEED_MITIGATION(X) };
> #undef X
>
> Then you can't lose message texts even when they are in a different file.
> The lower case name (for the strcmp() loop) can also be added.
>
> (and don't let the rust bindgen near it :-)
Wow, this is mind blowing!
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-10 20:30 ` Pawan Gupta
@ 2025-01-10 20:35 ` Borislav Petkov
0 siblings, 0 replies; 138+ messages in thread
From: Borislav Petkov @ 2025-01-10 20:35 UTC (permalink / raw)
To: Pawan Gupta
Cc: David Laight, David Kaplan, Thomas Gleixner, Peter Zijlstra,
Josh Poimboeuf, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Fri, Jan 10, 2025 at 12:30:58PM -0800, Pawan Gupta wrote:
> Wow, this is mind blowing!
Yeah, no, we'll never do that, no worries.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-08 20:24 ` [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
2025-01-09 5:22 ` Pawan Gupta
@ 2025-02-10 18:35 ` Brendan Jackman
2025-02-10 20:50 ` Kaplan, David
2025-02-11 0:10 ` Josh Poimboeuf
2025-02-24 15:45 ` Borislav Petkov
3 siblings, 1 reply; 138+ messages in thread
From: Brendan Jackman @ 2025-02-10 18:35 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin,
linux-kernel
On Wed, 8 Jan 2025 at 21:29, David Kaplan <david.kaplan@amd.com> wrote:
> @@ -1827,16 +1821,6 @@ static void __init spectre_v2_select_mitigation(void)
> break;
> }
>
> - if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
> - boot_cpu_has_bug(X86_BUG_RETBLEED) &&
> - retbleed_cmd != RETBLEED_CMD_OFF &&
> - retbleed_cmd != RETBLEED_CMD_STUFF &&
> - boot_cpu_has(X86_FEATURE_IBRS) &&
> - boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> - mode = SPECTRE_V2_IBRS;
> - break;
> - }
> -
> mode = spectre_v2_select_retpoline();
> break;
It isn't quite clear why this gets removed here. Doesn't
retbleed_update_mitigation() still depend on this?
It gets added back in 15/35 so this would be at most a problem of git history.
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-02-10 18:35 ` Brendan Jackman
@ 2025-02-10 20:50 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-10 20:50 UTC (permalink / raw)
To: Brendan Jackman
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86@kernel.org,
H . Peter Anvin, linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Brendan Jackman <jackmanb@google.com>
> Sent: Monday, February 10, 2025 12:35 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Pawan
> Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>;
> Dave Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
>
> On Wed, 8 Jan 2025 at 21:29, David Kaplan <david.kaplan@amd.com> wrote:
> > @@ -1827,16 +1821,6 @@ static void __init spectre_v2_select_mitigation(void)
> > break;
> > }
> >
> > - if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
> > - boot_cpu_has_bug(X86_BUG_RETBLEED) &&
> > - retbleed_cmd != RETBLEED_CMD_OFF &&
> > - retbleed_cmd != RETBLEED_CMD_STUFF &&
> > - boot_cpu_has(X86_FEATURE_IBRS) &&
> > - boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> > - mode = SPECTRE_V2_IBRS;
> > - break;
> > - }
> > -
> > mode = spectre_v2_select_retpoline();
> > break;
>
> It isn't quite clear why this gets removed here. Doesn't
> retbleed_update_mitigation() still depend on this?
>
> It gets added back in 15/35 so this would be at most a problem of git history.
Thanks, I'll fix this. The logic had to change because retbleed_cmd is removed as part of the restructure, but that doesn't mean the whole block needs to be removed... and it does move into spectre_v2_update_mitigation() in patch 15.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-08 20:24 ` [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
2025-01-09 5:22 ` Pawan Gupta
2025-02-10 18:35 ` Brendan Jackman
@ 2025-02-11 0:10 ` Josh Poimboeuf
2025-02-24 15:45 ` Borislav Petkov
3 siblings, 0 replies; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 0:10 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:52PM -0600, David Kaplan wrote:
> @@ -1254,27 +1269,6 @@ static void __init retbleed_select_mitigation(void)
> (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
> cpu_smt_disable(false);
>
> - /*
> - * Let IBRS trump all on Intel without affecting the effects of the
> - * retbleed= cmdline option except for call depth based stuffing
> - */
> - if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> - switch (spectre_v2_enabled) {
> - case SPECTRE_V2_IBRS:
> - retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
> - break;
> - case SPECTRE_V2_EIBRS:
> - case SPECTRE_V2_EIBRS_RETPOLINE:
> - case SPECTRE_V2_EIBRS_LFENCE:
> - retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
> - break;
> - default:
> - if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
> - pr_err(RETBLEED_INTEL_MSG);
> - }
> - }
> -
> - pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
> }
Extra whitespace at end of function.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-01-08 20:24 ` [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
` (2 preceding siblings ...)
2025-02-11 0:10 ` Josh Poimboeuf
@ 2025-02-24 15:45 ` Borislav Petkov
2025-02-24 15:59 ` Kaplan, David
3 siblings, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-02-24 15:45 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:52PM -0600, David Kaplan wrote:
> +static void __init retbleed_update_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
> + return;
>
> - break;
> + if (retbleed_mitigation == RETBLEED_MITIGATION_NONE)
> + goto out;
> + /*
> + * Let IBRS trump all on Intel without affecting the effects of the
> + * retbleed= cmdline option except for call depth based stuffing
> + */
> + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> + switch (spectre_v2_enabled) {
> + case SPECTRE_V2_IBRS:
> + retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
> + break;
> + case SPECTRE_V2_EIBRS:
> + case SPECTRE_V2_EIBRS_RETPOLINE:
> + case SPECTRE_V2_EIBRS_LFENCE:
> + retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
> + break;
> + default:
> + if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
> + pr_err(RETBLEED_INTEL_MSG);
> + }
> }
>
> + if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF) {
> + if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
> + pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
> + retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
> + /* Try again */
> + retbleed_select_mitigation();
Err, why?
spectre_v2 and spectre_v2_enabled cannot change anymore - the select function
has set them. Why try again here?
This kinda defeats the whole purpose of having the select -> update -> apply
rounds...
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
2025-02-24 15:45 ` Borislav Petkov
@ 2025-02-24 15:59 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-24 15:59 UTC (permalink / raw)
To: Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Monday, February 24, 2025 9:45 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>;
> Josh Poimboeuf <jpoimboe@kernel.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation
>
> On Wed, Jan 08, 2025 at 02:24:52PM -0600, David Kaplan wrote:
> > +static void __init retbleed_update_mitigation(void)
> > +{
> > + if (!boot_cpu_has_bug(X86_BUG_RETBLEED) || cpu_mitigations_off())
> > + return;
> >
> > - break;
> > + if (retbleed_mitigation == RETBLEED_MITIGATION_NONE)
> > + goto out;
> > + /*
> > + * Let IBRS trump all on Intel without affecting the effects of the
> > + * retbleed= cmdline option except for call depth based stuffing
> > + */
> > + if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> > + switch (spectre_v2_enabled) {
> > + case SPECTRE_V2_IBRS:
> > + retbleed_mitigation = RETBLEED_MITIGATION_IBRS;
> > + break;
> > + case SPECTRE_V2_EIBRS:
> > + case SPECTRE_V2_EIBRS_RETPOLINE:
> > + case SPECTRE_V2_EIBRS_LFENCE:
> > + retbleed_mitigation = RETBLEED_MITIGATION_EIBRS;
> > + break;
> > + default:
> > + if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF)
> > + pr_err(RETBLEED_INTEL_MSG);
> > + }
> > }
> >
> > + if (retbleed_mitigation == RETBLEED_MITIGATION_STUFF) {
> > + if (spectre_v2_enabled != SPECTRE_V2_RETPOLINE) {
> > + pr_err("WARNING: retbleed=stuff depends on spectre_v2=retpoline\n");
> > + retbleed_mitigation = RETBLEED_MITIGATION_AUTO;
> > + /* Try again */
> > + retbleed_select_mitigation();
>
> Err, why?
>
> spectre_v2 and spectre_v2_enabled cannot change anymore - the select function
> has set them. Why try again here?
>
> This kinda defeats the whole purpose of having the select -> update -> apply
> rounds...
>
This code is gone from the latest version; I was able to simplify it, and it only mattered for some corner cases related to Intel retbleed. Now the update function no longer has to re-call the select function.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (11 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 12/35] x86/bugs: Restructure retbleed mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-11 0:53 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 14/35] x86/bugs: Restructure bhi mitigation David Kaplan
` (22 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure spectre_v2_user to use select/update/apply functions to
create consistent vulnerability handling.
The ibpb/stibp choices are first decided based on the spectre_v2_user
command line but can be modified by the spectre_v2 command line option
as well.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 150 +++++++++++++++++++++----------------
1 file changed, 84 insertions(+), 66 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 66abc398d5b4..849abdc0da91 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -60,6 +60,8 @@ static void __init retbleed_select_mitigation(void);
static void __init retbleed_update_mitigation(void);
static void __init retbleed_apply_mitigation(void);
static void __init spectre_v2_user_select_mitigation(void);
+static void __init spectre_v2_user_update_mitigation(void);
+static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
@@ -183,11 +185,6 @@ void __init cpu_select_mitigations(void)
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
retbleed_select_mitigation();
- /*
- * spectre_v2_user_select_mitigation() relies on the state set by
- * retbleed_select_mitigation(); specifically the STIBP selection is
- * forced for UNRET or IBPB.
- */
spectre_v2_user_select_mitigation();
ssb_select_mitigation();
l1tf_select_mitigation();
@@ -210,6 +207,7 @@ void __init cpu_select_mitigations(void)
* choices.
*/
retbleed_update_mitigation();
+ spectre_v2_user_update_mitigation();
mds_update_mitigation();
taa_update_mitigation();
mmio_update_mitigation();
@@ -217,6 +215,7 @@ void __init cpu_select_mitigations(void)
spectre_v1_apply_mitigation();
retbleed_apply_mitigation();
+ spectre_v2_user_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -1348,6 +1347,8 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_IBRS,
};
+static enum spectre_v2_mitigation_cmd spectre_v2_cmd __ro_after_init = SPECTRE_V2_CMD_AUTO;
+
enum spectre_v2_user_cmd {
SPECTRE_V2_USER_CMD_NONE,
SPECTRE_V2_USER_CMD_AUTO,
@@ -1386,22 +1387,14 @@ static void __init spec_v2_user_print_cond(const char *reason, bool secure)
pr_info("spectre_v2_user=%s forced on command line.\n", reason);
}
-static __ro_after_init enum spectre_v2_mitigation_cmd spectre_v2_cmd;
-
static enum spectre_v2_user_cmd __init
spectre_v2_parse_user_cmdline(void)
{
char arg[20];
int ret, i;
- switch (spectre_v2_cmd) {
- case SPECTRE_V2_CMD_NONE:
+ if (cpu_mitigations_off())
return SPECTRE_V2_USER_CMD_NONE;
- case SPECTRE_V2_CMD_FORCE:
- return SPECTRE_V2_USER_CMD_FORCE;
- default:
- break;
- }
ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
arg, sizeof(arg));
@@ -1425,65 +1418,73 @@ static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
}
+
static void __init
spectre_v2_user_select_mitigation(void)
{
- enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
- bool smt_possible = IS_ENABLED(CONFIG_SMP);
enum spectre_v2_user_cmd cmd;
if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
return;
- if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
- cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
- smt_possible = false;
-
cmd = spectre_v2_parse_user_cmdline();
switch (cmd) {
case SPECTRE_V2_USER_CMD_NONE:
- goto set_mode;
+ return;
case SPECTRE_V2_USER_CMD_FORCE:
- mode = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
break;
case SPECTRE_V2_USER_CMD_AUTO:
case SPECTRE_V2_USER_CMD_PRCTL:
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+ break;
case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
- mode = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
case SPECTRE_V2_USER_CMD_SECCOMP:
- case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
if (IS_ENABLED(CONFIG_SECCOMP))
- mode = SPECTRE_V2_USER_SECCOMP;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_SECCOMP;
else
- mode = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = spectre_v2_user_ibpb;
+ break;
+ case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
break;
}
- /* Initialize Indirect Branch Prediction Barrier */
- if (boot_cpu_has(X86_FEATURE_IBPB)) {
- setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+ /*
+ * At this point, an STIBP mode other than "off" has been set.
+ * If STIBP support is not being forced, check if STIBP always-on
+ * is preferred.
+ */
+ if (spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+ boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
+}
- spectre_v2_user_ibpb = mode;
- switch (cmd) {
- case SPECTRE_V2_USER_CMD_NONE:
- break;
- case SPECTRE_V2_USER_CMD_FORCE:
- case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
- case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
- static_branch_enable(&switch_mm_always_ibpb);
- spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
- break;
- case SPECTRE_V2_USER_CMD_PRCTL:
- case SPECTRE_V2_USER_CMD_AUTO:
- case SPECTRE_V2_USER_CMD_SECCOMP:
- static_branch_enable(&switch_mm_cond_ibpb);
- break;
- }
+static void __init spectre_v2_user_update_mitigation(void)
+{
+ bool smt_possible = IS_ENABLED(CONFIG_SMP);
- pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
- static_key_enabled(&switch_mm_always_ibpb) ?
- "always-on" : "conditional");
+ if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP))
+ return;
+
+ if (cpu_smt_control == CPU_SMT_FORCE_DISABLED ||
+ cpu_smt_control == CPU_SMT_NOT_SUPPORTED)
+ smt_possible = false;
+
+ /* The spectre_v2 cmd line can override spectre_v2_user options */
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_NONE) {
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_NONE;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
+ } else if (spectre_v2_cmd == SPECTRE_V2_CMD_FORCE) {
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_STRICT;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
}
/*
@@ -1501,30 +1502,47 @@ spectre_v2_user_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_STIBP) ||
!smt_possible ||
(spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
- !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
+ !boot_cpu_has(X86_FEATURE_AUTOIBRS))) {
+ spectre_v2_user_stibp = SPECTRE_V2_USER_NONE;
return;
+ }
- /*
- * At this point, an STIBP mode other than "off" has been set.
- * If STIBP support is not being forced, check if STIBP always-on
- * is preferred.
- */
- if (mode != SPECTRE_V2_USER_STRICT &&
- boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
- mode = SPECTRE_V2_USER_STRICT_PREFERRED;
-
- if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
- retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
- if (mode != SPECTRE_V2_USER_STRICT &&
- mode != SPECTRE_V2_USER_STRICT_PREFERRED)
+ if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
+ (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
+ retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
+ if (spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT &&
+ spectre_v2_user_stibp != SPECTRE_V2_USER_STRICT_PREFERRED)
pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
- mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT_PREFERRED;
}
+ pr_info("%s\n", spectre_v2_user_strings[spectre_v2_user_stibp]);
+}
- spectre_v2_user_stibp = mode;
+static void __init spectre_v2_user_apply_mitigation(void)
+{
+ /* Initialize Indirect Branch Prediction Barrier */
+ if (boot_cpu_has(X86_FEATURE_IBPB) &&
+ spectre_v2_user_ibpb != SPECTRE_V2_USER_NONE) {
+ setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
-set_mode:
- pr_info("%s\n", spectre_v2_user_strings[mode]);
+ switch (spectre_v2_user_ibpb) {
+ case SPECTRE_V2_USER_NONE:
+ break;
+ case SPECTRE_V2_USER_STRICT:
+ static_branch_enable(&switch_mm_always_ibpb);
+ break;
+ case SPECTRE_V2_USER_PRCTL:
+ case SPECTRE_V2_USER_SECCOMP:
+ static_branch_enable(&switch_mm_cond_ibpb);
+ break;
+ default:
+ break;
+ }
+
+ pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+ static_key_enabled(&switch_mm_always_ibpb) ?
+ "always-on" : "conditional");
+ }
}
static const char * const spectre_v2_strings[] = {
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation
2025-01-08 20:24 ` [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
@ 2025-02-11 0:53 ` Josh Poimboeuf
2025-02-12 15:59 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 0:53 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:53PM -0600, David Kaplan wrote:
> - if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
> - retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
> - if (mode != SPECTRE_V2_USER_STRICT &&
> - mode != SPECTRE_V2_USER_STRICT_PREFERRED)
> + if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
> + (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
> + retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
This adds a hidden dependency on retbleed_update_mitigation()?
Also, that last line should be aligned one more space to the right:
if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
(retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
> +static void __init spectre_v2_user_apply_mitigation(void)
> +{
> + /* Initialize Indirect Branch Prediction Barrier */
> + if (boot_cpu_has(X86_FEATURE_IBPB) &&
> + spectre_v2_user_ibpb != SPECTRE_V2_USER_NONE) {
> + setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
>
> -set_mode:
> - pr_info("%s\n", spectre_v2_user_strings[mode]);
> + switch (spectre_v2_user_ibpb) {
> + case SPECTRE_V2_USER_NONE:
> + break;
This case can't happen, spectre_v2_user_ibpb was already checked for
!SPECTRE_V2_USER_NONE above.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation
2025-02-11 0:53 ` Josh Poimboeuf
@ 2025-02-12 15:59 ` Kaplan, David
2025-02-12 21:35 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 15:59 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 6:54 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation
>
> On Wed, Jan 08, 2025 at 02:24:53PM -0600, David Kaplan wrote:
> > - if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
> > - retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
> > - if (mode != SPECTRE_V2_USER_STRICT &&
> > - mode != SPECTRE_V2_USER_STRICT_PREFERRED)
> > + if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
> > + (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
> > + retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
>
> This adds a hidden dependency on retbleed_update_mitigation()?
Yeah, I guess it does. I'm not sure of a way to cleanly avoid this if the logic is kept as-is; do you think it's ok just to document this dependency explicitly?
The only case I think where this matters is if 'stuff' is selected for retbleed, and then retbleed_update_mitigation decides you can't do that and it has to re-select and may end up with unret or ibpb. That case doesn't even make much sense since 'retbleed=stuff' isn't a mitigation for AMD.
One idea, which would involve changing the logic vs upstream, is that 'retbleed=stuff' should only be allowed on Intel and it should be converted to AUTO on AMD. If that's the case, then there isn't really a hidden dependency anymore since the retbleed mitigation will never change to unret/ibpb during retbleed_update_mitigation(). Thoughts?
>
> Also, that last line should be aligned one more space to the right:
>
> if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
> (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
> retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
Ack
>
> > +static void __init spectre_v2_user_apply_mitigation(void)
> > +{
> > + /* Initialize Indirect Branch Prediction Barrier */
> > + if (boot_cpu_has(X86_FEATURE_IBPB) &&
> > + spectre_v2_user_ibpb != SPECTRE_V2_USER_NONE) {
> > + setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
> >
> > -set_mode:
> > - pr_info("%s\n", spectre_v2_user_strings[mode]);
> > + switch (spectre_v2_user_ibpb) {
> > + case SPECTRE_V2_USER_NONE:
> > + break;
>
> This case can't happen, spectre_v2_user_ibpb was already checked for
> !SPECTRE_V2_USER_NONE above.
Ack
Thanks,
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation
2025-02-12 15:59 ` Kaplan, David
@ 2025-02-12 21:35 ` Josh Poimboeuf
0 siblings, 0 replies; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-12 21:35 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 12, 2025 at 03:59:39PM +0000, Kaplan, David wrote:
> > On Wed, Jan 08, 2025 at 02:24:53PM -0600, David Kaplan wrote:
> > > - if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
> > > - retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
> > > - if (mode != SPECTRE_V2_USER_STRICT &&
> > > - mode != SPECTRE_V2_USER_STRICT_PREFERRED)
> > > + if (spectre_v2_user_stibp != SPECTRE_V2_USER_NONE &&
> > > + (retbleed_mitigation == RETBLEED_MITIGATION_UNRET ||
> > > + retbleed_mitigation == RETBLEED_MITIGATION_IBPB)) {
> >
> > This adds a hidden dependency on retbleed_update_mitigation()?
>
> Yeah I guess it does. I'm not sure of a way to cleanly avoid this if
> the logic is kept as-is, do you think it's ok just to document this
> dependency explicitly?
Yeah, if the dependencies can't be cleanly unwound, at least they should
be explicitly documented with comments at the call sites, similar to
what we attempt to do today.
> The only case I think where this matters is if 'stuff' is selected for
> retbleed, and then retbleed_update_mitigation decides you can't do
> that and it has to re-select and may end up with unret or ibpb. That
> case doesn't even make much sense since 'retbleed=stuff' isn't a
> mitigation for AMD.
True, though generally we should treat such things as hard dependencies.
It would be really easy for a future person to come along and introduce
a bug when they see the 'retbleed_mitigation' reference and assume the
dependency has already been handled.
So basically any "update" function which references the output of
another "update" function should be treated as a dependency. Because
even it's not technically a dependency, that could easily change in the
future without being noticed, with a patch to *either* of the functions.
> One idea, which would involve changing the logic vs upstream, is that
> 'retbleed=stuff' should only be allowed on Intel and it should be
> converted to AUTO on AMD. If that's the case, then there isn't really
> a hidden dependency anymore since the retbleed mitigation will never
> change to unret/ibpb during retbleed_update_mitigation(). Thoughts?
Yeah, I'm strongly in favor of any such simplification. We spend *way*
too much maintenance effort on all these weird options and combinations
which don't make sense.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 14/35] x86/bugs: Restructure bhi mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (12 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 13/35] x86/bugs: Restructure spectre_v2_user mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-01-08 20:24 ` [PATCH v3 15/35] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
` (21 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure bhi mitigation to use select/apply functions to create
consistent vulnerability handling.
Define new AUTO mitigation for bhi.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 849abdc0da91..fb92344d63cd 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -82,6 +82,8 @@ static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
static void __init gds_select_mitigation(void);
static void __init gds_apply_mitigation(void);
+static void __init bhi_select_mitigation(void);
+static void __init bhi_apply_mitigation(void);
/* The base value of the SPEC_CTRL MSR without task-specific bits set */
u64 x86_spec_ctrl_base;
@@ -201,6 +203,7 @@ void __init cpu_select_mitigations(void)
*/
srso_select_mitigation();
gds_select_mitigation();
+ bhi_select_mitigation();
/*
* After mitigations are selected, some may need to update their
@@ -222,6 +225,7 @@ void __init cpu_select_mitigations(void)
rfds_apply_mitigation();
srbds_apply_mitigation();
gds_apply_mitigation();
+ bhi_apply_mitigation();
}
/*
@@ -1759,12 +1763,13 @@ static bool __init spec_ctrl_bhi_dis(void)
enum bhi_mitigations {
BHI_MITIGATION_OFF,
+ BHI_MITIGATION_AUTO,
BHI_MITIGATION_ON,
BHI_MITIGATION_VMEXIT_ONLY,
};
static enum bhi_mitigations bhi_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_ON : BHI_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_SPECTRE_BHI) ? BHI_MITIGATION_AUTO : BHI_MITIGATION_OFF;
static int __init spectre_bhi_parse_cmdline(char *str)
{
@@ -1785,6 +1790,15 @@ static int __init spectre_bhi_parse_cmdline(char *str)
early_param("spectre_bhi", spectre_bhi_parse_cmdline);
static void __init bhi_select_mitigation(void)
+{
+ if (!boot_cpu_has(X86_BUG_BHI) || cpu_mitigations_off())
+ bhi_mitigation = BHI_MITIGATION_OFF;
+
+ if (bhi_mitigation == BHI_MITIGATION_AUTO)
+ bhi_mitigation = BHI_MITIGATION_ON;
+}
+
+static void __init bhi_apply_mitigation(void)
{
if (bhi_mitigation == BHI_MITIGATION_OFF)
return;
@@ -1916,9 +1930,6 @@ static void __init spectre_v2_select_mitigation(void)
mode == SPECTRE_V2_RETPOLINE)
spec_ctrl_disable_kernel_rrsba();
- if (boot_cpu_has(X86_BUG_BHI))
- bhi_select_mitigation();
-
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* [PATCH v3 15/35] x86/bugs: Restructure spectre_v2 mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (13 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 14/35] x86/bugs: Restructure bhi mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-11 1:07 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 16/35] x86/bugs: Restructure ssb mitigation David Kaplan
` (20 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure spectre_v2 to use select/update/apply functions to create
consistent vulnerability handling.
The spectre_v2 mitigation may be updated based on the selected retbleed
mitigation.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 65 ++++++++++++++++++++++++++------------
1 file changed, 44 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fb92344d63cd..440fe9ee1c63 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -56,6 +56,8 @@
static void __init spectre_v1_select_mitigation(void);
static void __init spectre_v1_apply_mitigation(void);
static void __init spectre_v2_select_mitigation(void);
+static void __init spectre_v2_update_mitigation(void);
+static void __init spectre_v2_apply_mitigation(void);
static void __init retbleed_select_mitigation(void);
static void __init retbleed_update_mitigation(void);
static void __init retbleed_apply_mitigation(void);
@@ -209,6 +211,7 @@ void __init cpu_select_mitigations(void)
* After mitigations are selected, some may need to update their
* choices.
*/
+ spectre_v2_update_mitigation();
retbleed_update_mitigation();
spectre_v2_user_update_mitigation();
mds_update_mitigation();
@@ -217,6 +220,7 @@ void __init cpu_select_mitigations(void)
rfds_update_mitigation();
spectre_v1_apply_mitigation();
+ spectre_v2_apply_mitigation();
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
mds_apply_mitigation();
@@ -1831,18 +1835,18 @@ static void __init bhi_apply_mitigation(void)
static void __init spectre_v2_select_mitigation(void)
{
- enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+ spectre_v2_cmd = spectre_v2_parse_cmdline();
/*
* If the CPU is not affected and the command line mode is NONE or AUTO
* then nothing to do.
*/
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2) &&
- (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
+ (spectre_v2_cmd == SPECTRE_V2_CMD_NONE || spectre_v2_cmd == SPECTRE_V2_CMD_AUTO))
return;
- switch (cmd) {
+ switch (spectre_v2_cmd) {
case SPECTRE_V2_CMD_NONE:
return;
@@ -1886,10 +1890,32 @@ static void __init spectre_v2_select_mitigation(void)
break;
}
- if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+ spectre_v2_enabled = mode;
+}
+
+static void __init spectre_v2_update_mitigation(void)
+{
+ if (spectre_v2_cmd == SPECTRE_V2_CMD_AUTO) {
+ if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
+ boot_cpu_has_bug(X86_BUG_RETBLEED) &&
+ retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
+ retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
+ boot_cpu_has(X86_FEATURE_IBRS) &&
+ boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
+ spectre_v2_enabled = SPECTRE_V2_IBRS;
+ }
+ }
+
+ if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) && !cpu_mitigations_off())
+ pr_info("%s\n", spectre_v2_strings[spectre_v2_enabled]);
+}
+
+static void __init spectre_v2_apply_mitigation(void)
+{
+ if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
- if (spectre_v2_in_ibrs_mode(mode)) {
+ if (spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
if (boot_cpu_has(X86_FEATURE_AUTOIBRS)) {
msr_set_bit(MSR_EFER, _EFER_AUTOIBRS);
} else {
@@ -1898,8 +1924,10 @@ static void __init spectre_v2_select_mitigation(void)
}
}
- switch (mode) {
+ switch (spectre_v2_enabled) {
case SPECTRE_V2_NONE:
+ return;
+
case SPECTRE_V2_EIBRS:
break;
@@ -1925,14 +1953,11 @@ static void __init spectre_v2_select_mitigation(void)
* JMPs gets protection against BHI and Intramode-BTI, but RET
* prediction from a non-RSB predictor is still a risk.
*/
- if (mode == SPECTRE_V2_EIBRS_LFENCE ||
- mode == SPECTRE_V2_EIBRS_RETPOLINE ||
- mode == SPECTRE_V2_RETPOLINE)
+ if (spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE ||
+ spectre_v2_enabled == SPECTRE_V2_EIBRS_RETPOLINE ||
+ spectre_v2_enabled == SPECTRE_V2_RETPOLINE)
spec_ctrl_disable_kernel_rrsba();
- spectre_v2_enabled = mode;
- pr_info("%s\n", spectre_v2_strings[mode]);
-
/*
* If Spectre v2 protection has been enabled, fill the RSB during a
* context switch. In general there are two types of RSB attacks
@@ -1974,7 +1999,7 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
- spectre_v2_determine_rsb_fill_type_at_vmexit(mode);
+ spectre_v2_determine_rsb_fill_type_at_vmexit(spectre_v2_enabled);
/*
* Retpoline protects the kernel, but doesn't protect firmware. IBRS
@@ -1982,10 +2007,10 @@ static void __init spectre_v2_select_mitigation(void)
* firmware calls only when IBRS / Enhanced / Automatic IBRS aren't
* otherwise enabled.
*
- * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
- * the user might select retpoline on the kernel command line and if
- * the CPU supports Enhanced IBRS, kernel might un-intentionally not
- * enable IBRS around firmware calls.
+ * Use "spectre_v2_enabled" to check Enhanced IBRS instead of
+ * boot_cpu_has(), because the user might select retpoline on the kernel
+ * command line and if the CPU supports Enhanced IBRS, kernel might
+ * un-intentionally not enable IBRS around firmware calls.
*/
if (boot_cpu_has_bug(X86_BUG_RETBLEED) &&
boot_cpu_has(X86_FEATURE_IBPB) &&
@@ -1997,13 +2022,11 @@ static void __init spectre_v2_select_mitigation(void)
pr_info("Enabling Speculation Barrier for firmware calls\n");
}
- } else if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_ibrs_mode(mode)) {
+ } else if (boot_cpu_has(X86_FEATURE_IBRS) &&
+ !spectre_v2_in_ibrs_mode(spectre_v2_enabled)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
-
- /* Set up IBPB and STIBP depending on the general spectre V2 command */
- spectre_v2_cmd = cmd;
}
static void update_stibp_msr(void * __unused)
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 15/35] x86/bugs: Restructure spectre_v2 mitigation
2025-01-08 20:24 ` [PATCH v3 15/35] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
@ 2025-02-11 1:07 ` Josh Poimboeuf
2025-02-12 16:40 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 1:07 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:55PM -0600, David Kaplan wrote:
> +static void __init spectre_v2_update_mitigation(void)
> +{
> + if (spectre_v2_cmd == SPECTRE_V2_CMD_AUTO) {
> + if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
> + boot_cpu_has_bug(X86_BUG_RETBLEED) &&
> + retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
> + retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
> + boot_cpu_has(X86_FEATURE_IBRS) &&
> + boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> + spectre_v2_enabled = SPECTRE_V2_IBRS;
> + }
> + }
This has a dependency on retbleed_update_mitigation() which hasn't run
yet?
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 15/35] x86/bugs: Restructure spectre_v2 mitigation
2025-02-11 1:07 ` Josh Poimboeuf
@ 2025-02-12 16:40 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 16:40 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 7:08 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 15/35] x86/bugs: Restructure spectre_v2 mitigation
>
> On Wed, Jan 08, 2025 at 02:24:55PM -0600, David Kaplan wrote:
> > +static void __init spectre_v2_update_mitigation(void)
> > +{
> > + if (spectre_v2_cmd == SPECTRE_V2_CMD_AUTO) {
> > + if (IS_ENABLED(CONFIG_MITIGATION_IBRS_ENTRY) &&
> > + boot_cpu_has_bug(X86_BUG_RETBLEED) &&
> > + retbleed_mitigation != RETBLEED_MITIGATION_NONE &&
> > + retbleed_mitigation != RETBLEED_MITIGATION_STUFF &&
> > + boot_cpu_has(X86_FEATURE_IBRS) &&
> > + boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) {
> > + spectre_v2_enabled = SPECTRE_V2_IBRS;
> > + }
> > + }
>
> This has a dependency on retbleed_update_mitigation() which hasn't run yet?
>
It's actually the reverse, retbleed_update_mitigation() needs to run after this. That hasn't changed vs upstream, although I do need to document that.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 16/35] x86/bugs: Restructure ssb mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (14 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 15/35] x86/bugs: Restructure spectre_v2 mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-11 1:10 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 17/35] x86/bugs: Restructure l1tf mitigation David Kaplan
` (19 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure ssb to use select/apply functions to create consistent
vulnerability handling.
Remove __ssb_select_mitigation() and split the functionality between the
select/apply functions.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 35 +++++++++++++++++------------------
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 440fe9ee1c63..b07726a8dd3b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,6 +65,7 @@ static void __init spectre_v2_user_select_mitigation(void);
static void __init spectre_v2_user_update_mitigation(void);
static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
+static void __init ssb_apply_mitigation(void);
static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
@@ -223,6 +224,7 @@ void __init cpu_select_mitigations(void)
spectre_v2_apply_mitigation();
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
+ ssb_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -2215,19 +2217,18 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
return cmd;
}
-static enum ssb_mitigation __init __ssb_select_mitigation(void)
+static void ssb_select_mitigation(void)
{
- enum ssb_mitigation mode = SPEC_STORE_BYPASS_NONE;
enum ssb_mitigation_cmd cmd;
if (!boot_cpu_has(X86_FEATURE_SSBD))
- return mode;
+ goto out;
cmd = ssb_parse_cmdline();
if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS) &&
(cmd == SPEC_STORE_BYPASS_CMD_NONE ||
cmd == SPEC_STORE_BYPASS_CMD_AUTO))
- return mode;
+ return;
switch (cmd) {
case SPEC_STORE_BYPASS_CMD_SECCOMP:
@@ -2236,28 +2237,35 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
* enabled.
*/
if (IS_ENABLED(CONFIG_SECCOMP))
- mode = SPEC_STORE_BYPASS_SECCOMP;
+ ssb_mode = SPEC_STORE_BYPASS_SECCOMP;
else
- mode = SPEC_STORE_BYPASS_PRCTL;
+ ssb_mode = SPEC_STORE_BYPASS_PRCTL;
break;
case SPEC_STORE_BYPASS_CMD_ON:
- mode = SPEC_STORE_BYPASS_DISABLE;
+ ssb_mode = SPEC_STORE_BYPASS_DISABLE;
break;
case SPEC_STORE_BYPASS_CMD_AUTO:
case SPEC_STORE_BYPASS_CMD_PRCTL:
- mode = SPEC_STORE_BYPASS_PRCTL;
+ ssb_mode = SPEC_STORE_BYPASS_PRCTL;
break;
case SPEC_STORE_BYPASS_CMD_NONE:
break;
}
+out:
+ if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+ pr_info("%s\n", ssb_strings[ssb_mode]);
+}
+
+static void __init ssb_apply_mitigation(void)
+{
/*
* We have three CPU feature flags that are in play here:
* - X86_BUG_SPEC_STORE_BYPASS - CPU is susceptible.
* - X86_FEATURE_SSBD - CPU is able to turn off speculative store bypass
* - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
*/
- if (mode == SPEC_STORE_BYPASS_DISABLE) {
+ if (ssb_mode == SPEC_STORE_BYPASS_DISABLE) {
setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
/*
* Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
@@ -2272,15 +2280,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
}
}
- return mode;
-}
-
-static void ssb_select_mitigation(void)
-{
- ssb_mode = __ssb_select_mitigation();
-
- if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
- pr_info("%s\n", ssb_strings[ssb_mode]);
}
#undef pr_fmt
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 16/35] x86/bugs: Restructure ssb mitigation
2025-01-08 20:24 ` [PATCH v3 16/35] x86/bugs: Restructure ssb mitigation David Kaplan
@ 2025-02-11 1:10 ` Josh Poimboeuf
2025-02-12 16:45 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 1:10 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:56PM -0600, David Kaplan wrote:
> @@ -2272,15 +2280,6 @@ static enum ssb_mitigation __init __ssb_select_mitigation(void)
> }
> }
>
> - return mode;
> -}
> -
> -static void ssb_select_mitigation(void)
> -{
> - ssb_mode = __ssb_select_mitigation();
> -
> - if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
> - pr_info("%s\n", ssb_strings[ssb_mode]);
> }
Extra whitespace at end of function.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 16/35] x86/bugs: Restructure ssb mitigation
2025-02-11 1:10 ` Josh Poimboeuf
@ 2025-02-12 16:45 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 16:45 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 7:10 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 16/35] x86/bugs: Restructure ssb mitigation
>
> On Wed, Jan 08, 2025 at 02:24:56PM -0600, David Kaplan wrote:
> > @@ -2272,15 +2280,6 @@ static enum ssb_mitigation __init
> __ssb_select_mitigation(void)
> > }
> > }
> >
> > - return mode;
> > -}
> > -
> > -static void ssb_select_mitigation(void)
> > -{
> > - ssb_mode = __ssb_select_mitigation();
> > -
> > - if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
> > - pr_info("%s\n", ssb_strings[ssb_mode]);
> > }
>
> Extra whitespace at end of function.
>
Ack
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 17/35] x86/bugs: Restructure l1tf mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (15 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 16/35] x86/bugs: Restructure ssb mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-11 1:21 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 18/35] x86/bugs: Restructure srso mitigation David Kaplan
` (18 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure l1tf to use select/apply functions to create consistent
vulnerability handling.
Define new AUTO mitigation for l1tf.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/include/asm/processor.h | 1 +
arch/x86/kernel/cpu/bugs.c | 27 +++++++++++++++++++++------
arch/x86/kvm/vmx/vmx.c | 2 ++
3 files changed, 24 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 90278d0c071b..57760b0d553e 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -746,6 +746,7 @@ void store_cpu_caps(struct cpuinfo_x86 *info);
enum l1tf_mitigations {
L1TF_MITIGATION_OFF,
+ L1TF_MITIGATION_AUTO,
L1TF_MITIGATION_FLUSH_NOWARN,
L1TF_MITIGATION_FLUSH,
L1TF_MITIGATION_FLUSH_NOSMT,
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b07726a8dd3b..08ac515df888 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -67,6 +67,7 @@ static void __init spectre_v2_user_apply_mitigation(void);
static void __init ssb_select_mitigation(void);
static void __init ssb_apply_mitigation(void);
static void __init l1tf_select_mitigation(void);
+static void __init l1tf_apply_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_update_mitigation(void);
static void __init mds_apply_mitigation(void);
@@ -225,6 +226,7 @@ void __init cpu_select_mitigations(void)
retbleed_apply_mitigation();
spectre_v2_user_apply_mitigation();
ssb_apply_mitigation();
+ l1tf_apply_mitigation();
mds_apply_mitigation();
taa_apply_mitigation();
mmio_apply_mitigation();
@@ -2535,7 +2537,7 @@ EXPORT_SYMBOL_GPL(itlb_multihit_kvm_mitigation);
/* Default mitigation for L1TF-affected CPUs */
enum l1tf_mitigations l1tf_mitigation __ro_after_init =
- IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_FLUSH : L1TF_MITIGATION_OFF;
+ IS_ENABLED(CONFIG_MITIGATION_L1TF) ? L1TF_MITIGATION_AUTO : L1TF_MITIGATION_OFF;
#if IS_ENABLED(CONFIG_KVM_INTEL)
EXPORT_SYMBOL_GPL(l1tf_mitigation);
#endif
@@ -2582,23 +2584,36 @@ static void override_cache_bits(struct cpuinfo_x86 *c)
}
static void __init l1tf_select_mitigation(void)
+{
+ if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) {
+ l1tf_mitigation = L1TF_MITIGATION_OFF;
+ return;
+ }
+
+ if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
+ if (cpu_mitigations_auto_nosmt())
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+ else
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+ }
+
+}
+
+static void __init l1tf_apply_mitigation(void)
{
u64 half_pa;
if (!boot_cpu_has_bug(X86_BUG_L1TF))
return;
- if (cpu_mitigations_off())
- l1tf_mitigation = L1TF_MITIGATION_OFF;
- else if (cpu_mitigations_auto_nosmt())
- l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
-
override_cache_bits(&boot_cpu_data);
switch (l1tf_mitigation) {
case L1TF_MITIGATION_OFF:
+ return;
case L1TF_MITIGATION_FLUSH_NOWARN:
case L1TF_MITIGATION_FLUSH:
+ case L1TF_MITIGATION_AUTO:
break;
case L1TF_MITIGATION_FLUSH_NOSMT:
case L1TF_MITIGATION_FULL:
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 893366e53732..99bdb9341be0 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -273,6 +273,7 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
case L1TF_MITIGATION_OFF:
l1tf = VMENTER_L1D_FLUSH_NEVER;
break;
+ case L1TF_MITIGATION_AUTO:
case L1TF_MITIGATION_FLUSH_NOWARN:
case L1TF_MITIGATION_FLUSH:
case L1TF_MITIGATION_FLUSH_NOSMT:
@@ -7643,6 +7644,7 @@ int vmx_vm_init(struct kvm *kvm)
case L1TF_MITIGATION_FLUSH_NOWARN:
/* 'I explicitly don't care' is set */
break;
+ case L1TF_MITIGATION_AUTO:
case L1TF_MITIGATION_FLUSH:
case L1TF_MITIGATION_FLUSH_NOSMT:
case L1TF_MITIGATION_FULL:
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 17/35] x86/bugs: Restructure l1tf mitigation
2025-01-08 20:24 ` [PATCH v3 17/35] x86/bugs: Restructure l1tf mitigation David Kaplan
@ 2025-02-11 1:21 ` Josh Poimboeuf
2025-02-12 16:47 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 1:21 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:57PM -0600, David Kaplan wrote:
> static void __init l1tf_select_mitigation(void)
> +{
> + if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) {
> + l1tf_mitigation = L1TF_MITIGATION_OFF;
> + return;
> + }
> +
> + if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
> + if (cpu_mitigations_auto_nosmt())
> + l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
> + else
> + l1tf_mitigation = L1TF_MITIGATION_FLUSH;
> + }
> +
> +}
Extra whitespace.
> +
> +static void __init l1tf_apply_mitigation(void)
> {
> u64 half_pa;
>
> if (!boot_cpu_has_bug(X86_BUG_L1TF))
> return;
>
> - if (cpu_mitigations_off())
> - l1tf_mitigation = L1TF_MITIGATION_OFF;
> - else if (cpu_mitigations_auto_nosmt())
> - l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
> -
> override_cache_bits(&boot_cpu_data);
>
> switch (l1tf_mitigation) {
> case L1TF_MITIGATION_OFF:
> + return;
Note the PTE inversion mitigation is already done unconditionally, the
X86_FEATURE_L1TF_PTEINV bit is just for reporting that. So this
shouldn't return.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 17/35] x86/bugs: Restructure l1tf mitigation
2025-02-11 1:21 ` Josh Poimboeuf
@ 2025-02-12 16:47 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 16:47 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 10, 2025 7:21 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 17/35] x86/bugs: Restructure l1tf mitigation
>
> On Wed, Jan 08, 2025 at 02:24:57PM -0600, David Kaplan wrote:
> > static void __init l1tf_select_mitigation(void)
> > +{
> > + if (!boot_cpu_has_bug(X86_BUG_L1TF) || cpu_mitigations_off()) {
> > + l1tf_mitigation = L1TF_MITIGATION_OFF;
> > + return;
> > + }
> > +
> > + if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
> > + if (cpu_mitigations_auto_nosmt())
> > + l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
> > + else
> > + l1tf_mitigation = L1TF_MITIGATION_FLUSH;
> > + }
> > +
> > +}
>
> Extra whitespace.
Ack
>
> > +
> > +static void __init l1tf_apply_mitigation(void)
> > {
> > u64 half_pa;
> >
> > if (!boot_cpu_has_bug(X86_BUG_L1TF))
> > return;
> >
> > - if (cpu_mitigations_off())
> > - l1tf_mitigation = L1TF_MITIGATION_OFF;
> > - else if (cpu_mitigations_auto_nosmt())
> > - l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
> > -
> > override_cache_bits(&boot_cpu_data);
> >
> > switch (l1tf_mitigation) {
> > case L1TF_MITIGATION_OFF:
> > + return;
>
> Note the PTE inverstion mitigation is already done unconditionally, the
> X86_FEATURE_L1TF_PTEINV bit is just for reporting that. So this shouldn't return.
>
Ok, will fix
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 18/35] x86/bugs: Restructure srso mitigation
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (16 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 17/35] x86/bugs: Restructure l1tf mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-11 16:39 ` Josh Poimboeuf
2025-01-08 20:24 ` [PATCH v3 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
` (17 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Restructure srso to use select/update/apply functions to create
consistent vulnerability handling. Like with retbleed, the command line
options directly select mitigations which can later be modified.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 188 ++++++++++++++++++-------------------
1 file changed, 90 insertions(+), 98 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 08ac515df888..aee2945bdef9 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -84,6 +84,8 @@ static void __init srbds_select_mitigation(void);
static void __init srbds_apply_mitigation(void);
static void __init l1d_flush_select_mitigation(void);
static void __init srso_select_mitigation(void);
+static void __init srso_update_mitigation(void);
+static void __init srso_apply_mitigation(void);
static void __init gds_select_mitigation(void);
static void __init gds_apply_mitigation(void);
static void __init bhi_select_mitigation(void);
@@ -200,11 +202,6 @@ void __init cpu_select_mitigations(void)
rfds_select_mitigation();
srbds_select_mitigation();
l1d_flush_select_mitigation();
-
- /*
- * srso_select_mitigation() depends and must run after
- * retbleed_select_mitigation().
- */
srso_select_mitigation();
gds_select_mitigation();
bhi_select_mitigation();
@@ -220,6 +217,7 @@ void __init cpu_select_mitigations(void)
taa_update_mitigation();
mmio_update_mitigation();
rfds_update_mitigation();
+ srso_update_mitigation();
spectre_v1_apply_mitigation();
spectre_v2_apply_mitigation();
@@ -232,6 +230,7 @@ void __init cpu_select_mitigations(void)
mmio_apply_mitigation();
rfds_apply_mitigation();
srbds_apply_mitigation();
+ srso_apply_mitigation();
gds_apply_mitigation();
bhi_apply_mitigation();
}
@@ -2673,6 +2672,7 @@ early_param("l1tf", l1tf_cmdline);
enum srso_mitigation {
SRSO_MITIGATION_NONE,
+ SRSO_MITIGATION_AUTO,
SRSO_MITIGATION_UCODE_NEEDED,
SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED,
SRSO_MITIGATION_MICROCODE,
@@ -2681,14 +2681,6 @@ enum srso_mitigation {
SRSO_MITIGATION_IBPB_ON_VMEXIT,
};
-enum srso_mitigation_cmd {
- SRSO_CMD_OFF,
- SRSO_CMD_MICROCODE,
- SRSO_CMD_SAFE_RET,
- SRSO_CMD_IBPB,
- SRSO_CMD_IBPB_ON_VMEXIT,
-};
-
static const char * const srso_strings[] = {
[SRSO_MITIGATION_NONE] = "Vulnerable",
[SRSO_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
@@ -2699,8 +2691,7 @@ static const char * const srso_strings[] = {
[SRSO_MITIGATION_IBPB_ON_VMEXIT] = "Mitigation: IBPB on VMEXIT only"
};
-static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_NONE;
-static enum srso_mitigation_cmd srso_cmd __ro_after_init = SRSO_CMD_SAFE_RET;
+static enum srso_mitigation srso_mitigation __ro_after_init = SRSO_MITIGATION_AUTO;
static int __init srso_parse_cmdline(char *str)
{
@@ -2708,15 +2699,15 @@ static int __init srso_parse_cmdline(char *str)
return -EINVAL;
if (!strcmp(str, "off"))
- srso_cmd = SRSO_CMD_OFF;
+ srso_mitigation = SRSO_MITIGATION_NONE;
else if (!strcmp(str, "microcode"))
- srso_cmd = SRSO_CMD_MICROCODE;
+ srso_mitigation = SRSO_MITIGATION_MICROCODE;
else if (!strcmp(str, "safe-ret"))
- srso_cmd = SRSO_CMD_SAFE_RET;
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
else if (!strcmp(str, "ibpb"))
- srso_cmd = SRSO_CMD_IBPB;
+ srso_mitigation = SRSO_MITIGATION_IBPB;
else if (!strcmp(str, "ibpb-vmexit"))
- srso_cmd = SRSO_CMD_IBPB_ON_VMEXIT;
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
else
pr_err("Ignoring unknown SRSO option (%s).", str);
@@ -2730,13 +2721,14 @@ static void __init srso_select_mitigation(void)
{
bool has_microcode = boot_cpu_has(X86_FEATURE_IBPB_BRTYPE);
- if (!boot_cpu_has_bug(X86_BUG_SRSO) ||
- cpu_mitigations_off() ||
- srso_cmd == SRSO_CMD_OFF) {
- if (boot_cpu_has(X86_FEATURE_SBPB))
- x86_pred_cmd = PRED_CMD_SBPB;
+ if (!boot_cpu_has_bug(X86_BUG_SRSO) || cpu_mitigations_off())
+ srso_mitigation = SRSO_MITIGATION_NONE;
+
+ if (srso_mitigation == SRSO_MITIGATION_NONE)
return;
- }
+
+ if (srso_mitigation == SRSO_MITIGATION_AUTO)
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
if (has_microcode) {
/*
@@ -2749,98 +2741,98 @@ static void __init srso_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
return;
}
-
- if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB) {
- srso_mitigation = SRSO_MITIGATION_IBPB;
- goto out;
- }
} else {
pr_warn("IBPB-extending microcode not applied!\n");
pr_warn(SRSO_NOTICE);
- /* may be overwritten by SRSO_CMD_SAFE_RET below */
- srso_mitigation = SRSO_MITIGATION_UCODE_NEEDED;
+ /* Fall-back to Safe-RET */
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
}
- switch (srso_cmd) {
- case SRSO_CMD_MICROCODE:
- if (has_microcode) {
- srso_mitigation = SRSO_MITIGATION_MICROCODE;
- pr_warn(SRSO_NOTICE);
- }
+ switch (srso_mitigation) {
+ case SRSO_MITIGATION_MICROCODE:
break;
- case SRSO_CMD_SAFE_RET:
- if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
- goto ibpb_on_vmexit;
-
- if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
- /*
- * Enable the return thunk for generated code
- * like ftrace, static_call, etc.
- */
- setup_force_cpu_cap(X86_FEATURE_RETHUNK);
- setup_force_cpu_cap(X86_FEATURE_UNRET);
-
- if (boot_cpu_data.x86 == 0x19) {
- setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
- x86_return_thunk = srso_alias_return_thunk;
- } else {
- setup_force_cpu_cap(X86_FEATURE_SRSO);
- x86_return_thunk = srso_return_thunk;
- }
- if (has_microcode)
- srso_mitigation = SRSO_MITIGATION_SAFE_RET;
- else
- srso_mitigation = SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED;
- } else {
+ case SRSO_MITIGATION_SAFE_RET:
+ case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+ if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
- }
+ else if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
break;
- case SRSO_CMD_IBPB:
- if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
- if (has_microcode) {
- setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
- srso_mitigation = SRSO_MITIGATION_IBPB;
-
- /*
- * IBPB on entry already obviates the need for
- * software-based untraining so clear those in case some
- * other mitigation like Retbleed has selected them.
- */
- setup_clear_cpu_cap(X86_FEATURE_UNRET);
- setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
- }
- } else {
+ case SRSO_MITIGATION_IBPB:
+ if (!IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY))
pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
- }
break;
-ibpb_on_vmexit:
- case SRSO_CMD_IBPB_ON_VMEXIT:
- if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
- if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
- srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
-
- /*
- * There is no need for RSB filling: entry_ibpb() ensures
- * all predictions, including the RSB, are invalidated,
- * regardless of IBPB implementation.
- */
- setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
- }
- } else {
+ case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+ if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
- }
+ break;
+ default:
+ break;
+ }
+}
+
+static void __init srso_update_mitigation(void)
+{
+ /* If retbleed is using IBPB, that works for SRSO as well */
+ if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB)
+ srso_mitigation = SRSO_MITIGATION_IBPB;
+
+ if (srso_mitigation != SRSO_MITIGATION_NONE)
+ pr_info("%s\n", srso_strings[srso_mitigation]);
+}
+
+static void __init srso_apply_mitigation(void)
+{
+ if (srso_mitigation == SRSO_MITIGATION_NONE) {
+ if (boot_cpu_has(X86_FEATURE_SBPB))
+ x86_pred_cmd = PRED_CMD_SBPB;
+ return;
+ }
+ switch (srso_mitigation) {
+ case SRSO_MITIGATION_SAFE_RET:
+ case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
+ /*
+ * Enable the return thunk for generated code
+ * like ftrace, static_call, etc.
+ */
+ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+ setup_force_cpu_cap(X86_FEATURE_UNRET);
+
+ if (boot_cpu_data.x86 == 0x19) {
+ setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+ x86_return_thunk = srso_alias_return_thunk;
+ } else {
+ setup_force_cpu_cap(X86_FEATURE_SRSO);
+ x86_return_thunk = srso_return_thunk;
+ }
+ break;
+ case SRSO_MITIGATION_IBPB:
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
+ /*
+ * IBPB on entry already obviates the need for
+ * software-based untraining so clear those in case some
+ * other mitigation like Retbleed has selected them.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_UNRET);
+ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+ break;
+ case SRSO_MITIGATION_IBPB_ON_VMEXIT:
+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ /*
+ * There is no need for RSB filling: entry_ibpb() ensures
+ * all predictions, including the RSB, are invalidated,
+ * regardless of IBPB implementation.
+ */
+ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
break;
default:
break;
}
-out:
- pr_info("%s\n", srso_strings[srso_mitigation]);
}
#undef pr_fmt
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 18/35] x86/bugs: Restructure srso mitigation
2025-01-08 20:24 ` [PATCH v3 18/35] x86/bugs: Restructure srso mitigation David Kaplan
@ 2025-02-11 16:39 ` Josh Poimboeuf
2025-02-12 17:01 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 16:39 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:58PM -0600, David Kaplan wrote:
> @@ -2749,98 +2741,98 @@ static void __init srso_select_mitigation(void)
> if (has_microcode) {
> /*
> * Zen1/2 with SMT off aren't vulnerable after the right
> * IBPB microcode has been applied.
> *
> * Zen1/2 don't have SBPB, no need to try to enable it here.
> */
This second paragraph no longer applies here since enablement isn't done
in this function anyway.
> if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
> setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
> return;
> }
This should also set 'srso_mitigation = SRSO_MITIGATION_NONE', otherwise
it will end up applying the mitigation.
> + switch (srso_mitigation) {
> + case SRSO_MITIGATION_MICROCODE:
> break;
The switch statement has a default case so this one isn't needed.
>
> + case SRSO_MITIGATION_SAFE_RET:
> + case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
> + if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
> pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
> - }
> + else if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
> + srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
This misses the below SRSO_MITIGATION_IBPB_ON_VMEXIT check for
CONFIG_MITIGATION_SRSO.
Though, that doesn't make any sense. What they really need to be
checking for is CONFIG_MITIGATION_IBPB_ENTRY.
> + case SRSO_MITIGATION_IBPB_ON_VMEXIT:
> + if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
> pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
> - }
> + break;
This is an existing bug, but as mentioned above this should be checking
for CONFIG_MITIGATION_IBPB_ENTRY instead of CONFIG_MITIGATION_SRSO.
> +static void __init srso_update_mitigation(void)
> +{
> + /* If retbleed is using IBPB, that works for SRSO as well */
> + if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB)
> + srso_mitigation = SRSO_MITIGATION_IBPB;
Another dependency on retbleed_update_mitigation().
> + if (srso_mitigation != SRSO_MITIGATION_NONE)
> + pr_info("%s\n", srso_strings[srso_mitigation]);
> +}
For consistency with others this should probably be something like
if (boot_cpu_has_bug(X86_BUG_SRSO) && !cpu_mitigations_off())
pr_info("%s\n", srso_strings[srso_mitigation]);
> + setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
> break;
> default:
> break;
> }
>
> -out:
> - pr_info("%s\n", srso_strings[srso_mitigation]);
> }
Extra whitespace.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 18/35] x86/bugs: Restructure srso mitigation
2025-02-11 16:39 ` Josh Poimboeuf
@ 2025-02-12 17:01 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 17:01 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 10:39 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 18/35] x86/bugs: Restructure srso mitigation
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, Jan 08, 2025 at 02:24:58PM -0600, David Kaplan wrote:
> > @@ -2749,98 +2741,98 @@ static void __init srso_select_mitigation(void)
> > if (has_microcode) {
> > /*
> > * Zen1/2 with SMT off aren't vulnerable after the right
> > * IBPB microcode has been applied.
> > *
> > * Zen1/2 don't have SBPB, no need to try to enable it here.
> > */
>
> This second paragraph no longer applies here since enablement isn't done in this
> function anyway.
Ah, good point
>
> > if (boot_cpu_data.x86 < 0x19 && !cpu_smt_possible()) {
> > setup_force_cpu_cap(X86_FEATURE_SRSO_NO);
> > return;
> > }
>
> This should also set 'srso_mitigation = SRSO_MITIGATION_NONE', otherwise it
> will end up applying the mitigation.
Good catch, will fix.
>
>
> > + switch (srso_mitigation) {
> > + case SRSO_MITIGATION_MICROCODE:
> > break;
>
> The switch statement has a default case so this one isn't needed.
Ack
>
> >
> > + case SRSO_MITIGATION_SAFE_RET:
> > + case SRSO_MITIGATION_SAFE_RET_UCODE_NEEDED:
> > + if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
> > pr_err("WARNING: kernel not compiled with
> MITIGATION_SRSO.\n");
> > - }
> > + else if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
> > + srso_mitigation =
> > + SRSO_MITIGATION_IBPB_ON_VMEXIT;
>
> This misses the below SRSO_MITIGATION_IBPB_ON_VMEXIT check for
> CONFIG_MITIGATION_SRSO.
>
> Though, that doesn't make any sense. What they really need to be checking for is
> CONFIG_MITIGATION_IBPB_ENTRY.
>
> > + case SRSO_MITIGATION_IBPB_ON_VMEXIT:
> > + if (!IS_ENABLED(CONFIG_MITIGATION_SRSO))
> > pr_err("WARNING: kernel not compiled with
> MITIGATION_SRSO.\n");
> > - }
> > + break;
>
> This is an existing bug, but as mentioned above this should be checking for
> CONFIG_MITIGATION_IBPB_ENTRY instead of CONFIG_MITIGATION_SRSO.
Agreed, will fix both cases
>
> > +static void __init srso_update_mitigation(void) {
> > + /* If retbleed is using IBPB, that works for SRSO as well */
> > + if (retbleed_mitigation == RETBLEED_MITIGATION_IBPB)
> > + srso_mitigation = SRSO_MITIGATION_IBPB;
>
> Another dependency on retbleed_update_mitigation().
Well, not really (other than the bizarre retbleed='stuff' on AMD case mentioned in the other patch). That is, I don't think there's a case where it matters whether retbleed_update_mitigation() runs first or not.
But I can at least document that this function reads retbleed_mitigation.
>
> > + if (srso_mitigation != SRSO_MITIGATION_NONE)
> > + pr_info("%s\n", srso_strings[srso_mitigation]); }
>
> For consistency with others this should probably be something like
>
> if (boot_cpu_has_bug(X86_BUG_SRSO) && !cpu_mitigations_off())
> pr_info("%s\n", srso_strings[srso_mitigation]);
Ok
>
> > + setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
> > break;
> > default:
> > break;
> > }
> >
> > -out:
> > - pr_info("%s\n", srso_strings[srso_mitigation]);
> > }
>
> Extra whitespace.
Ack
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 19/35] Documentation/x86: Document the new attack vector controls
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (17 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 18/35] x86/bugs: Restructure srso mitigation David Kaplan
@ 2025-01-08 20:24 ` David Kaplan
2025-02-11 16:43 ` Josh Poimboeuf
2025-01-08 20:25 ` [PATCH v3 20/35] x86/bugs: Define attack vectors David Kaplan
` (16 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:24 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Document the 5 new attack vector command line options, how they
interact with existing vulnerability controls, and recommendations on
when they can be disabled.
Note that while mitigating against untrusted userspace requires both
mitigate_user_kernel and mitigate_user_user, these are kept separate.
The kernel can control what code executes inside of it, and that may
affect the risk associated with vulnerabilities, especially if new kernel
mitigations are implemented. The same isn't typically true of userspace.
In other words, the risk associated with user_user or guest_guest
attacks is unlikely to change over time, while the risk associated with
user_kernel or guest_host attacks may change. Therefore, these controls
are separated.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
.../hw-vuln/attack_vector_controls.rst | 172 ++++++++++++++++++
Documentation/admin-guide/hw-vuln/index.rst | 1 +
2 files changed, 173 insertions(+)
create mode 100644 Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
diff --git a/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst b/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
new file mode 100644
index 000000000000..541c8a3cac13
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
@@ -0,0 +1,172 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Attack Vector Controls
+======================
+
+Attack vector controls provide a simple method to configure only the mitigations
+for CPU vulnerabilities which are relevant given the intended use of a system.
+Administrators are encouraged to consider which attack vectors are relevant and
+disable all others in order to recoup system performance.
+
+When new relevant CPU vulnerabilities are found, they will be added to these
+attack vector controls so administrators will likely not need to reconfigure
+their command line parameters as mitigations will continue to be correctly
+applied based on the chosen attack vector controls.
+
+Attack Vectors
+--------------
+
+There are 5 sets of attack-vector mitigations currently supported by the kernel:
+
+#. :ref:`user_kernel` (mitigate_user_kernel=)
+#. :ref:`user_user` (mitigate_user_user=)
+#. :ref:`guest_host` (mitigate_guest_host=)
+#. :ref:`guest_guest` (mitigate_guest_guest=)
+#. :ref:`cross_thread` (mitigate_cross_thread=)
+
+Each control may either be specified as 'off' or 'on'.
+
+.. _user_kernel:
+
+User-to-Kernel
+^^^^^^^^^^^^^^
+
+The user-to-kernel attack vector involves a malicious userspace program
+attempting to leak kernel data into userspace by exploiting a CPU vulnerability.
+The kernel data involved might be limited to certain kernel memory, or include
+all memory in the system, depending on the vulnerability exploited.
+
+If no untrusted userspace applications are being run, such as with single-user
+systems, consider disabling user-to-kernel mitigations.
+
+Note that the CPU vulnerabilities mitigated by Linux have generally not been
+shown to be exploitable from browser-based sandboxes. User-to-kernel
+mitigations are therefore mostly relevant if unknown userspace applications may
+be run by untrusted users.
+
+*mitigate_user_kernel defaults to 'on'*
+
+.. _user_user:
+
+User-to-User
+^^^^^^^^^^^^
+
+The user-to-user attack vector involves a malicious userspace program attempting
+to influence the behavior of another unsuspecting userspace program in order to
+exfiltrate data. The vulnerability of a userspace program is based on the
+program itself and the interfaces it provides.
+
+If no untrusted userspace applications are being run, consider disabling
+user-to-user mitigations.
+
+Note that because the Linux kernel contains a mapping of all physical memory,
+preventing a malicious userspace program from leaking data from another
+userspace program requires mitigating user-to-kernel attacks as well for
+complete protection.
+
+*mitigate_user_user defaults to 'on'*
+
+.. _guest_host:
+
+Guest-to-Host
+^^^^^^^^^^^^^
+
+The guest-to-host attack vector involves a malicious VM attempting to leak
+hypervisor data into the VM. The data involved may be limited, or may
+potentially include all memory in the system, depending on the vulnerability
+exploited.
+
+If no untrusted VMs are being run, consider disabling guest-to-host mitigations.
+
+*mitigate_guest_host defaults to 'on' if KVM support is present*
+
+.. _guest_guest:
+
+Guest-to-Guest
+^^^^^^^^^^^^^^
+
+The guest-to-guest attack vector involves a malicious VM attempting to influence
+the behavior of another unsuspecting VM in order to exfiltrate data. The
+vulnerability of a VM is based on the code inside the VM itself and the
+interfaces it provides.
+
+If no untrusted VMs are being run, or only a single VM is being run, consider
+disabling guest-to-guest mitigations.
+
+Similar to the user-to-user attack vector, preventing a malicious VM from
+leaking data from another VM requires mitigating guest-to-host attacks as well
+due to the Linux kernel phys map.
+
+*mitigate_guest_guest defaults to 'on' if KVM support is present*
+
+.. _cross_thread:
+
+Cross-Thread
+^^^^^^^^^^^^
+
+The cross-thread attack vector involves a malicious userspace program or
+malicious VM either observing or attempting to influence the behavior of code
+running on the SMT sibling thread in order to exfiltrate data.
+
+Many cross-thread attacks can only be mitigated if SMT is disabled, which will
+result in reduced CPU core count and reduced performance. Enabling mitigations
+for the cross-thread attack vector may result in SMT being disabled, depending
+on the CPU vulnerabilities detected.
+
+*mitigate_cross_thread defaults to 'off'*
+
+Interactions with command-line options
+--------------------------------------
+
+The global 'mitigations=off' command line takes precedence over all attack
+vector controls and will disable all mitigations.
+
+Vulnerability-specific controls (e.g. "retbleed=off") take precedence over all
+attack vector controls. Mitigations for individual vulnerabilities may be
+turned on or off via their command-line options regardless of the attack vector
+controls.
+
+Summary of attack-vector mitigations
+------------------------------------
+
+When a vulnerability is mitigated due to an attack-vector control, the default
+mitigation option for that particular vulnerability is used. To use a different
+mitigation, please use the vulnerability-specific command line option.
+
+The table below summarizes which vulnerabilities are mitigated when different
+attack vectors are enabled and assuming the CPU is vulnerable.
+
+=============== ============== ============ ============= ============== ============
+Vulnerability User-to-Kernel User-to-User Guest-to-Host Guest-to-Guest Cross-Thread
+=============== ============== ============ ============= ============== ============
+BHI X X
+GDS X X X X X
+L1TF X (Note 1)
+MDS X X X X (Note 1)
+MMIO X X X X (Note 1)
+Meltdown X
+Retbleed X X (Note 2)
+RFDS X X X X
+Spectre_v1 X
+Spectre_v2 X X
+Spectre_v2_user X X
+SRBDS X X X X
+SRSO X X
+SSB (Note 3)
+TAA X X X X (Note 1)
+=============== ============== ============ ============= ============== ============
+
+Notes:
+ 1 -- Disables SMT if cross-thread mitigations are selected and CPU is vulnerable
+
+ 2 -- Disables SMT if cross-thread mitigations are selected, CPU is vulnerable,
+ and STIBP is not supported
+
+ 3 -- Speculative store bypass is always enabled by default (no kernel
+ mitigation applied) unless overridden with spec_store_bypass_disable option
+
+When an attack-vector is disabled (e.g., *mitigate_user_kernel=off*), all
+mitigations for the vulnerabilities listed in the above table are disabled,
+unless mitigation is required for a different enabled attack-vector or a
+mitigation is explicitly selected via a vulnerability-specific command line
+option.
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index ff0b440ef2dc..1add4a0baeb0 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -9,6 +9,7 @@ are configurable at compile, boot or run time.
.. toctree::
:maxdepth: 1
+ attack_vector_controls
spectre
l1tf
mds
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 19/35] Documentation/x86: Document the new attack vector controls
2025-01-08 20:24 ` [PATCH v3 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
@ 2025-02-11 16:43 ` Josh Poimboeuf
2025-02-11 16:57 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 16:43 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:24:59PM -0600, David Kaplan wrote:
> +Cross-Thread
> +^^^^^^^^^^^^
> +
> +The cross-thread attack vector involves a malicious userspace program or
> +malicious VM either observing or attempting to influence the behavior of code
> +running on the SMT sibling thread in order to exfiltrate data.
> +
> +Many cross-thread attacks can only be mitigated if SMT is disabled, which will
> +result in reduced CPU core count and reduced performance. Enabling mitigations
> +for the cross-thread attack vector may result in SMT being disabled, depending
> +on the CPU vulnerabilities detected.
> +
> +*mitigate_cross_thread defaults to 'off'*
How does STIBP fit into this? It's a cross-thread mitigation, but it's
much cheaper than, say, disabling SMT.
The default is generally to enable STIBP where applicable, but *not* to
disable SMT.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 19/35] Documentation/x86: Document the new attack vector controls
2025-02-11 16:43 ` Josh Poimboeuf
@ 2025-02-11 16:57 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 16:57 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 10:44 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 19/35] Documentation/x86: Document the new attack
> vector controls
>
>
>
> On Wed, Jan 08, 2025 at 02:24:59PM -0600, David Kaplan wrote:
> > +Cross-Thread
> > +^^^^^^^^^^^^
> > +
> > +The cross-thread attack vector involves a malicious userspace program
> > +or malicious VM either observing or attempting to influence the
> > +behavior of code running on the SMT sibling thread in order to exfiltrate data.
> > +
> > +Many cross-thread attacks can only be mitigated if SMT is disabled,
> > +which will result in reduced CPU core count and reduced performance.
> > +Enabling mitigations for the cross-thread attack vector may result in
> > +SMT being disabled, depending on the CPU vulnerabilities detected.
> > +
> > +*mitigate_cross_thread defaults to 'off'*
>
> How does STIBP fit into this? It's a cross-thread mitigation, but it's much cheaper
> than, say, disabling SMT.
>
> The default is generally to enable STIBP where applicable, but *not* to disable SMT.
>
The current patch series treats STIBP and IBPB similarly and will enable them if the user->user or guest->guest attack vectors are selected.

Technically, STIBP is a cross-thread protection and only needs to be enabled if cross-thread protection is desired. The challenge is that mitigate_cross_thread defaults to 'off', while STIBP has historically defaulted to 'on'. This is arguably an inconsistency in the current code, although it presumably comes from the fact that enabling STIBP is relatively cheap while disabling SMT is not. But from a security standpoint, mitigating only some attacks does not actually mitigate the attack vector.

Open to feedback on how to handle this. I can leave it as is and perhaps just document that STIBP gets enabled under the attack vectors mentioned above. I do not want to change any of the mitigation defaults though.
Thanks --David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (18 preceding siblings ...)
2025-01-08 20:24 ` [PATCH v3 19/35] Documentation/x86: Document the new attack vector controls David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-02-11 18:07 ` Josh Poimboeuf
2025-01-08 20:25 ` [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
` (15 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Define 5 new attack vectors that are used for controlling CPU
speculation mitigations and associated command line options. Each
attack vector may be enabled or disabled, which affects the CPU
mitigations enabled.
The default settings for these attack vectors are consistent with
existing kernel defaults, other than the automatic disabling of VM-based
attack vectors if KVM support is not present.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/include/asm/bugs.h | 11 +++++++
arch/x86/kernel/cpu/bugs.c | 60 +++++++++++++++++++++++++++++++++++++
2 files changed, 71 insertions(+)
diff --git a/arch/x86/include/asm/bugs.h b/arch/x86/include/asm/bugs.h
index f25ca2d709d4..354d04a964f0 100644
--- a/arch/x86/include/asm/bugs.h
+++ b/arch/x86/include/asm/bugs.h
@@ -12,4 +12,15 @@ static inline int ppro_with_ram_bug(void) { return 0; }
extern void cpu_bugs_smt_update(void);
+enum cpu_attack_vectors {
+ CPU_MITIGATE_USER_KERNEL,
+ CPU_MITIGATE_USER_USER,
+ CPU_MITIGATE_GUEST_HOST,
+ CPU_MITIGATE_GUEST_GUEST,
+ CPU_MITIGATE_CROSS_THREAD,
+ NR_CPU_ATTACK_VECTORS,
+};
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v);
+
#endif /* _ASM_X86_BUGS_H */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index aee2945bdef9..88eba8e4c7fb 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -169,6 +169,66 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
+#ifdef CONFIG_CPU_MITIGATIONS
+/*
+ * All except the cross-thread attack vector are mitigated by default.
+ * Cross-thread mitigation often requires disabling SMT which is too expensive
+ * to be enabled by default.
+ *
+ * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
+ * present.
+ */
+static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
+ [CPU_MITIGATE_USER_KERNEL] = true,
+ [CPU_MITIGATE_USER_USER] = true,
+ [CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
+ [CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
+ [CPU_MITIGATE_CROSS_THREAD] = false
+};
+
+#define DEFINE_ATTACK_VECTOR(opt, v) \
+ static int __init v##_parse_cmdline(char *arg) \
+{ \
+ if (!strcmp(arg, "off")) \
+ cpu_mitigate_attack_vectors[v] = false; \
+ else if (!strcmp(arg, "on")) \
+ cpu_mitigate_attack_vectors[v] = true; \
+ else \
+ pr_warn("Unsupported " opt "=%s\n", arg); \
+ return 0; \
+} \
+early_param(opt, v##_parse_cmdline)
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
+{
+ if (v < NR_CPU_ATTACK_VECTORS)
+ return cpu_mitigate_attack_vectors[v];
+
+ WARN_ON_ONCE(v >= NR_CPU_ATTACK_VECTORS);
+ return false;
+}
+
+#else
+#define DEFINE_ATTACK_VECTOR(opt, v) \
+static int __init v##_parse_cmdline(char *arg) \
+{ \
+ pr_crit("Kernel compiled without mitigations, ignoring %s; system may still be vulnerable\n", opt); \
+ return 0; \
+} \
+early_param(opt, v##_parse_cmdline)
+
+bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
+{
+ return false;
+}
+#endif
+
+DEFINE_ATTACK_VECTOR("mitigate_user_kernel", CPU_MITIGATE_USER_KERNEL);
+DEFINE_ATTACK_VECTOR("mitigate_user_user", CPU_MITIGATE_USER_USER);
+DEFINE_ATTACK_VECTOR("mitigate_guest_host", CPU_MITIGATE_GUEST_HOST);
+DEFINE_ATTACK_VECTOR("mitigate_guest_guest", CPU_MITIGATE_GUEST_GUEST);
+DEFINE_ATTACK_VECTOR("mitigate_cross_thread", CPU_MITIGATE_CROSS_THREAD);
+
void __init cpu_select_mitigations(void)
{
/*
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-01-08 20:25 ` [PATCH v3 20/35] x86/bugs: Define attack vectors David Kaplan
@ 2025-02-11 18:07 ` Josh Poimboeuf
2025-02-12 17:20 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 18:07 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:25:00PM -0600, David Kaplan wrote:
> Define 5 new attack vectors that are used for controlling CPU
> speculation mitigations and associated command line options. Each
> attack vector may be enabled or disabled, which affects the CPU
> mitigations enabled.
>
> The default settings for these attack vectors are consistent with
> existing kernel defaults, other than the automatic disabling of VM-based
> attack vectors if KVM support is not present.
>
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
> arch/x86/include/asm/bugs.h | 11 +++++++
> arch/x86/kernel/cpu/bugs.c | 60 +++++++++++++++++++++++++++++++++++++
> 2 files changed, 71 insertions(+)
>
> diff --git a/arch/x86/include/asm/bugs.h b/arch/x86/include/asm/bugs.h
> index f25ca2d709d4..354d04a964f0 100644
> --- a/arch/x86/include/asm/bugs.h
> +++ b/arch/x86/include/asm/bugs.h
> @@ -12,4 +12,15 @@ static inline int ppro_with_ram_bug(void) { return 0; }
>
> extern void cpu_bugs_smt_update(void);
>
> +enum cpu_attack_vectors {
> + CPU_MITIGATE_USER_KERNEL,
> + CPU_MITIGATE_USER_USER,
> + CPU_MITIGATE_GUEST_HOST,
> + CPU_MITIGATE_GUEST_GUEST,
> + CPU_MITIGATE_CROSS_THREAD,
> + NR_CPU_ATTACK_VECTORS,
> +};
> +
> +bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v);
> +
> #endif /* _ASM_X86_BUGS_H */
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index aee2945bdef9..88eba8e4c7fb 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -169,6 +169,66 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
> DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
> EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
>
> +#ifdef CONFIG_CPU_MITIGATIONS
> +/*
> + * All except the cross-thread attack vector are mitigated by default.
> + * Cross-thread mitigation often requires disabling SMT which is too expensive
> + * to be enabled by default.
> + *
> + * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
> + * present.
> + */
> +static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
> + [CPU_MITIGATE_USER_KERNEL] = true,
> + [CPU_MITIGATE_USER_USER] = true,
> + [CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
> + [CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
> + [CPU_MITIGATE_CROSS_THREAD] = false
> +};
> +
> +#define DEFINE_ATTACK_VECTOR(opt, v) \
s/opt/name/ to distinguish it from v.
> + static int __init v##_parse_cmdline(char *arg) \
Instead of "CPU_MITIGATE_USER_KERNEL_parse_cmdline" it should really be
"mitigate_user_kernel_cmdline".
Also this line shouldn't be indented.
Also it's more readable to tab align all the line continuation
backslashes.
> +{ \
> + if (!strcmp(arg, "off")) \
> + cpu_mitigate_attack_vectors[v] = false; \
> + else if (!strcmp(arg, "on")) \
> + cpu_mitigate_attack_vectors[v] = true; \
> + else \
> + pr_warn("Unsupported " opt "=%s\n", arg); \
> + return 0; \
> +} \
> +early_param(opt, v##_parse_cmdline)
> +
> +bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
> +{
> + if (v < NR_CPU_ATTACK_VECTORS)
> + return cpu_mitigate_attack_vectors[v];
> +
> + WARN_ON_ONCE(v >= NR_CPU_ATTACK_VECTORS);
> + return false;
> +}
This error can be checked at build time.
> +#else
This needs a /* !CONFIG_CPU_MITIGATIONS */ comment.
> #endif
As does this.
So, something like this:
#ifdef CONFIG_CPU_MITIGATIONS
/*
* All except the cross-thread attack vector are mitigated by default.
* Cross-thread mitigation often requires disabling SMT which is too expensive
* to be enabled by default.
*
* Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
* present.
*/
static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS] __ro_after_init = {
[CPU_MITIGATE_USER_KERNEL] = true,
[CPU_MITIGATE_USER_USER] = true,
[CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
[CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
[CPU_MITIGATE_CROSS_THREAD] = false
};
#define DEFINE_ATTACK_VECTOR(name, v) \
static int __init name##_parse_cmdline(char *arg) \
{ \
if (!strcmp(arg, "off")) \
cpu_mitigate_attack_vectors[v] = false; \
else if (!strcmp(arg, "on")) \
cpu_mitigate_attack_vectors[v] = true; \
else \
pr_warn("Unsupported " __stringify(name) "=%s\n", arg); \
return 0; \
} \
early_param(__stringify(name), name##_parse_cmdline)
#define cpu_mitigate_attack_vector(v) \
({ \
BUILD_BUG_ON(v >= NR_CPU_ATTACK_VECTORS); \
cpu_mitigate_attack_vectors[v]; \
})
#else /* !CONFIG_CPU_MITIGATIONS */
#define DEFINE_ATTACK_VECTOR(name, v) \
static int __init name##_parse_cmdline(char *arg) \
{ \
pr_crit("Kernel compiled without mitigations, ignoring %s; system may still be vulnerable\n", \
__stringify(name)); \
return 0; \
} \
early_param(__stringify(name), name##_parse_cmdline)
#define cpu_mitigate_attack_vector(v) false
#endif /* !CONFIG_CPU_MITIGATIONS */
DEFINE_ATTACK_VECTOR(mitigate_user_kernel, CPU_MITIGATE_USER_KERNEL);
DEFINE_ATTACK_VECTOR(mitigate_user_user, CPU_MITIGATE_USER_USER);
DEFINE_ATTACK_VECTOR(mitigate_guest_host, CPU_MITIGATE_GUEST_HOST);
DEFINE_ATTACK_VECTOR(mitigate_guest_guest, CPU_MITIGATE_GUEST_GUEST);
DEFINE_ATTACK_VECTOR(mitigate_cross_thread, CPU_MITIGATE_CROSS_THREAD);
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-11 18:07 ` Josh Poimboeuf
@ 2025-02-12 17:20 ` Kaplan, David
2025-02-17 17:33 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 17:20 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 12:08 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, Jan 08, 2025 at 02:25:00PM -0600, David Kaplan wrote:
> > Define 5 new attack vectors that are used for controlling CPU
> > speculation mitigations and associated command line options. Each
> > attack vector may be enabled or disabled, which affects the CPU
> > mitigations enabled.
> >
> > The default settings for these attack vectors are consistent with
> > existing kernel defaults, other than the automatic disabling of VM-based
> > attack vectors if KVM support is not present.
> >
> > Signed-off-by: David Kaplan <david.kaplan@amd.com>
> > ---
> > arch/x86/include/asm/bugs.h | 11 +++++++
> > arch/x86/kernel/cpu/bugs.c | 60
> +++++++++++++++++++++++++++++++++++++
> > 2 files changed, 71 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/bugs.h b/arch/x86/include/asm/bugs.h
> > index f25ca2d709d4..354d04a964f0 100644
> > --- a/arch/x86/include/asm/bugs.h
> > +++ b/arch/x86/include/asm/bugs.h
> > @@ -12,4 +12,15 @@ static inline int ppro_with_ram_bug(void) { return 0; }
> >
> > extern void cpu_bugs_smt_update(void);
> >
> > +enum cpu_attack_vectors {
> > + CPU_MITIGATE_USER_KERNEL,
> > + CPU_MITIGATE_USER_USER,
> > + CPU_MITIGATE_GUEST_HOST,
> > + CPU_MITIGATE_GUEST_GUEST,
> > + CPU_MITIGATE_CROSS_THREAD,
> > + NR_CPU_ATTACK_VECTORS,
> > +};
> > +
> > +bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v);
> > +
> > #endif /* _ASM_X86_BUGS_H */
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index aee2945bdef9..88eba8e4c7fb 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -169,6 +169,66 @@
> DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
> > DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
> > EXPORT_SYMBOL_GPL(mmio_stale_data_clear);
> >
> > +#ifdef CONFIG_CPU_MITIGATIONS
> > +/*
> > + * All except the cross-thread attack vector are mitigated by default.
> > + * Cross-thread mitigation often requires disabling SMT which is too expensive
> > + * to be enabled by default.
> > + *
> > + * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
> > + * present.
> > + */
> > +static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS]
> __ro_after_init = {
> > + [CPU_MITIGATE_USER_KERNEL] = true,
> > + [CPU_MITIGATE_USER_USER] = true,
> > + [CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
> > + [CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
> > + [CPU_MITIGATE_CROSS_THREAD] = false
> > +};
> > +
> > +#define DEFINE_ATTACK_VECTOR(opt, v) \
>
> s/opt/name/ to distinguish it from v.
>
> > + static int __init v##_parse_cmdline(char *arg) \
>
> Instead of "CPU_MITIGATE_USER_KERNEL_parse_cmdline" it should really be
> "mitigate_user_kernel_cmdline".
>
> Also this line shouldn't be indented.
>
> Also it's more readable to tab align all the line continuation
> backslashes.
>
> > +{ \
> > + if (!strcmp(arg, "off")) \
> > + cpu_mitigate_attack_vectors[v] = false; \
> > + else if (!strcmp(arg, "on")) \
> > + cpu_mitigate_attack_vectors[v] = true; \
> > + else \
> > + pr_warn("Unsupported " opt "=%s\n", arg); \
> > + return 0; \
> > +} \
> > +early_param(opt, v##_parse_cmdline)
> > +
> > +bool cpu_mitigate_attack_vector(enum cpu_attack_vectors v)
> > +{
> > + if (v < NR_CPU_ATTACK_VECTORS)
> > + return cpu_mitigate_attack_vectors[v];
> > +
> > + WARN_ON_ONCE(v >= NR_CPU_ATTACK_VECTORS);
> > + return false;
> > +}
>
> This error can be checked at build time.
>
> > +#else
>
> This needs a /* !CONFIG_CPU_MITIGATIONS */ comment.
>
> > #endif
>
> As does this.
>
>
> So, something like so:
>
> #ifdef CONFIG_CPU_MITIGATIONS
> /*
> * All except the cross-thread attack vector are mitigated by default.
> * Cross-thread mitigation often requires disabling SMT which is too expensive
> * to be enabled by default.
> *
> * Guest-to-Host and Guest-to-Guest vectors are only needed if KVM support is
> * present.
> */
> static bool cpu_mitigate_attack_vectors[NR_CPU_ATTACK_VECTORS]
> __ro_after_init = {
> [CPU_MITIGATE_USER_KERNEL] = true,
> [CPU_MITIGATE_USER_USER] = true,
> [CPU_MITIGATE_GUEST_HOST] = IS_ENABLED(CONFIG_KVM),
> [CPU_MITIGATE_GUEST_GUEST] = IS_ENABLED(CONFIG_KVM),
> [CPU_MITIGATE_CROSS_THREAD] = false
> };
>
> #define DEFINE_ATTACK_VECTOR(name, v) \
> static int __init name##_parse_cmdline(char *arg) \
> { \
> if (!strcmp(arg, "off")) \
> cpu_mitigate_attack_vectors[v] = false; \
> else if (!strcmp(arg, "on")) \
> cpu_mitigate_attack_vectors[v] = true; \
> else \
> pr_warn("Unsupported " __stringify(name) "=%s\n", arg); \
> return 0; \
> } \
> early_param(__stringify(name), name##_parse_cmdline)
>
> #define cpu_mitigate_attack_vector(v) \
> ({ \
> BUILD_BUG_ON(v >= NR_CPU_ATTACK_VECTORS); \
> cpu_mitigate_attack_vectors[v]; \
> })
>
> #else /* !CONFIG_CPU_MITIGATIONS */
>
> #define DEFINE_ATTACK_VECTOR(name, v) \
> static int __init name##_parse_cmdline(char *arg) \
> { \
> pr_crit("Kernel compiled without mitigations, ignoring %s; system may still be
> vulnerable\n", \
> __stringify(name)); \
> return 0; \
> } \
> early_param(__stringify(name), name##_parse_cmdline)
>
> #define cpu_mitigate_attack_vector(v) false
>
> #endif /* !CONFIG_CPU_MITIGATIONS */
>
> DEFINE_ATTACK_VECTOR(mitigate_user_kernel,
> CPU_MITIGATE_USER_KERNEL);
> DEFINE_ATTACK_VECTOR(mitigate_user_user,
> CPU_MITIGATE_USER_USER);
> DEFINE_ATTACK_VECTOR(mitigate_guest_host,
> CPU_MITIGATE_GUEST_HOST);
> DEFINE_ATTACK_VECTOR(mitigate_guest_guest,
> CPU_MITIGATE_GUEST_GUEST);
> DEFINE_ATTACK_VECTOR(mitigate_cross_thread,
> CPU_MITIGATE_CROSS_THREAD);
Got it, thanks
--David Kaplan
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-12 17:20 ` Kaplan, David
@ 2025-02-17 17:33 ` Kaplan, David
2025-02-17 20:19 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-17 17:33 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Kaplan, David
> Sent: Wednesday, February 12, 2025 11:21 AM
> To: Josh Poimboeuf <jpoimboe@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
>
>
> > [...]
> >
> > #define cpu_mitigate_attack_vector(v) \
> > ({ \
> > BUILD_BUG_ON(v >= NR_CPU_ATTACK_VECTORS); \
> > cpu_mitigate_attack_vectors[v]; \
> > })
> >
> > [...]
>
So actually this doesn't quite work because the code in arch/x86/mm/pti.c has to call cpu_mitigate_attack_vector in order to check if PTI is required (it checks if user->kernel mitigations are needed). That's the only use of the attack vectors outside of bugs.c.
The original code (using a function and WARN_ON_ONCE) can work, or I could perhaps create a pti-specific function in bugs.c that the pti code can query. But right now I don't think there is any pti-related code in bugs.c at all.
Any suggestion?
--David Kaplan
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-17 17:33 ` Kaplan, David
@ 2025-02-17 20:19 ` Josh Poimboeuf
2025-02-17 20:38 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-17 20:19 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Mon, Feb 17, 2025 at 05:33:24PM +0000, Kaplan, David wrote:
> So actually this doesn't quite work because the code in
> arch/x86/mm/pti.c has to call cpu_mitigate_attack_vector in order to
> check if PTI is required (it checks if user->kernel mitigations are
> needed). That's the only use of the attack vectors outside of bugs.c.
>
> The original code (using a function and WARN_ON_ONCE) can work, or I
> could perhaps create a pti-specific function in bugs.c that the pti
> code can query. But right now I don't think there is any pti-related
> code in bugs.c at all.
>
> Any suggestion?
Hm. We *could* put the cpu_mitigate_attack_vector() macro in bugs.h and
make the array global (and possibly exported). That way anybody could
call it, but it would still have the compile-time check.
However... should these not actually be arch-generic options, like
mitigations= already is? It would make for a more consistent user
interface across arches.
They could even be integrated into the "mitigations=" interface. The
options could be combined in any order (separated by commas):
mitigations=user_kernel,user_user
mitigations=guest_host,user_kernel
...etc...
And e.g., "mitigations=off" would simply disable all the vectors.
That would remove ambiguity created by combining mitigations= with
mitigate_* and would make it easier for all the current
cpu_mitigations_off() callers: only one global enable/disable interface
to call instead of two. Any code calling cpu_mitigations_off() should
probably be calling something like cpu_mitigate_attack_vector() instead.
cpu_mitigations_off() and cpu_mitigations_auto_nosmt() could be
deprecated in favor of more vector-specific interfaces, and could be
removed once all the arches stop using them. They could be gated by a
temporary ARCH_USES_MITIGATION_VECTORS option. As could the per-vector
cmdline options.
Thoughts?
--
Josh
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-17 20:19 ` Josh Poimboeuf
@ 2025-02-17 20:38 ` Kaplan, David
2025-02-17 23:39 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-17 20:38 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 17, 2025 2:19 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
>
>
> On Mon, Feb 17, 2025 at 05:33:24PM +0000, Kaplan, David wrote:
> > So actually this doesn't quite work because the code in
> > arch/x86/mm/pti.c has to call cpu_mitigate_attack_vector in order to
> > check if PTI is required (it checks if user->kernel mitigations are
> > needed). That's the only use of the attack vectors outside of bugs.c.
> >
> > The original code (using a function and WARN_ON_ONCE) can work, or I
> > could perhaps create a pti-specific function in bugs.c that the pti
> > code can query. But right now I don't think there is any pti-related
> > code in bugs.c at all.
> >
> > Any suggestion?
>
> Hm. We *could* put the cpu_mitigate_attack_vector() macro in bugs.h and make the
> array global (and possibly exported). That way anybody could call it, but it would still
> have the compile-time check.
>
>
> However... should these not actually be arch-generic options, like mitigations=
> already is? It would make for a more consistent user interface across arches.
That's what I had in my patch series up until this one. Boris said to move them to x86-specific code because nobody else is using them yet and somebody down the road could move them.
I do agree that they can be arch-generic (hence why I originally put them in kernel/cpu.c) but I also don't know when (if ever) anyone from other archs will want to pick them up.
>
> They could even be integrated into the "mitigations=" interface. The options could
> be combined in any order (separated by commas):
>
> mitigations=user_kernel,user_user
> mitigations=guest_host,user_kernel
> ...etc...
>
> And e.g., "mitigations=off" would simply disable all the vectors.
Hmm, that's an interesting idea. I assume any vectors not listed would be considered 'off', unless no mitigations= option was specified (or mitigations=auto was), in which case they'd default to 'on' as they do today.
In other words:
mitigations=auto
=> all 4 vectors are 'on'
mitigations=user_kernel
=> user_kernel is 'on', all others are 'off'
That would also imply that:
mitigations=user_kernel mitigations=user_user
Would actually mean that user_user is 'on' and everything else is 'off'. Not sure if that's an issue or would be sufficiently obvious.
Then a question is how to integrate the mitigate_smt option we were just discussing since that needs a 3-way select. Or perhaps keep that one as a separate command line option.
>
> That would remove ambiguity created by combining mitigations= with
> mitigate_* and would make it easier for all the current
> cpu_mitigations_off() callers: only one global enable/disable interface to call instead
> of two. Any code calling cpu_mitigations_off() should probably be calling something
> like cpu_mitigate_attack_vector() instead.
>
> cpu_mitigations_off() and cpu_mitigations_auto_nosmt() could be deprecated in
> favor of more vector-specific interfaces, and could be removed once all the arches
> stop using them. They could be gated by a temporary
> ARCH_USES_MITIGATION_VECTORS option. As could the per-vector cmdline
> options.
>
> Thoughts?
>
I'm not sure there is really that much ambiguity...the global mitigations=off is the big button that disables everything. I don't think we can change that.
I think the other issue here may be that the attack vectors are defined to be rather low-priority in terms of selection. That is, you can disable all the attack vectors but then still enable an individual bug fix.
In other words, if you were to replace cpu_mitigations_off() with a function looking for all attack vectors to be off, that isn't quite correct because of the priority difference.
--David Kaplan
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-17 20:38 ` Kaplan, David
@ 2025-02-17 23:39 ` Josh Poimboeuf
2025-02-18 2:24 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-17 23:39 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Mon, Feb 17, 2025 at 08:38:07PM +0000, Kaplan, David wrote:
> > However... should these not actually be arch-generic options, like mitigations=
> > already is? It would make for a more consistent user interface across arches.
>
> That's what I had in my patch series up until this one. Boris said to
> move them to x86-specific code because nobody else is using them yet
> and somebody down the road could move them.
Ah, I guess I missed the previous versions. Sorry :-/
> I do agree that they can be arch-generic (hence why I originally put
> them in kernel/cpu.c) but I also don't know when (if ever) anyone from
> other archs will want to pick them up.
Well, for example, we already have the generic
prctl(PR_SET_SPECULATION_CTRL) which is used by arm64. If somebody
boots with mitigate_user_user=off on arm64, they could reasonably expect
those context switch mitigations to be disabled.
> > They could even be integrated into the "mitigations=" interface. The options could
> > be combined in any order (separated by commas):
> >
> > mitigations=user_kernel,user_user
> > mitigations=guest_host,user_kernel
> > ...etc...
> >
> > And e.g., "mitigations=off" would simply disable all the vectors.
>
> Hmm, that's an interesting idea. I assume that any vectors not listed
> would be considered 'off', unless no mitigations= was specified, or
> mitigations=auto was specified in which case they'd default to 'on'
> like they do today.
>
> In other words:
> mitigations=auto
> => all 4 vectors are 'on'
> mitigations=user_kernel
> => user_kernel is 'on', all others are 'off'
>
> That would also imply that:
> mitigations=user_kernel mitigations=user_user
>
> Would actually mean that user_user is 'on' and everything else is 'off'.
> Not sure if that's an issue or would be sufficiently obvious.
Hm, so I was actually thinking that multiple "mitigations=" options
would combine. I guess either way is confusing. The problem is that
they have side effects. Another idea below.
> Then a question is how to integrate the mitigate_smt option we were
> just discussing since that needs a 3-way select. Or perhaps keep that
> one as a separate command line option.
Yeah, the smt tri-state would need some kind of interface as well.
> > That would remove ambiguity created by combining mitigations= with
> > mitigate_* and would make it easier for all the current
> > cpu_mitigations_off() callers: only one global enable/disable interface to call instead
> > of two. Any code calling cpu_mitigations_off() should probably be calling something
> > like cpu_mitigate_attack_vector() instead.
> >
> > cpu_mitigations_off() and cpu_mitigations_auto_nosmt() could be deprecated in
> > favor of more vector-specific interfaces, and could be removed once all the arches
> > stop using them. They could be gated by a temporary
> > ARCH_USES_MITIGATION_VECTORS option. As could the per-vector cmdline
> > options.
> >
> > Thoughts?
> >
>
> I'm not sure there is really that much ambiguity...the global
> mitigations=off is the big button that disables everything.
Well sure, but that's because you already know it's the big button ;-)
If you don't know that, it's nonobvious.
Joe Admin might assume the cmdline options are processed in order, e.g:
mitigations=off mitigate_user_kernel=on
in which case they might reasonably expect to have only user->kernel
mitigations enabled. Otherwise there would be no point in combining
them in the first place.
How about negative and positive versions of each:
mitigations=[no_]user_kernel
mitigations=[no_]user_user
etc.
And then the mitigations= cmdline could simply be processed in order,
without side effects, to give the user full flexibility. To opt-in to
specific vectors:
mitigations=off mitigations=user_kernel,guest_host
which is equivalent to:
mitigations=off,user_kernel,guest_host
Or, if one prefers to opt-out:
mitigations=auto,no_user_user,no_guest_guest
where the "auto" is optional for default configs.
> I think the other issue here may be that the attack vectors are
> defined to be rather low-priority in terms of selection. That is, you
> can disable all the attack vectors but then still enable an individual
> bug fix.
>
> In other words, if you were to replace cpu_mitigations_off() with a
> function looking for all attack vectors to be off, that isn't quite
> correct because of the priority difference.
Hm. So IIUC, the priority (in descending order) is:
1) mitigations=
2) individual mitigations, e.g. spectre_v2=
3) mitigate_*=
4) defaults
That seems overly complex and nonobvious, where most of the complexity
comes from handling rarely (or never?) used edge cases.
Does the current "big button" behavior for mitigations=off even make
sense? Why would somebody do "mitigations=off spectre_v2=eibrs" and
expect the spectre_v2 mitigation to be *disabled*? Do we really think
anybody relies on that and gets the result they were expecting?
The priority could be simplified:
1) individual mitigations (=auto means use system-wide default)
2) system-wide defaults (tweaked by mitigations=/mitigate_*=)
So the system-wide defaults would be defined by mitigations=whatever,
and those can be overriden by the individual mitigations. That seems a
lot more simple and logical.
And since you're already introducing "=auto" for the individual
mitigations, I think that would be easy enough to implement.
--
Josh
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-17 23:39 ` Josh Poimboeuf
@ 2025-02-18 2:24 ` Kaplan, David
2025-02-18 7:05 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-18 2:24 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Monday, February 17, 2025 5:39 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
>
>
> On Mon, Feb 17, 2025 at 08:38:07PM +0000, Kaplan, David wrote:
> > > However... should these not actually be arch-generic options, like
> > > mitigations= already is? It would make for a more consistent user interface
> across arches.
> >
> > That's what I had in my patch series up until this one. Boris said to
> > move them to x86-specific code because nobody else is using them yet
> > and somebody down the road could move them.
>
> Ah, I guess I missed the previous versions. Sorry :-/
>
> > I do agree that they can be arch-generic (hence why I originally put
> > them in kernel/cpu.c) but I also don't know when (if ever) anyone from
> > other archs will want to pick them up.
>
> Well, for example, we already have the generic
> prctl(PR_SET_SPECULATION_CTRL) which is used by arm64. If somebody boots
> with mitigate_user_user=off on arm64, they could reasonably expect those context
> switch mitigations to be disabled.
I'm not sure which this is an argument for actually :)
That is, I do agree that mitigate_user_user makes sense in an arch-agnostic way, and a user might expect context switch mitigations to be disabled. However, this patch series doesn't implement these attack vector controls for anything other than x86. So I'm not sure whether your argument is that keeping them x86-specific is better because they aren't yet implemented for arm64, or that we should make them generic to more easily support extension to other architectures.
>
> > > They could even be integrated into the "mitigations=" interface.
> > > The options could be combined in any order (separated by commas):
> > >
> > > mitigations=user_kernel,user_user
> > > mitigations=guest_host,user_kernel
> > > ...etc...
> > >
> > > And e.g., "mitigations=off" would simply disable all the vectors.
> >
> > Hmm, that's an interesting idea. I assume that any vectors not listed
> > would be considered 'off', unless no mitigations= was specified, or
> > mitigations=auto was specified in which case they'd default to 'on'
> > like they do today.
> >
> > In other words:
> > mitigations=auto
> > => all 4 vectors are 'on'
> > mitigations=user_kernel
> > => user_kernel is 'on', all others are 'off'
> >
> > That would also imply that:
> > mitigations=user_kernel mitigations=user_user
> >
> > Would actually mean that user_user is 'on' and everything is 'off'.
> > Not sure if that's an issue or would be sufficiently obvious.
>
> Hm, so I was actually thinking that multiple "mitigations=" options would combine. I
> guess either way is confusing. The problem is that they have side effects. Another
> idea below.
Yeah I'm not a big fan of this either, since I don't think this is consistent with how other mitigation command lines are handled.
>
> > Then a question is how to integrate the mitigate_smt option we were
> > just discussing since that needs a 3-way select. Or perhaps keep that
> > one as a separate command line option.
>
> Yeah, the smt tri-state would need some kind of interface as well.
I'd probably vote for just making that a separate command line option.
>
> > > That would remove ambiguity created by combining mitigations= with
> > > mitigate_* and would make it easier for all the current
> > > cpu_mitigations_off() callers: only one global enable/disable
> > > interface to call instead of two. Any code calling
> > > cpu_mitigations_off() should probably be calling something like
> cpu_mitigate_attack_vector() instead.
> > >
> > > cpu_mitigations_off() and cpu_mitigations_auto_nosmt() could be
> > > deprecated in favor of more vector-specific interfaces, and could be
> > > removed once all the arches stop using them. They could be gated by
> > > a temporary ARCH_USES_MITIGATION_VECTORS option. As could the
> > > per-vector cmdline options.
> > >
> > > Thoughts?
> > >
> >
> > I'm not sure there is really that much ambiguity...the global
> > mitigations=off is the big button that disables everything.
>
> Well sure, but that's because you already know it's the big button ;-) If you don't
> know that, it's nonobvious.
>
> Joe Admin might assume the cmdline options are processed in order, e.g:
>
> mitigations=off mitigate_user_kernel=on
>
> in which case they might reasonably expect to have only user->kernel mitigations
> enabled. Otherwise there would be no point in combining them in the first place.
>
> How about negative and positive versions of each:
>
> mitigations=[no_]user_kernel
> mitigations=[no_]user_user
>
> etc.
>
> And then the mitigations= cmdline could simply be processed in order, without side
> effects, to give the user full flexibility. To opt-in to specific vectors:
>
> mitigations=off mitigations=user_kernel,host_guest
I don't like this idea as much as the next because, as noted above, I think with most other command lines a later version replaces an earlier one; it doesn't append to it.
That is, something like "spectre_v2=retpoline spectre_v2=ibrs" ends up just meaning ibrs.
>
> which is equivalent to:
>
> mitigations=off,user_kernel,host_guest
>
> Or, if one prefers to opt-out:
>
> mitigations=auto,no_user_user,no_guest_guest
>
> where the "auto" is optional for default configs.
This seems more appealing to me because I think it's clearer what 'on' vs 'off' is. It retains the more compact form of the command line while also allowing for opt-in or opt-out style. And if you specify multiple "mitigations=" command lines, the new one replaces the old one, like with most other options.
So I rather like this, and would be interested in what others think too.
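The opt-out form being discussed ("mitigations=auto,no_user_user,no_guest_guest", with a later mitigations= replacing an earlier one) boils down to simple left-to-right token processing. A minimal user-space sketch; the vector names and parse_mitigations() are illustrative only, not the interface the series actually implements:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical per-vector state; the real series uses different names. */
enum { USER_KERNEL, USER_USER, GUEST_HOST, GUEST_GUEST, NR_VECTORS };
static bool vector_on[NR_VECTORS];

static int token_to_vector(const char *tok)
{
	static const char * const names[NR_VECTORS] = {
		"user_kernel", "user_user", "guest_host", "guest_guest",
	};
	for (int i = 0; i < NR_VECTORS; i++)
		if (!strcmp(tok, names[i]))
			return i;
	return -1;
}

/* Process one "mitigations=" string left to right, without side effects
 * between tokens: "auto"/"off" set a baseline, "[no_]<vector>" tweaks it. */
static int parse_mitigations(char *arg)
{
	for (char *tok = strtok(arg, ","); tok; tok = strtok(NULL, ",")) {
		bool on = true;
		int v;

		if (!strcmp(tok, "auto")) {
			for (v = 0; v < NR_VECTORS; v++)
				vector_on[v] = true;
			continue;
		}
		if (!strcmp(tok, "off")) {
			for (v = 0; v < NR_VECTORS; v++)
				vector_on[v] = false;
			continue;
		}
		if (!strncmp(tok, "no_", 3)) {
			on = false;
			tok += 3;
		}
		v = token_to_vector(tok);
		if (v < 0)
			return -1;	/* unknown token */
		vector_on[v] = on;
	}
	return 0;
}
```

strtok() is used purely for brevity here; actual kernel cmdline parsing works differently.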
>
> > I think the other issue here may be that the attack vectors are
> > defined to be rather low-priority in terms of selection. That is, you
> > can disable all the attack vectors but then still enable an individual
> > bug fix.
> >
> > In other words, if you were to replace cpu_mitigations_off() with a
> > function looking for all attack vectors to be off, that isn't quite
> > correct because of the priority difference.
>
> Hm. So IIUC, the priority (in descending order) is:
>
> 1) mitigations=
>
> 2) individual mitigations, e.g. spectre_v2=
>
> 3) mitigate_*=
>
> 4) defaults
>
> That seems overly complex and nonobvious, where most of the complexity comes
> from handling rarely (or never?) used edge cases.
Your understanding is correct, although I'd argue there isn't really a case 4, since there are default settings for the attack vectors (case 3) which step in if 1) or 2) do not exist. And case 1 is only 'mitigations=off'.
Having individual mitigations override attack vectors to me makes sense because they're more specific. I think the bigger question is the big mitigations=off hammer.
>
> Does the current "big button" behavior for mitigations=off even make sense? Why
> would somebody do "mitigations=off spectre_v2=eibrs" and expect the spectre_v2
> mitigation to be *disabled*? Do we really think anybody relies on that and gets the
> result they were expecting?
A valid question, but this gets into changing existing behavior and that makes me nervous.
I can't say for certain what people do, but I could potentially imagine a system that has some custom configuration of bug options, and then for testing somebody decides to just append 'mitigations=off' at the end of the command line. Perhaps they might reasonably expect that to disable everything? And that is how it'd work today.
>
> The priority could be simplified:
>
> 1) individual mitigations (=auto means use system-wide default)
>
> 2) system-wide defaults (tweaked by mitigations=/mitigate_*=)
>
> So the system-wide defaults would be defined by mitigations=whatever, and those
> can be overridden by the individual mitigations. That seems a lot more simple and
> logical.
>
> And since you're already introducing "=auto" for the individual mitigations, I think that
> would be easy enough to implement.
>
Yes, it wouldn't be very hard, and it does make logical sense. But I think the big caveat is the change to the existing mitigations=off: it would no longer override individual mitigations. Changing that would seem to be a risk, and is that risk worth taking? It doesn't seem like it's that much harder to implement the 3-tier scheme (mitigations=off, individual, attack-vectors), especially since everything already checks for cpu_mitigations_off().
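The 3-tier priority being weighed here (mitigations=off first, then individual bug options, then the attack-vector defaults) can be expressed as one small resolution function. A sketch under the assumption that each bug tracks a tri-state individual selection; all names below are made up for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical tri-state for an individual bug option, e.g. spectre_v2=:
 * AUTO means "not specified on the cmdline". */
enum sel { SEL_AUTO, SEL_ON, SEL_OFF };

/* Tier 1: the global mitigations=off hammer.
 * Tier 2: an explicit individual selection, being more specific, wins
 *         over the vectors.
 * Tier 3: attack-vector defaults decide the rest. */
static bool mitigation_enabled(bool global_off, enum sel individual,
			       bool any_vector_needs_it)
{
	if (global_off)
		return false;
	if (individual != SEL_AUTO)
		return individual == SEL_ON;
	return any_vector_needs_it;
}
```

This encodes the behavior defended above: mitigations=off still disables everything, even an explicitly requested individual mitigation.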
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-18 2:24 ` Kaplan, David
@ 2025-02-18 7:05 ` Josh Poimboeuf
2025-02-18 8:52 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-18 7:05 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Feb 18, 2025 at 02:24:01AM +0000, Kaplan, David wrote:
> > > I do agree that they can be arch-generic (hence why I originally put
> > > them in kernel/cpu.c) but I also don't know when (if ever) anyone from
> > > other archs will want to pick them up.
> >
> > Well, for example, we already have the generic
> > prctl(PR_SET_SPECULATION_CTRL) which is used by arm64. If somebody boots
> > with mitigate_user_user=off on arm64, they could reasonably expect those context
> > switch mitigations to be disabled.
>
> I'm not sure which this is an argument for actually :)
>
> That is, I do agree that mitigate_user_user makes sense in an
> arch-agnostic way and a user might expect context switch mitigations
> disabled. However this patch series doesn't implement these attack
> vector controls for anything other than x86. So I guess I'm not sure
> if your argument is that because they aren't yet implemented for
> arm64, then keeping them x86-specific is better...or if we should make
> them generic to more easily support extension to other architectures.
IMO, make them generic from the start, then there's less churn and it's
easy to port the other arches.
If we went with putting everything in "mitigations=", making them
generic would be the obvious way to go anyway.
> > And then the mitigations= cmdline could simply be processed in order, without side
> > effects, to give the user full flexibility. To opt-in to specific vectors:
> >
> > mitigations=off mitigations=user_kernel,host_guest
>
> I don't like this idea as much as the next, because as noted above, I
> think with most other command lines, a later version replaces an
> earlier one, it doesn't append to it.
>
> That is, something like "spectre_v2=retpoline spectre_v2=ibrs" ends up just meaning ibrs.
Yeah, that makes sense to me.
> > which is equivalent to:
> >
> > mitigations=off,user_kernel,host_guest
> >
> > Or, if one prefers to opt-out:
> >
> > mitigations=auto,no_user_user,no_guest_guest
> >
> > where the "auto" is optional for default configs.
>
> This seems more appealing to me because I think it's clearer what 'on'
> vs 'off' is. It retains the more compact form of the command line
> while also allowing for opt-in or opt-out style. And if you specify
> multiple "mitigations=" command lines, the new one replaces the old
> one, like with most other options.
>
> So I rather like this, and would be interested in what others think too.
+1
> > The priority could be simplified:
> >
> > 1) individual mitigations (=auto means use system-wide default)
> >
> > 2) system-wide defaults (tweaked by mitigations=/mitigate_*=)
> >
> > So the system-wide defaults would be defined by mitigations=whatever, and those
> > can be overridden by the individual mitigations. That seems a lot more simple and
> > logical.
> >
> > And since you're already introducing "=auto" for the individual mitigations, I think that
> > would be easy enough to implement.
> >
>
> Yes it wouldn't be very hard, and it does make logical sense. But I
> think the big caveat is the change to the existing mitigations=off and
> that it would no longer override individual mitigations. Changing
> that would seem to be a risk, and is that risk worth taking?
Honestly I don't see it as much of a risk. AFAICT that behavior isn't
documented anywhere. I'd view it as more of an implementation detail.
And we have to weigh that risk against better maintainability and ease
of use going forward. These bugs are going to continue to be found
across all the major arches for a long time, let's get the interfaces
right.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-18 7:05 ` Josh Poimboeuf
@ 2025-02-18 8:52 ` Borislav Petkov
2025-02-20 22:04 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-02-18 8:52 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Kaplan, David, Thomas Gleixner, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Mon, Feb 17, 2025 at 11:05:01PM -0800, Josh Poimboeuf wrote:
> IMO, make them generic from the start, then there's less churn and it's
> easy to port the other arches.
>
> If we went with putting everything in "mitigations=", making them
> generic would be the obvious way to go anyway.
Just to make sure we're all on the same page: we obviously cannot enable
and test and support a mitigation on another arch like, say, arm64, or so.
This needs to come from the respective arch maintainers themselves and they'll
have to say, yes, pls, enable it and we'll support it. We should not go "oh,
this would be a good idea to do on all arches" without hearing from them
first, even if it is a good idea on its face.
That's why those are x86-only as they should be initially.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-18 8:52 ` Borislav Petkov
@ 2025-02-20 22:04 ` Josh Poimboeuf
2025-02-26 18:57 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-20 22:04 UTC (permalink / raw)
To: Borislav Petkov
Cc: Kaplan, David, Thomas Gleixner, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Feb 18, 2025 at 09:52:03AM +0100, Borislav Petkov wrote:
> On Mon, Feb 17, 2025 at 11:05:01PM -0800, Josh Poimboeuf wrote:
> > IMO, make them generic from the start, then there's less churn and it's
> > easy to port the other arches.
> >
> > If we went with putting everything in "mitigations=", making them
> > generic would be the obvious way to go anyway.
>
> Just to make sure we're all on the same page: we obviously cannot enable
> and test and support a mitigation on another arch like, say, arm64, or so.
>
> This needs to come from the respective arch maintainers themselves and they'll
> have to say, yes, pls, enable it and we'll support it. We should not go "oh,
> this would be a good idea to do on all arches" without hearing from them
> first, even if it is a good idea on its face.
>
> That's why those are x86-only as they should be initially.
I wasn't suggesting that this patch set should *enable* it on all
arches. Of course that would need to be reviewed by the respective arch
maintainers.
But looking ahead, this *will* be needed for the other arches, for the
same reason we have a generic mitigations=off. It's a user problem, not
an arch-specific one. Users need a simple interface that works
everywhere. That's why I suggested integrating it into "mitigations=".
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-20 22:04 ` Josh Poimboeuf
@ 2025-02-26 18:57 ` Kaplan, David
2025-02-26 20:14 ` Pawan Gupta
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-26 18:57 UTC (permalink / raw)
To: Josh Poimboeuf, Borislav Petkov
Cc: Thomas Gleixner, Peter Zijlstra, Pawan Gupta, Ingo Molnar,
Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Thursday, February 20, 2025 4:05 PM
> To: Borislav Petkov <bp@alien8.de>
> Cc: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Tue, Feb 18, 2025 at 09:52:03AM +0100, Borislav Petkov wrote:
> > On Mon, Feb 17, 2025 at 11:05:01PM -0800, Josh Poimboeuf wrote:
> > > IMO, make them generic from the start, then there's less churn and
> > > it's easy to port the other arches.
> > >
> > > If we went with putting everything in "mitigations=", making them
> > > generic would be the obvious way to go anyway.
> >
> > Just to make sure we're all on the same page: we obviously cannot
> > enable and test and support a mitigation on another arch like, say, arm64, or so.
> >
> > This needs to come from the respective arch maintainers themselves and
> > they'll have to say, yes, pls, enable it and we'll support it. We
> > should not go "oh, this would be a good idea to do on all arches"
> > without hearing from them first, even if it is a good idea on its face.
> >
> > That's why those are x86-only as they should be initially.
>
> I wasn't suggesting that this patch set should *enable* it on all arches. Of course
> that would need to be reviewed by the respective arch maintainers.
>
> But looking ahead, this *will* be needed for the other arches, for the same reason
> we have a generic mitigations=off. It's a user problem, not an arch-specific one.
> Users need a simple interface that works everywhere. That's why I suggested
> integrating it into "mitigations=".
>
Talked with Boris on the side; he is ok with supporting this in mitigations=, with a warning message if you try to use these controls on yet-unsupported architectures.
Going back to the command line definition, I think that to help make the selection clearer we could consider the following format:
mitigations=[on/off],[attack vectors]
For example:
"mitigations=on,no_user_kernel" to enable all attack vectors except user->kernel
"mitigations=off,guest_host" to disable all vectors except guest->host
Requiring either 'on' or 'off' first makes it more obvious what the default would be. My concern is that something like 'mitigations=no_user_kernel' doesn't immediately make it clear that other mitigations are going to be enabled. If the correct format is not followed, the kernel can print a warning and just fall back to the defaults.
This format would only be required if you're going to use attack vector controls, of course.
Thoughts?
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 18:57 ` Kaplan, David
@ 2025-02-26 20:14 ` Pawan Gupta
2025-02-26 21:01 ` Borislav Petkov
2025-02-26 21:03 ` Kaplan, David
0 siblings, 2 replies; 138+ messages in thread
From: Pawan Gupta @ 2025-02-26 20:14 UTC (permalink / raw)
To: Kaplan, David
Cc: Josh Poimboeuf, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 06:57:05PM +0000, Kaplan, David wrote:
>
> > -----Original Message-----
> > From: Josh Poimboeuf <jpoimboe@kernel.org>
> > Sent: Thursday, February 20, 2025 4:05 PM
> > To: Borislav Petkov <bp@alien8.de>
> > Cc: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> > <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Pawan Gupta
> > <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> > Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> > <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
> >
> >
> >
> > On Tue, Feb 18, 2025 at 09:52:03AM +0100, Borislav Petkov wrote:
> > > On Mon, Feb 17, 2025 at 11:05:01PM -0800, Josh Poimboeuf wrote:
> > > > IMO, make them generic from the start, then there's less churn and
> > > > it's easy to port the other arches.
> > > >
> > > > If we went with putting everything in "mitigations=", making them
> > > > generic would be the obvious way to go anyway.
> > >
> > > Just to make sure we're all on the same page: we obviously cannot
> > > enable and test and support a mitigation on another arch like, say, arm64, or so.
> > >
> > > This needs to come from the respective arch maintainers themselves and
> > > they'll have to say, yes, pls, enable it and we'll support it. We
> > > should not go "oh, this would be a good idea to do on all arches"
> > > without hearing from them first, even if it is a good idea on its face.
> > >
> > > That's why those are x86-only as they should be initially.
> >
> > I wasn't suggesting that this patch set should *enable* it on all arches. Of course
> > that would need to be reviewed by the respective arch maintainers.
> >
> > But looking ahead, this *will* be needed for the other arches, for the same reason
> > we have a generic mitigations=off. It's a user problem, not an arch-specific one.
> > Users need a simple interface that works everywhere. That's why I suggested
> > integrating it into "mitigations=".
> >
>
> Talked with Boris on the side, he is ok with supporting this in mitigations=, with a warning message if you try to use these controls on yet-unsupported architectures.
>
> Going back to the command line definition, I think that to help make the selection clearer we could consider the following format:
>
> mitigations=[on/off],[attack vectors]
>
> For example:
>
> "mitigations=on,no_user_kernel" to enable all attack vectors except user->kernel
> "mitigations=off,guest_host" to disable all vectors except guest->host
This is a bit ambiguous, mitigations=off,guest_host could be interpreted as
disabling guest->host and enabling all others. Using attack vectors with
both =on and =off seems unnecessary.
Also, we currently don't have a mitigations=on option; its equivalent is =auto.
static int __init mitigations_parse_cmdline(char *arg)
{
	if (!strcmp(arg, "off"))
		cpu_mitigations = CPU_MITIGATIONS_OFF;
	else if (!strcmp(arg, "auto"))
		cpu_mitigations = CPU_MITIGATIONS_AUTO;
	else if (!strcmp(arg, "auto,nosmt"))
		cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
	else
		pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n",
			arg);

	return 0;
}
Extending =auto to take attack vectors is going to be tricky, because it
already takes ",nosmt" which would conflict with ",no_cross_thread".
How about we keep =off, and =auto as is, and add:
mitigations=selective,no_user_kernel,no_cross_thread,...
This requires the user to explicitly name the attack vectors to disable (rather than those to enable). It is more verbose, but the opt-out is unambiguous. Also, if a new attack vector gets added in the future, it would be mitigated by default, without requiring the world to change their cmdline.
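The "selective" form could be parsed roughly as below; a hedged sketch where the vector list and function name are invented for illustration. Any token that is not a "no_" opt-out is rejected, and vectors not mentioned stay mitigated, so a future vector is covered without cmdline changes:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative vector set; a newly added vector (appended here) would be
 * mitigated by default on existing cmdlines. */
enum { USER_KERNEL, USER_USER, GUEST_HOST, GUEST_GUEST,
       CROSS_THREAD, NR_VECTORS };
static const char * const vector_names[NR_VECTORS] = {
	"user_kernel", "user_user", "guest_host", "guest_guest",
	"cross_thread",
};
static bool vector_on[NR_VECTORS];

static int parse_selective(char *arg)
{
	char *tok = strtok(arg, ",");
	int i;

	if (!tok || strcmp(tok, "selective"))
		return -1;

	/* Everything defaults to mitigated; only opt-outs follow. */
	for (i = 0; i < NR_VECTORS; i++)
		vector_on[i] = true;

	while ((tok = strtok(NULL, ","))) {
		if (strncmp(tok, "no_", 3))
			return -1;	/* only "no_<vector>" tokens allowed */
		for (i = 0; i < NR_VECTORS; i++)
			if (!strcmp(tok + 3, vector_names[i]))
				break;
		if (i == NR_VECTORS)
			return -1;	/* unknown vector */
		vector_on[i] = false;
	}
	return 0;
}
```

On a parse failure the caller would warn and keep the defaults, matching the fallback behavior discussed earlier in the thread.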
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 20:14 ` Pawan Gupta
@ 2025-02-26 21:01 ` Borislav Petkov
2025-02-26 21:51 ` Pawan Gupta
2025-02-26 21:03 ` Kaplan, David
1 sibling, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-02-26 21:01 UTC (permalink / raw)
To: Pawan Gupta
Cc: Kaplan, David, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 12:14:53PM -0800, Pawan Gupta wrote:
> This is a bit ambiguous, mitigations=off,guest_host could be interpreted as
> disabling guest->host and enabling all others. Using attack vectors with
> both =on and =off seems unnecessary.
No, you'll have
mitigations=[global],[separate_vector(s)]
so global can be "on", "off", "auto" and the separate vector enables only that
specific one.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 21:01 ` Borislav Petkov
@ 2025-02-26 21:51 ` Pawan Gupta
2025-02-27 13:39 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Pawan Gupta @ 2025-02-26 21:51 UTC (permalink / raw)
To: Borislav Petkov
Cc: Kaplan, David, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 10:01:29PM +0100, Borislav Petkov wrote:
> On Wed, Feb 26, 2025 at 12:14:53PM -0800, Pawan Gupta wrote:
> > This is a bit ambiguous, mitigations=off,guest_host could be interpreted as
> > disabling guest->host and enabling all others. Using attack vectors with
> > both =on and =off seems unnecessary.
>
> No, you'll have
>
> mitigations=[global],[separate_vector(s)]
>
> so global can be "on", "off", "auto" and the separate vector enables only that
> specific one.
I got that part, what I meant was allowing to use =off,<enabled vectors>
seems unnecessary when the same can be achieved by =on,<disabled vectors>.
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 21:51 ` Pawan Gupta
@ 2025-02-27 13:39 ` Borislav Petkov
0 siblings, 0 replies; 138+ messages in thread
From: Borislav Petkov @ 2025-02-27 13:39 UTC (permalink / raw)
To: Pawan Gupta
Cc: Kaplan, David, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 01:51:47PM -0800, Pawan Gupta wrote:
> I got that part, what I meant was allowing to use =off,<enabled vectors>
> seems unnecessary when the same can be achieved by =on,<disabled vectors>.
I think Josh was the one on that thread who suggested that we should have
negative options. You can go read upthread.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 20:14 ` Pawan Gupta
2025-02-26 21:01 ` Borislav Petkov
@ 2025-02-26 21:03 ` Kaplan, David
2025-02-26 22:13 ` Pawan Gupta
1 sibling, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-26 21:03 UTC (permalink / raw)
To: Pawan Gupta
Cc: Josh Poimboeuf, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Wednesday, February 26, 2025 2:15 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Josh Poimboeuf <jpoimboe@kernel.org>; Borislav Petkov <bp@alien8.de>;
> Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
>
>
> On Wed, Feb 26, 2025 at 06:57:05PM +0000, Kaplan, David wrote:
> >
> > > -----Original Message-----
> > > From: Josh Poimboeuf <jpoimboe@kernel.org>
> > > Sent: Thursday, February 20, 2025 4:05 PM
> > > To: Borislav Petkov <bp@alien8.de>
> > > Cc: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> > > <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Pawan
> > > Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar
> > > <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> > > x86@kernel.org; H . Peter Anvin <hpa@zytor.com>;
> > > linux-kernel@vger.kernel.org
> > > Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
> > >
> > >
> > >
> > > On Tue, Feb 18, 2025 at 09:52:03AM +0100, Borislav Petkov wrote:
> > > > On Mon, Feb 17, 2025 at 11:05:01PM -0800, Josh Poimboeuf wrote:
> > > > > IMO, make them generic from the start, then there's less churn
> > > > > and it's easy to port the other arches.
> > > > >
> > > > > If we went with putting everything in "mitigations=", making
> > > > > them generic would be the obvious way to go anyway.
> > > >
> > > > Just to make sure we're all on the same page: we obviously cannot
> > > > enable and test and support a mitigation on another arch like, say, arm64, or
> so.
> > > >
> > > > This needs to come from the respective arch maintainers themselves
> > > > and they'll have to say, yes, pls, enable it and we'll support it.
> > > > We should not go "oh, this would be a good idea to do on all arches"
> > > > without hearing from them first, even if it is a good idea on its face.
> > > >
> > > > That's why those are x86-only as they should be initially.
> > >
> > > I wasn't suggesting that this patch set should *enable* it on all
> > > arches. Of course that would need to be reviewed by the respective arch
> maintainers.
> > >
> > > But looking ahead, this *will* be needed for the other arches, for
> > > the same reason we have a generic mitigations=off. It's a user problem, not an
> arch-specific one.
> > > Users need a simple interface that works everywhere. That's why I
> > > suggested integrating it into "mitigations=".
> > >
> >
> > Talked with Boris on the side, he is ok with supporting this in mitigations=, with a
> warning message if you try to use these controls on yet-unsupported architectures.
> >
> > Going back to the command line definition, I think that to help make the selection
> clearer we could consider the following format:
> >
> > mitigations=[on/off],[attack vectors]
> >
> > For example:
> >
> > "mitigations=on,no_user_kernel" to enable all attack vectors except
> > user->kernel "mitigations=off,guest_host" to disable all vectors
> > except guest->host
>
> This is a bit ambiguous, mitigations=off,guest_host could be interpreted as disabling
> guest->host and enabling all others. Using attack vectors with both =on and =off
> seems unnecessary.
>
> Also, we currently don't have a mitigations=on option; its equivalent is =auto.
>
> static int __init mitigations_parse_cmdline(char *arg)
> {
> 	if (!strcmp(arg, "off"))
> 		cpu_mitigations = CPU_MITIGATIONS_OFF;
> 	else if (!strcmp(arg, "auto"))
> 		cpu_mitigations = CPU_MITIGATIONS_AUTO;
> 	else if (!strcmp(arg, "auto,nosmt"))
> 		cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
> 	else
> 		pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n",
> 			arg);
>
> 	return 0;
> }
>
> Extending =auto to take attack vectors is going to be tricky, because it already
> takes ",nosmt" which would conflict with ",no_cross_thread".
>
> How about we keep =off, and =auto as is, and add:
>
> mitigations=selective,no_user_kernel,no_cross_thread,...
>
> Requiring the user to explicitly select attack vectors to disable (rather than to
> enable). This would be more verbose, but it would be clear that the user is explicitly
> selecting attack vectors to disable. Also, if a new attack vector gets added in future,
> it would be mitigated by default, without requiring the world to change their cmdline.
I kind of like that.
Note that for the SMT stuff, my new plan had been to use a separate option 'mitigate_smt' which will be on/off/auto.
But we could also combine that with mitigations=selective perhaps with tokens like 'mitigate_smt' (enable all relevant SMT mitigations including disabling SMT if needed) or 'no_mitigate_smt' (do not enable any SMT mitigation). If no token is specified, then we'd default to the behavior today where SMT won't be disabled but other mitigations get applied. Then everything is in one option.
If we like that, then a related question then, do we agree that 'mitigations=off' should be equivalent to 'mitigations=selective,no_user_kernel,no_user_user,no_guest_host,no_guest_guest,no_mitigate_smt'?
If so, and we're ok with individual bug cmdlines overriding this, then I think we can get rid of cpu_mitigations_off() and just rely on the attack vectors as Josh was suggesting.
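The equivalence proposed here (mitigations=off meaning "every attack vector disabled") would let callers of cpu_mitigations_off() query vectors instead. A toy sketch of that relationship; cpu_mitigate_attack_vector() is the name floated earlier in the thread, the rest of the names are invented:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical vector state, including the SMT tri-state collapsed to a
 * boolean for brevity. */
enum { USER_KERNEL, USER_USER, GUEST_HOST, GUEST_GUEST, SMT, NR_VECTORS };
static bool vector_on[NR_VECTORS];

/* "mitigations=off" becomes nothing more than clearing every vector... */
static void mitigations_off(void)
{
	for (int i = 0; i < NR_VECTORS; i++)
		vector_on[i] = false;
}

/* ...and per-vector queries replace the old global check. */
static bool cpu_mitigate_attack_vector(int v)
{
	return vector_on[v];
}

/* The legacy cpu_mitigations_off() semantics stay derivable: */
static bool all_vectors_off(void)
{
	for (int i = 0; i < NR_VECTORS; i++)
		if (vector_on[i])
			return false;
	return true;
}
```

Under this scheme, code that today calls cpu_mitigations_off() would instead ask about the specific vector it protects against.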
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 21:03 ` Kaplan, David
@ 2025-02-26 22:13 ` Pawan Gupta
2025-02-26 22:18 ` Kaplan, David
2025-02-26 23:44 ` Josh Poimboeuf
0 siblings, 2 replies; 138+ messages in thread
From: Pawan Gupta @ 2025-02-26 22:13 UTC (permalink / raw)
To: Kaplan, David
Cc: Josh Poimboeuf, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 09:03:58PM +0000, Kaplan, David wrote:
>
> > -----Original Message-----
> > From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> > Sent: Wednesday, February 26, 2025 2:15 PM
> > To: Kaplan, David <David.Kaplan@amd.com>
> > Cc: Josh Poimboeuf <jpoimboe@kernel.org>; Borislav Petkov <bp@alien8.de>;
> > Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Ingo
> > Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> > x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
> >
> >
> >
> > On Wed, Feb 26, 2025 at 06:57:05PM +0000, Kaplan, David wrote:
> > >
> > > > -----Original Message-----
> > > > From: Josh Poimboeuf <jpoimboe@kernel.org>
> > > > Sent: Thursday, February 20, 2025 4:05 PM
> > > > To: Borislav Petkov <bp@alien8.de>
> > > > Cc: Kaplan, David <David.Kaplan@amd.com>; Thomas Gleixner
> > > > <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Pawan
> > > > Gupta <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar
> > > > <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> > > > x86@kernel.org; H . Peter Anvin <hpa@zytor.com>;
> > > > linux-kernel@vger.kernel.org
> > > > Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
> > > >
> > > >
> > > >
> > > > On Tue, Feb 18, 2025 at 09:52:03AM +0100, Borislav Petkov wrote:
> > > > > On Mon, Feb 17, 2025 at 11:05:01PM -0800, Josh Poimboeuf wrote:
> > > > > > IMO, make them generic from the start, then there's less churn
> > > > > > and it's easy to port the other arches.
> > > > > >
> > > > > > If we went with putting everything in "mitigations=", making
> > > > > > them generic would be the obvious way to go anyway.
> > > > >
> > > > > Just to make sure we're all on the same page: we obviously cannot
> > > > > enable and test and support a mitigation on another arch like, say,
> > > > > arm64, or so.
> > > > >
> > > > > This needs to come from the respective arch maintainers themselves
> > > > > and they'll have to say, yes, pls, enable it and we'll support it.
> > > > > We should not go "oh, this would be a good idea to do on all arches"
> > > > > without hearing from them first, even if it is a good idea on its face.
> > > > >
> > > > > That's why those are x86-only as they should be initially.
> > > >
> > > > I wasn't suggesting that this patch set should *enable* it on all
> > > > arches. Of course that would need to be reviewed by the respective arch
> > maintainers.
> > > >
> > > > But looking ahead, this *will* be needed for the other arches, for
> > > > the same reason we have a generic mitigations=off. It's a user problem, not an
> > arch-specific one.
> > > > Users need a simple interface that works everywhere. That's why I
> > > > suggested integrating it into "mitigations=".
> > > >
> > >
> > > Talked with Boris on the side, he is ok with supporting this in mitigations=, with a
> > warning message if you try to use these controls on yet-unsupported architectures.
> > >
> > > Going back to the command line definition, I think that to help make the selection
> > clearer we could consider the following format:
> > >
> > > mitigations=[on/off],[attack vectors]
> > >
> > > For example:
> > >
> > > "mitigations=on,no_user_kernel" to enable all attack vectors except
> > > user->kernel
> > > "mitigations=off,guest_host" to disable all vectors except guest->host
> >
> > This is a bit ambiguous, mitigations=off,guest_host could be interpreted as disabling
> > guest->host and enabling all others. Using attack vectors with both =on and =off
> > seems unnecessary.
> >
> > Also, we currently don't have a mitigations=on option; its equivalent is =auto.
> >
> > static int __init mitigations_parse_cmdline(char *arg) {
> > if (!strcmp(arg, "off"))
> > cpu_mitigations = CPU_MITIGATIONS_OFF;
> > else if (!strcmp(arg, "auto"))
> > cpu_mitigations = CPU_MITIGATIONS_AUTO;
> > else if (!strcmp(arg, "auto,nosmt"))
> > cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
> > else
> > pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n",
> > arg);
> >
> > return 0;
> > }
> >
> > Extending =auto to take attack vectors is going to be tricky, because it already
> > takes ",nosmt" which would conflict with ",no_cross_thread".
> >
> > How about we keep =off, and =auto as is, and add:
> >
> > mitigations=selective,no_user_kernel,no_cross_thread,...
> >
> > Requiring the user to explicitly select attack vectors to disable (rather than to
> > enable). This would be more verbose, but it would be clear that the user is explicitly
> > selecting attack vectors to disable. Also, if a new attack vector gets added in future,
> > it would be mitigated by default, without requiring the world to change their cmdline.
>
> I kind of like that.
>
> Note that for the SMT stuff, my new plan had been to use a separate
> option 'mitigate_smt' which will be on/off/auto.
I would avoid that, because we can't drop support for
"mitigations=auto,nosmt" and we also have a separate cmdline parameter
"nosmt":
nosmt [KNL,MIPS,PPC,S390,EARLY] Disable symmetric multithreading (SMT).
Equivalent to smt=1.
[KNL,X86,PPC] Disable symmetric multithreading (SMT).
nosmt=force: Force disable SMT, cannot be undone
via the sysfs control file.
> But we could also combine that with mitigations=selective perhaps with
> tokens like 'mitigate_smt' (enable all relevant SMT mitigations including
> disabling SMT if needed) or 'no_mitigate_smt' (do not enable any SMT
> mitigation). If no token is specified, then we'd default to the behavior
> today where SMT won't be disabled but other mitigations get applied.
> Then everything is in one option.
Agree.
> If we like that, then a related question then, do we agree that
> 'mitigations=off' should be equivalent to
> 'mitigations=selective,no_user_kernel,no_user_user,no_guest_host,no_guest_guest,no_mitigate_smt'?
>
> If so, and we're ok with individual bug cmdlines overriding this, then I
> think we can get rid of cpu_mitigations_off() and just rely on the attack
> vectors as Josh was suggesting.
Does that mean to stop supporting "mitigations=off"?
^ permalink raw reply	[flat|nested] 138+ messages in thread
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 22:13 ` Pawan Gupta
@ 2025-02-26 22:18 ` Kaplan, David
2025-02-26 22:34 ` Pawan Gupta
2025-02-26 23:44 ` Josh Poimboeuf
1 sibling, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-26 22:18 UTC (permalink / raw)
To: Pawan Gupta
Cc: Josh Poimboeuf, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Wednesday, February 26, 2025 4:13 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Josh Poimboeuf <jpoimboe@kernel.org>; Borislav Petkov <bp@alien8.de>;
> Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra <peterz@infradead.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
>
> > But we could also combine that with mitigations=selective perhaps with
> > tokens like 'mitigate_smt' (enable all relevant SMT mitigations
> > including disabling SMT if needed) or 'no_mitigate_smt' (do not enable
> > any SMT mitigation). If no token is specified, then we'd default to
> > the behavior today where SMT won't be disabled but other mitigations get applied.
> > Then everything is in one option.
>
> Agree.
>
> > If we like that, then a related question then, do we agree that
> > 'mitigations=off' should be equivalent to
> >
> 'mitigations=selective,no_user_kernel,no_user_user,no_guest_host,no_guest_gues
> t,no_mitigate_smt'?
> >
> > If so, and we're ok with individual bug cmdlines overriding this, then
> > I think we can get rid of cpu_mitigations_off() and just rely on the
> > attack vectors as Josh was suggesting.
>
> Does that mean to stop supporting "mitigations=off"?
No. I'm saying that mitigations=off would be equivalent to the above
command line. The <vuln>_select_mitigation() functions wouldn't have to
call cpu_mitigations_off() anymore, they'd just naturally choose no
mitigation because no attack vectors would be selected.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 22:18 ` Kaplan, David
@ 2025-02-26 22:34 ` Pawan Gupta
0 siblings, 0 replies; 138+ messages in thread
From: Pawan Gupta @ 2025-02-26 22:34 UTC (permalink / raw)
To: Kaplan, David
Cc: Josh Poimboeuf, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 10:18:51PM +0000, Kaplan, David wrote:
> No. I'm saying that mitigations=off would be equivalent to the above
> command line. The <vuln>_select_mitigation() functions wouldn't have to
> call cpu_mitigations_off() anymore, they'd just naturally choose no
> mitigation because no attack vectors would be selected.
Ohk, thanks for the clarification.
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 22:13 ` Pawan Gupta
2025-02-26 22:18 ` Kaplan, David
@ 2025-02-26 23:44 ` Josh Poimboeuf
2025-02-27 0:35 ` Pawan Gupta
1 sibling, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-26 23:44 UTC (permalink / raw)
To: Pawan Gupta
Cc: Kaplan, David, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 02:13:24PM -0800, Pawan Gupta wrote:
> On Wed, Feb 26, 2025 at 09:03:58PM +0000, Kaplan, David wrote:
> > > Extending =auto to take attack vectors is going to be tricky, because it already
> > > takes ",nosmt" which would conflict with ",no_cross_thread".
> > >
> > > How about we keep =off, and =auto as is, and add:
> > >
> > > mitigations=selective,no_user_kernel,no_cross_thread,...
> > >
> > > Requiring the user to explicitly select attack vectors to disable (rather than to
> > > enable). This would be more verbose, but it would be clear that the user is explicitly
> > > selecting attack vectors to disable. Also, if a new attack vector gets added in future,
> > > it would be mitigated by default, without requiring the world to change their cmdline.
> >
> > I kind of like that.
While it might be true that we don't necessarily need both opt-in and
opt-out options...
I'm missing the point of the "selective" thing vs just
"auto,no_whatever"?
> > Note that for the SMT stuff, my new plan had been to use a separate
> > option 'mitigate_smt' which will be on/off/auto.
>
> I would avoid that, because we can't drop support for
> "mitigations=auto,nosmt"
We wouldn't have to drop support for that... If there's a conflict
between the two options then just print a warning and pick one.
> and we also have a separate cmdline parameter
> "nosmt":
>
> nosmt [KNL,MIPS,PPC,S390,EARLY] Disable symmetric multithreading (SMT).
> Equivalent to smt=1.
>
> [KNL,X86,PPC] Disable symmetric multithreading (SMT).
> nosmt=force: Force disable SMT, cannot be undone
> via the sysfs control file.
The separate 'nosmt' option is orthogonal to the mitigation stuff. If
it disables SMT then there are no cross-thread mitigations to do in the
first place.
> > But we could also combine that with mitigations=selective perhaps with
> > tokens like 'mitigate_smt' (enable all relevant SMT mitigations including
> > disabling SMT if needed) or 'no_mitigate_smt' (do not enable any SMT
> > mitigation). If no token is specified, then we'd default to the behavior
> > today where SMT won't be disabled but other mitigations get applied.
> > Then everything is in one option.
>
> Agree.
I think that's *way* too subtle. It's completely unlike the other
options in that it's not a binary opt-out. And it sneakily obfuscates
the mitigate_smt tristate (with the third state being the unspecified
default).
However, one of those three states is already represented by
'auto,nosmt'. Why not just piggyback on that by allowing the vectors to
be combined with it? Then we only need two more states, which could be
represented with e.g., "[no_]cross_thread".
For example, to disable SMT (if needed), along with disabling of
vectors:
mitigations=auto,nosmt,no_user_kernel,etc
Or to disable all SMT mitigations (e.g., because the user is doing core
scheduling):
mitigations=auto,no_cross_thread,etc
And combining 'auto,nosmt' with 'no_cross_thread' is nonsensical, in
which case it could just pick the former and print a warning.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-26 23:44 ` Josh Poimboeuf
@ 2025-02-27 0:35 ` Pawan Gupta
2025-02-27 1:23 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Pawan Gupta @ 2025-02-27 0:35 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Kaplan, David, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 03:44:40PM -0800, Josh Poimboeuf wrote:
> On Wed, Feb 26, 2025 at 02:13:24PM -0800, Pawan Gupta wrote:
> > On Wed, Feb 26, 2025 at 09:03:58PM +0000, Kaplan, David wrote:
> > > > Extending =auto to take attack vectors is going to be tricky, because it already
> > > > takes ",nosmt" which would conflict with ",no_cross_thread".
> > > >
> > > > How about we keep =off, and =auto as is, and add:
> > > >
> > > > mitigations=selective,no_user_kernel,no_cross_thread,...
> > > >
> > > > Requiring the user to explicitly select attack vectors to disable (rather than to
> > > > enable). This would be more verbose, but it would be clear that the user is explicitly
> > > > selecting attack vectors to disable. Also, if a new attack vector gets added in future,
> > > > it would be mitigated by default, without requiring the world to change their cmdline.
> > >
> > > I kind of like that.
>
> While it might be true that we don't necessarily need both opt-in and
> opt-out options...
>
> I'm missing the point of the "selective" thing vs just
> "auto,no_whatever"?
That was my first thought, but then I realized that in "auto,nosmt" nosmt
is the opposite of disabling the mitigation. It would be cleaner to have
"=selective,no_whatever" which is self-explanatory.
> > > Note that for the SMT stuff, my new plan had been to use a separate
> > > option 'mitigate_smt' which will be on/off/auto.
> >
> > I would avoid that, because we can't drop support for
> > "mitigations=auto,nosmt"
>
> We wouldn't have to drop support for that... If there's a conflict
> between the two options then just print a warning and pick one.
Introducing one more option for smt seems unnecessary. We already have
auto,nosmt and nosmt. Trying to guess which one takes precedence would
be confusing.
> > and we also have a separate cmdline parameter
> > "nosmt":
> >
> > nosmt [KNL,MIPS,PPC,S390,EARLY] Disable symmetric multithreading (SMT).
> > Equivalent to smt=1.
> >
> > [KNL,X86,PPC] Disable symmetric multithreading (SMT).
> > nosmt=force: Force disable SMT, cannot be undone
> > via the sysfs control file.
>
> The separate 'nosmt' option is orthogonal to the mitigation stuff. If
> it disables SMT then there are no cross-thread mitigations to do in the
> first place.
Right.
> > > But we could also combine that with mitigations=selective perhaps with
> > > tokens like 'mitigate_smt' (enable all relevant SMT mitigations including
> > > disabling SMT if needed) or 'no_mitigate_smt' (do not enable any SMT
> > > mitigation). If no token is specified, then we'd default to the behavior
> > > today where SMT won't be disabled but other mitigations get applied.
> > > Then everything is in one option.
> >
> > Agree.
>
> I think that's *way* too subtle. It's completely unlike the other
> options in that it's not a binary opt-out. And it sneakily obfuscates
> the mitigate_smt tristate (with the third state being the unspecified
> default).
>
> However, one of those three states is already represented by
> 'auto,nosmt'. Why not just piggyback on that by allowing the vectors to
> be combined with it? Then we only need two more states, which could be
> represented with e.g., "[no_]cross_thread".
>
> For example, to disable SMT (if needed), along with disabling of
> vectors:
>
> mitigations=auto,nosmt,no_user_kernel,etc
>
> Or to disable all SMT mitigations (e.g., because the user is doing core
> scheduling):
>
> mitigations=auto,no_cross_thread,etc
>
> And combining 'auto,nosmt' with 'no_cross_thread' is nonsensical, in
> which case it could just pick the former and print a warning.
That seems reasonable. The only thing is that now we are mixing enabling
and disabling mitigations in the attack vector list. And that is probably
better than having a separate parameter.
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 0:35 ` Pawan Gupta
@ 2025-02-27 1:23 ` Josh Poimboeuf
2025-02-27 3:50 ` Pawan Gupta
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-27 1:23 UTC (permalink / raw)
To: Pawan Gupta
Cc: Kaplan, David, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 04:35:28PM -0800, Pawan Gupta wrote:
> On Wed, Feb 26, 2025 at 03:44:40PM -0800, Josh Poimboeuf wrote:
> > On Wed, Feb 26, 2025 at 02:13:24PM -0800, Pawan Gupta wrote:
> > > On Wed, Feb 26, 2025 at 09:03:58PM +0000, Kaplan, David wrote:
> > > > > Extending =auto to take attack vectors is going to be tricky, because it already
> > > > > takes ",nosmt" which would conflict with ",no_cross_thread".
> > > > >
> > > > > How about we keep =off, and =auto as is, and add:
> > > > >
> > > > > mitigations=selective,no_user_kernel,no_cross_thread,...
> > > > >
> > > > > Requiring the user to explicitly select attack vectors to disable (rather than to
> > > > > enable). This would be more verbose, but it would be clear that the user is explicitly
> > > > > selecting attack vectors to disable. Also, if a new attack vector gets added in future,
> > > > > it would be mitigated by default, without requiring the world to change their cmdline.
> > > >
> > > > I kind of like that.
> >
> > While it might be true that we don't necessarily need both opt-in and
> > opt-out options...
> >
> > I'm missing the point of the "selective" thing vs just
> > "auto,no_whatever"?
>
> That was my first thought, but then I realized that in "auto,nosmt" nosmt
> is the opposite of disabling the mitigation. It would be cleaner to have
> "=selective,no_whatever" which is self-explanatory.
The "auto,nosmt,no_whatever" is indeed a bit confusing because of the
opposite meanings of the word "no", but at least it sort of makes some
kind of sense if you consider the existing "auto,nosmt" option to be the
starting point.
And we could document it from that perspective: start with "auto" or
"auto,nosmt" and then optionally append the ",no_*" options for the vectors
you want to disable.
IMO "selective" doesn't seem very self-explanatory, it says nothing to
indicate "opting out of defaults", in fact it sounds to me more like
opting in. At least with "auto,no_whatever" it's more clear that it
starts with the defaults and subtracts from there.
--
Josh
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 1:23 ` Josh Poimboeuf
@ 2025-02-27 3:50 ` Pawan Gupta
2025-02-27 14:08 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Pawan Gupta @ 2025-02-27 3:50 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Kaplan, David, Borislav Petkov, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 05:23:29PM -0800, Josh Poimboeuf wrote:
> On Wed, Feb 26, 2025 at 04:35:28PM -0800, Pawan Gupta wrote:
> > On Wed, Feb 26, 2025 at 03:44:40PM -0800, Josh Poimboeuf wrote:
> > > On Wed, Feb 26, 2025 at 02:13:24PM -0800, Pawan Gupta wrote:
> > > > On Wed, Feb 26, 2025 at 09:03:58PM +0000, Kaplan, David wrote:
> > > > > > Extending =auto to take attack vectors is going to be tricky, because it already
> > > > > > takes ",nosmt" which would conflict with ",no_cross_thread".
> > > > > >
> > > > > > How about we keep =off, and =auto as is, and add:
> > > > > >
> > > > > > mitigations=selective,no_user_kernel,no_cross_thread,...
> > > > > >
> > > > > > Requiring the user to explicitly select attack vectors to disable (rather than to
> > > > > > enable). This would be more verbose, but it would be clear that the user is explicitly
> > > > > > selecting attack vectors to disable. Also, if a new attack vector gets added in future,
> > > > > > it would be mitigated by default, without requiring the world to change their cmdline.
> > > > >
> > > > > I kind of like that.
> > >
> > > While it might be true that we don't necessarily need both opt-in and
> > > opt-out options...
> > >
> > > I'm missing the point of the "selective" thing vs just
> > > "auto,no_whatever"?
> >
> > That was my first thought, but then I realized that in "auto,nosmt" nosmt
> > is the opposite of disabling the mitigation. It would be cleaner to have
> > "=selective,no_whatever" which is self-explanatory.
>
> The "auto,nosmt,no_whatever" is indeed a bit confusing because of the
> opposite meanings of the word "no", but at least it sort of makes some
> kind of sense if you consider the existing "auto,nosmt" option to be the
> starting point.
>
> And we could document it from that perspective: start with "auto" or
> "auto,smt" and then optionally append the ",no_*" options for the vectors
> you want to disable.
>
> IMO "selective" doesn't seem very self-explanatory, it says nothing to
> indicate "opting out of defaults", in fact it sounds to me more like
> opting in. At least with "auto,no_whatever" it's more clear that it
> starts with the defaults and subtracts from there.
That's fair. I certainly don't want to be adding a new option if we are
willing to live with some minor quirks with auto,nosmt.
Like, should the order in which nosmt appears after =auto matter? IOW,
"=auto,no_foo,nosmt" would be equivalent to "=auto,nosmt,no_foo"? I believe
they should be treated the same.
Arches that don't support attack vectors, but do support SMT, should treat
"=auto,no_foo,nosmt" as "=auto,nosmt".
So as to treat nosmt as any other attack vector, CPU_MITIGATIONS_AUTO_NOSMT
should go away. I am thinking we can modify cpu_mitigations_auto_nosmt() to
check for smt attack vector:
---
diff --git a/kernel/cpu.c b/kernel/cpu.c
index b605334f8ee6..6ddbee6a0b6b 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -3193,22 +3193,27 @@ void __init boot_cpu_hotplug_init(void)
enum cpu_mitigations {
CPU_MITIGATIONS_OFF,
CPU_MITIGATIONS_AUTO,
- CPU_MITIGATIONS_AUTO_NOSMT,
};
+#define MITIGATE_SMT BIT(0)
+#define MITIGATE_USER_KERNEL BIT(1)
+#define MITIGATE_USER_USER BIT(2)
+#define MITIGATE_GUEST_HOST BIT(3)
+
static enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
+static unsigned int cpu_attack_vectors __ro_after_init = ~0;
static int __init mitigations_parse_cmdline(char *arg)
{
- if (!strcmp(arg, "off"))
+ if (!strcmp(arg, "off")) {
cpu_mitigations = CPU_MITIGATIONS_OFF;
- else if (!strcmp(arg, "auto"))
+ } else if (strstr(arg, "auto")) {
cpu_mitigations = CPU_MITIGATIONS_AUTO;
- else if (!strcmp(arg, "auto,nosmt"))
- cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
- else
+ cpu_attack_vectors = parse_cpu_attack_vectors(arg);
+ } else {
pr_crit("Unsupported mitigations=%s, system may still be vulnerable\n",
arg);
+ }
return 0;
}
@@ -3223,7 +3228,7 @@ EXPORT_SYMBOL_GPL(cpu_mitigations_off);
/* mitigations=auto,nosmt */
bool cpu_mitigations_auto_nosmt(void)
{
- return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
+ return (cpu_mitigations == CPU_MITIGATIONS_AUTO) && (cpu_attack_vectors & MITIGATE_SMT);
}
EXPORT_SYMBOL_GPL(cpu_mitigations_auto_nosmt);
#else
^ permalink raw reply related	[flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 3:50 ` Pawan Gupta
@ 2025-02-27 14:08 ` Borislav Petkov
2025-02-27 14:36 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-02-27 14:08 UTC (permalink / raw)
To: Pawan Gupta
Cc: Josh Poimboeuf, Kaplan, David, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Wed, Feb 26, 2025 at 07:50:38PM -0800, Pawan Gupta wrote:
> That's fair. I certainly don't want to be adding a new option if we are
> willing to live with some minor quirks with auto,nosmt.
>
> Like, should the order in which nosmt appears after =auto matter? IOW,
> "=auto,no_foo,nosmt" would be equivalent to "=auto,nosmt,no_foo"? I believe
> they should be treated the same.
Yes, they should be. The order within a single mitigations=<string> shouldn't
matter.
> So as to treat nosmt as any other attack vector, CPU_MITIGATIONS_AUTO_NOSMT
> should go away. I am thinking we can modify cpu_mitigations_auto_nosmt() to
> check for smt attack vector:
Looks like we're calling it an attack vector if I look at the cross-thread
section in the documentation patch:
https://lore.kernel.org/r/20250108202515.385902-20-david.kaplan@amd.com
So I guess the cmdline format should be something like:
mitigations=<global_vector_policy>;<list_of_vectors>
More concretely:
mitigations=(on|off|auto);((no)_<vector>)?
Btw, it probably would be better to split the global policy and the vectors
with ';' instead of ',' for an additional clarity and ease of parsing.
Before this goes out of hand with bikeshedding: please think about what
configurations we want to support and why and then design the command line
syntax - not the other way around.
I'm still not fully sold on the negative vector options. Although it sure does
save typing.
With my user hat on: If I have to do "no_user_kernel" then I probably need to
go look what else is there. Do I want it, need it? Dunno. Maybe.
If I do
mitigations=;no_user_kernel
then yeah, that would basically set everything else to "auto" and disable
user_kernel.
David still wants to warn if there's no global option supplied like "auto" but
we can simply assume it is meant "auto" but warn. This is the most intuitive
thing to do IMO.
And when it comes to warning about nonsensical options - yap, we should do so
when parsing.
A couple more random examples as food for bikeshedding:
mitigations=auto;nosmt,user_user - running untrusted user stuff, prevent user
apps from attacking each other, kernel protections are default
mitigations=off;guest_host - running untrusted VMs, protect host from them
mitigations=off;guest_host,guest_guest,cross_thread - cloud provider settings
Same thing with negative options should probably be
mitigations=;no_user_kernel,no_user_user
Hmm, I dunno: being able to specify the same thing in two different ways is
calling for trouble. I think we should keep it simple and do positive options
first and then consider negative if really really needed.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 14:08 ` Borislav Petkov
@ 2025-02-27 14:36 ` Kaplan, David
2025-02-27 15:01 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-27 14:36 UTC (permalink / raw)
To: Borislav Petkov, Pawan Gupta
Cc: Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra, Ingo Molnar,
Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Thursday, February 27, 2025 8:09 AM
> To: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Cc: Josh Poimboeuf <jpoimboe@kernel.org>; Kaplan, David
> <David.Kaplan@amd.com>; Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra
> <peterz@infradead.org>; Ingo Molnar <mingo@redhat.com>; Dave Hansen
> <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, Feb 26, 2025 at 07:50:38PM -0800, Pawan Gupta wrote:
> > Thats fair. I certainly don't want to be adding new option if we are
> > willing to live with some minor quirks with auto,nosmt.
> >
> > Like, should the order in which nosmt appears after =auto matter? IOW,
> > "=auto,no_foo,nosmt" would be equivalent to "=auto,nosmt,no_foo"? I
> > believe they should be treated the same.
>
> Yes, they should be. The order within a single mitigations=<string> shouldn't matter.
>
> > So as to treat nosmt as any other attack vector,
> > CPU_MITIGATIONS_AUTO_NOSMT should go away. I am thinking we can
> modify
> > cpu_mitigations_auto_nosmt() to check for smt attack vector:
>
> Looks like we're calling it an attack vector if I look at the cross-thread section in the
> documentation patch:
>
> https://lore.kernel.org/r/20250108202515.385902-20-david.kaplan@amd.com
>
> So I guess the cmdline format should be something like:
>
> mitigations=<global_vector_policy>;<list_of_vectors>
>
> More concretely:
>
> mitigations=(on|off|auto);((no)_<vector>)?
>
> Btw, it probably would be better to split the global policy and the vectors with ';'
> instead of ',' for an additional clarity and ease of parsing.
>
> Before this goes out of hand with bikeshedding: please think about what
> configurations we want to support and why and then design the command line
> syntax - not the other way around.
>
> I'm still not fully sold on the negative vector options. Although it sure does save
> typing.
>
> With my user hat on: If I have to do "no_user_kernel" then I probably need to go
> look what else is there. Do I want it, need it? Dunno. Maybe.
>
> If I do
>
> mitigations=;no_user_kernel
>
> then yeah, that would basically set everything else to "auto" and disable
> user_kernel.
>
> David still wants to warn if there's no global option supplied like "auto" but we can
> simply assume it is meant "auto" but warn. This is the most intuitive thing to do IMO.
>
> And when it comes to warning about nonsensical options - yap, we should do so
> when parsing.
>
> A couple more random examples as food for bikeshedding:
>
> mitigation=auto;nosmt,user_user - running untrusted user stuff, prevent user apps
> from attacking each-other, kernel protections are default
>
> mitigations=off;guest_host - running untrusted VMs, protect host from them
>
> mitigations=off;guest_host,guest_guest,cross_thread - cloud provider settings
>
> Same thing with negative options should probably be
>
> mitigations=;no_user_kernel,no_user_user
>
> Hmm, I dunno: being able to specify the same thing in two different ways is calling
> for trouble. I think we should keep it simple and do positive options first and then
> consider negative if really really needed.
>
My 2 cents is I think the negative option form is better. That's because I'd rather err on the side of safety if the user forgets something.
For instance, in the case of 'mitigations=off;guest_host' there would be no guest->guest protection. Did the user really intend for that? Or did they simply forget to think about that attack vector? In this case, their error leaves the system potentially insecure.
But if we only support the opt-out form, like 'mitigations=auto;no_guest_host' and the user forgot about guest->guest, it would leave those protections enabled. Potentially reducing performance more than intended, but the system is more secure.
Because the existing kernel defaults things to on (the auto setting) and requires action to disable mitigations, why not keep the same logic here and only support the opt-out form?
Some specific use case examples might be:
'mitigations=auto;no_guest_guest,no_guest_host' -- Running trusted VMs
'mitigations=auto;no_user_kernel,no_user_user' -- Running untrusted VMs but trusted userspace (cloud provider setting)
'mitigations=auto;no_cross_thread' -- Using core scheduling
On the SMT piece, I think the proposal is:
'auto;<attack vectors>' -- Default SMT protections (enable cheap ones like STIBP, but never disable SMT)
'auto,nosmt;<attack vectors>' -- Full SMT protections, including disabling SMT if required
'auto;no_cross_thread,<attack vectors>' -- No SMT protections
Is that right?
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 14:36 ` Kaplan, David
@ 2025-02-27 15:01 ` Borislav Petkov
2025-02-27 15:22 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-02-27 15:01 UTC (permalink / raw)
To: Kaplan, David
Cc: Pawan Gupta, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Thu, Feb 27, 2025 at 02:36:37PM +0000, Kaplan, David wrote:
> My 2 cents is I think the negative option form is better. That's because
> I'd rather err on the side of safety if the user forgets something.
>
> For instance, in the case of 'mitigations=off;guest_host' there would be no
> guest->guest protection. Did the user really intend for that? Or did they
> simply forget to think about that attack vector? In this case, their error
> leaves the system potentially insecure.
Well, good question. It could be that or it could be that the admin only cares
about protecting the host from malicious VMs but not the VMs amongst each
other. Does this use case make sense?
Probably. Maybe.
So if the admin really wants to do that, then she'll have to say:
mitigations=off;guest_host,no_guest_guest
I guess that can be specified with this cmdline.
I guess if she would want to enable both guest_host and guest_guest, then the
cmdline should be
mitigations=auto;no_user_kernel,no_user_user
or the shorter version
mitigations=;no_user_kernel,no_user_user
Hmmm, something still feels weird... I still can't go "oh yeah, this is a good
form." ;-\
> But if we only support the opt-out form, like
> 'mitigations=auto;no_guest_host' and the user forgot about guest->guest, it
> would leave those protections enabled. Potentially reducing performance
> more than intended, but the system is more secure.
Still don't know for sure what the admin wanted: more perf or more security?
:-P
> Because the existing kernel defaults things to on (the auto setting) and
> requires action to disable mitigations, why not keep the same logic here and
> only support the opt-out form?
>
> Some specific use case examples might be:
> 'mitigations=auto;no_guest_guest,no_guest_host' -- Running trusted VMs
> 'mitigations=auto;no_user_kernel,no_user_user' -- Running untrusted VMs but trusted userspace (cloud provider setting)
> 'mitigations=auto;no_cross_thread' -- Using core scheduling
I guess those make sense if you write them this way.
With the opt-out-only strategy, enabling a single vector would require you to
specify all others as no_*.
mitigations=auto,no_user_kernel,no_guest_host,no_guest_guest,no_cross_thread
That'll give you user_user.
Yeah, I guess we can't have the cake and eat it too. :-\
Which reminds me: on boot we should printk which attack vector got enabled and
which got disabled.
And then have that same info in
/sys/devices/system/cpu/vulnerabilities/attack_vectors
or so.
So that we can verify what got configured.
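The proposed reporting could look like the following user-space sketch, which builds the text a sysfs file such as /sys/devices/system/cpu/vulnerabilities/attack_vectors might show. The file and the format are entirely hypothetical at this point; in the kernel the same information would also go out via printk at boot:

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Format one "name: enabled/disabled" line per attack vector into buf,
 * the way a hypothetical attack_vectors sysfs show() routine might.
 * Returns the number of bytes written (assuming buf is large enough).
 */
static int format_vectors(char *buf, size_t len, const char *const names[],
			  const bool enabled[], int n)
{
	int i, pos = 0;

	for (i = 0; i < n && (size_t)pos < len; i++)
		pos += snprintf(buf + pos, len - pos, "%s: %s\n",
				names[i], enabled[i] ? "enabled" : "disabled");
	return pos;
}
```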
> On the SMT piece, I think the proposal is:
> 'auto;<attack vectors>' -- Default SMT protections (enable cheap ones like STIBP, but never disable SMT)
> 'auto,nosmt;<attack vectors>' -- Full SMT protections, including disabling SMT if required
Well, that's the question: cross-thread or nosmt is yet another attack vector.
So if we define the format as I mentioned above, this should be
auto;<attack_vectors>,nosmt or "cross_thread".
Right?
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 15:01 ` Borislav Petkov
@ 2025-02-27 15:22 ` Kaplan, David
2025-02-27 15:37 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-27 15:22 UTC (permalink / raw)
To: Borislav Petkov
Cc: Pawan Gupta, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Thursday, February 27, 2025 9:02 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>; Josh Poimboeuf
> <jpoimboe@kernel.org>; Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra
> <peterz@infradead.org>; Ingo Molnar <mingo@redhat.com>; Dave Hansen
> <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Thu, Feb 27, 2025 at 02:36:37PM +0000, Kaplan, David wrote:
>
> > My 2 cents is I think the negative option form is better. That's
> > because I'd rather err on the side of safety if the user forgets something.
> >
> > For instance, in the case of 'mitigations=off;guest_host' there would
> > be no
> > guest->guest protection. Did the user really intend for that? Or did
> > guest->they
> > simply forget to think about that attack vector? In this case, their
> > error leaves the system potentially insecure.
>
> Well, good question. It could be that or it could be that the admin only cares about
> protecting the host from malicious VMs but not the VMs amongst each other. Does
> this use case make sense?
In this case, I think it is clearer to say
mitigations=auto;no_guest_guest
That way, the admin is explicitly saying they don't want certain protection. This seems much harder to mess up.
>
> > But if we only support the opt-out form, like
> > 'mitigations=auto;no_guest_host' and the user forgot about
> > guest->guest, it would leave those protections enabled. Potentially
> > reducing performance more than intended, but the system is more secure.
>
> Still don't know for sure what the admin wanted: more perf or more security?
>
My argument is it's probably better to err on the side of security.
>
> > Because the existing kernel defaults things to on (the auto setting)
> > and requires action to disable mitigations, why not keep the same
> > logic here and only support the opt-out form?
> >
> > Some specific use case examples might be:
> > 'mitigations=auto;no_guest_guest,no_guest_host' -- Running trusted VMs
> > 'mitigations=auto;no_user_kernel,no_user_user' -- Running untrusted
> > VMs but trusted userspace (cloud provider setting)
> > 'mitigations=auto;no_cross_thread' -- Using core scheduling
>
> I guess those make sense if you write them this way.
>
> With the opt-out-only strategy, enabling a single vector would require you to specify
> all others as no_*.
>
> mitigations=auto,no_user_kernel,no_guest_host,no_guest_guest,no_cross_thread
>
> That'll give you user_user.
>
> Yeah, I guess we can't have the cake and eat it too. :-\
To me this seems like an unlikely use case, so maybe it's ok to be a bit more verbose.
And of course, we can add more options later...we just can't remove anything we add now.
>
> Which reminds me: on boot we should printk which attack vector got enabled and
> which got disabled.
>
> And then have that same info in
>
> /sys/devices/system/cpu/vulnerabilities/attack_vectors
>
> or so.
Ok, I can add that to the series.
>
> So that we can verify what got configured.
>
> > On the SMT piece, I think the proposal is:
> > 'auto;<attack vectors>' -- Default SMT protections (enable cheap ones
> > like STIBP, but never disable SMT) 'auto,nosmt;<attack vectors>' --
> > Full SMT protections, including disabling SMT if required
>
> Well, that's the question: cross-thread or nosmt is yet another attack vector.
> So if we define the format as I mentioned above, this should be
>
> auto;<attack_vectors>,nosmt or "cross_thread".
>
> Right?
But there's already an 'auto,nosmt' option. So I thought we wanted to leave that alone and use it as the base.
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 15:22 ` Kaplan, David
@ 2025-02-27 15:37 ` Borislav Petkov
2025-02-27 16:05 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-02-27 15:37 UTC (permalink / raw)
To: Kaplan, David
Cc: Pawan Gupta, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Thu, Feb 27, 2025 at 03:22:08PM +0000, Kaplan, David wrote:
> In this case, I think it is clearer to say
> mitigations=auto;no_guest_guest
>
> That way, the admin is explicitly saying they don't want certain protection.
> This seems much harder to mess up.
So if we want to protect *only* against malicious VMs, the cmdline should be
mitigations:off;no_guest_guest
off being the policy to disable the other vectors because admin wants to have
her performance back.
Right?
Which then makes this one:
mitigations=off;guest_host
equivalent.
Uff.
> My argument is it's probably better to err on the side of security.
Probably. As you can realize, I'm playing the devil's advocate in all this
so that we can see how we feel about it.
> To me this seems like an unlikely use case, so maybe it's ok to be a bit more verbose.
Right, that use case is for benchmarkers. :)
> Ok, I can add that to the series.
Thx.
> But there's already an 'auto,nosmt' option. So I thought we wanted to leave
> that alone and use it as the base.
There's that. And "nosmt" is actually the cross-thread attack vector.
I guess what we should do here is to leave "auto,nosmt" alone and use
"cross_thread" for the attack vector and not allow "nosmt" in the new
mitigations specification scheme.
IOW, the set of the attack vectors will be:
list_of_vectors = {user_kernel, user_user, guest_host, guest_guest,
cross_thread }
Or the no_ versions of them respectively.
Hmmm.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 15:37 ` Borislav Petkov
@ 2025-02-27 16:05 ` Kaplan, David
2025-02-27 17:07 ` Borislav Petkov
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-27 16:05 UTC (permalink / raw)
To: Borislav Petkov
Cc: Pawan Gupta, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Borislav Petkov <bp@alien8.de>
> Sent: Thursday, February 27, 2025 9:37 AM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>; Josh Poimboeuf
> <jpoimboe@kernel.org>; Thomas Gleixner <tglx@linutronix.de>; Peter Zijlstra
> <peterz@infradead.org>; Ingo Molnar <mingo@redhat.com>; Dave Hansen
> <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
>
>
>
> On Thu, Feb 27, 2025 at 03:22:08PM +0000, Kaplan, David wrote:
> > In this case, I think it is clearer to say
> > mitigations=auto;no_guest_guest
> >
> > That way, the admin is explicitly saying they don't want certain protection.
> > This seems much harder to mess up.
>
> So if we want to protect *only* against malicious VMs, the cmdline should be
>
> mitigations:off;no_guest_guest
>
> off being the policy to disable the other vectors because admin wants to have her
> performance back.
>
> Right?
No. It should be 'mitigations=auto;no_user_kernel,no_user_user'
(And maybe add 'no_guest_guest' if they don’t care about the malicious VMs attacking each other)
>
> Which then makes this one:
>
> mitigations=off;guest_host
>
> equivalent.
>
> Uff.
Right, the question is whether we support both the opt-in and opt-out forms. We can. We could also start by supporting only the opt-out form.
>
> > But there's already an 'auto,nosmt' option. So I thought we wanted to
> > leave that alone and use it as the base.
>
> There's that. And "nosmt" is actually the cross-thread attack vector.
>
> I guess what we should do here is to leave "auto,nosmt" alone and use
> "cross_thread" for the attack vector and not allow "nosmt" in the new mitigations
> specification scheme.
>
> IOW, the set of the attack vectors will be:
>
> list_of_vectors = {user_kernel, user_user, guest_host, guest_guest, cross_thread }
>
> Or the no_ versions of them respectively.
>
> Hmmm.
As mentioned earlier in the thread, SMT really needs a tristate of:
1. All SMT mitigations including potentially disabling SMT
2. All SMT mitigations but excluding the possibility of disabling SMT (current default)
3. No SMT mitigations (not even things like STIBP)
There are various ways to encode that in the command line options. 'auto,nosmt' is already #1. And just 'auto' is currently #2.
We could then add 'no_cross_thread' to support #3. I think that was the latest proposal.
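The tristate above can be sketched as a small selection function. The enum and the mapping are hypothetical (this is not the kernel's representation); it only captures the proposal that 'no_cross_thread' overrides everything, 'auto,nosmt' selects full protection, and plain 'auto' keeps the current cheap-mitigations default:

```c
#include <stdbool.h>

/*
 * Hypothetical SMT mitigation tristate, per the proposal:
 *   'auto,nosmt'                  -> FULL  (#1: may disable SMT)
 *   'auto'                        -> CHEAP (#2: e.g. STIBP, current default)
 *   'auto;...,no_cross_thread'    -> NONE  (#3: not even STIBP)
 */
enum smt_mitigation {
	SMT_MITIGATE_FULL,	/* #1 */
	SMT_MITIGATE_CHEAP,	/* #2 */
	SMT_MITIGATE_NONE,	/* #3 */
};

static enum smt_mitigation smt_policy(bool nosmt, bool no_cross_thread)
{
	if (no_cross_thread)
		return SMT_MITIGATE_NONE;	/* explicit opt-out wins */
	return nosmt ? SMT_MITIGATE_FULL : SMT_MITIGATE_CHEAP;
}
```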
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread* Re: [PATCH v3 20/35] x86/bugs: Define attack vectors
2025-02-27 16:05 ` Kaplan, David
@ 2025-02-27 17:07 ` Borislav Petkov
0 siblings, 0 replies; 138+ messages in thread
From: Borislav Petkov @ 2025-02-27 17:07 UTC (permalink / raw)
To: Kaplan, David
Cc: Pawan Gupta, Josh Poimboeuf, Thomas Gleixner, Peter Zijlstra,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Thu, Feb 27, 2025 at 04:05:08PM +0000, Kaplan, David wrote:
> No. It should be 'mitigations=auto;no_user_kernel,no_user_user'
>
> (And maybe add 'no_guest_guest' if they don’t care about the malicious VMs attacking each other)
Doh, ofc, I meant that. :-P
mitigations=off;no_guest_guest
is a nonsensical config: "disable all, and then disable guest_guest additionally". Doh.
> Right, the question is do we support both opt-in and opt-out forms. We can.
> We could also start by only supporting opt-out form.
We probably should put this to a vote. I think supporting both will cause
a lot of confusion but starting with one set and then maybe adding the other
one later, if really needed, is what we could start with.
> As mentioned earlier in the thread, SMT really needs a tristate of:
> 1. All SMT mitigations including potentially disabling SMT
> 2. All SMT mitigations but excluding the possibility of disabling SMT (current default)
> 3. No SMT mitigations (not even things like STIBP)
>
> There are various ways to encode that in the command line options. 'auto,nosmt' is already #1. And just 'auto' is currently #2.
>
> We could then add 'no_cross_thread' to support #3. I think that was the latest proposal.
No objections here.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (19 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 20/35] x86/bugs: Define attack vectors David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-09 3:43 ` Pawan Gupta
2025-02-11 18:41 ` Josh Poimboeuf
2025-01-08 20:25 ` [PATCH v3 22/35] x86/bugs: Add attack vector controls for mds David Kaplan
` (14 subsequent siblings)
35 siblings, 2 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
The function should_mitigate_vuln() defines which vulnerabilities should
be mitigated based on the selected attack vector controls. The
selections here are based on the individual characteristics of each
vulnerability.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 69 ++++++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 88eba8e4c7fb..175dbbf9b06e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -347,6 +347,75 @@ static void x86_amd_ssb_disable(void)
wrmsrl(MSR_AMD64_LS_CFG, msrval);
}
+/*
+ * Returns true if vulnerability should be mitigated based on the
+ * selected attack vector controls
+ *
+ * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
+ */
+static bool __init should_mitigate_vuln(unsigned int bug)
+{
+ switch (bug) {
+ /*
+ * The only spectre_v1 mitigations in the kernel are related to
+ * SWAPGS protection on kernel entry. Therefore, protection is
+ * only required for the user->kernel attack vector.
+ */
+ case X86_BUG_SPECTRE_V1:
+ return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL);
+
+ /*
+ * Both spectre_v2 and srso may allow user->kernel or
+ * guest->host attacks through branch predictor manipulation.
+ */
+ case X86_BUG_SPECTRE_V2:
+ case X86_BUG_SRSO:
+ return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
+
+ /*
+ * spectre_v2_user refers to user->user or guest->guest branch
+ * predictor attacks only. Other indirect branch predictor attacks
+ * are covered by the spectre_v2 vulnerability.
+ */
+ case X86_BUG_SPECTRE_V2_USER:
+ return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
+
+ /* L1TF is only possible as a guest->host attack */
+ case X86_BUG_L1TF:
+ return cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
+
+ /*
+ * All the vulnerabilities below allow potentially leaking data
+ * across address spaces. Therefore, mitigation is required for
+ * any of these 4 attack vectors.
+ */
+ case X86_BUG_MDS:
+ case X86_BUG_TAA:
+ case X86_BUG_MMIO_STALE_DATA:
+ case X86_BUG_RFDS:
+ case X86_BUG_SRBDS:
+ return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
+ /*
+ * GDS can potentially leak data across address spaces and
+ * threads. Mitigation is required under all attack vectors.
+ */
+ case X86_BUG_GDS:
+ return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST) ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD);
+ default:
+ return false;
+ }
+}
+
+
/* Default mitigation for MDS-affected CPUs */
static enum mds_mitigations mds_mitigation __ro_after_init =
IS_ENABLED(CONFIG_MITIGATION_MDS) ? MDS_MITIGATION_AUTO : MDS_MITIGATION_OFF;
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-01-08 20:25 ` [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
@ 2025-01-09 3:43 ` Pawan Gupta
2025-01-09 15:08 ` Kaplan, David
2025-02-11 18:41 ` Josh Poimboeuf
1 sibling, 1 reply; 138+ messages in thread
From: Pawan Gupta @ 2025-01-09 3:43 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:25:01PM -0600, David Kaplan wrote:
> The function should_mitigate_vuln() defines which vulnerabilities should
> be mitigated based on the selected attack vector controls. The
> selections here are based on the individual characteristics of each
> vulnerability.
>
> Signed-off-by: David Kaplan <david.kaplan@amd.com>
> ---
> arch/x86/kernel/cpu/bugs.c | 69 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 69 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 88eba8e4c7fb..175dbbf9b06e 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -347,6 +347,75 @@ static void x86_amd_ssb_disable(void)
> wrmsrl(MSR_AMD64_LS_CFG, msrval);
> }
>
> +/*
> + * Returns true if vulnerability should be mitigated based on the
> + * selected attack vector controls
> + *
> + * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
> + */
> +static bool __init should_mitigate_vuln(unsigned int bug)
> +{
> + switch (bug) {
> + /*
> + * The only spectre_v1 mitigations in the kernel are related to
> + * SWAPGS protection on kernel entry. Therefore, protection is
> + * only required for the user->kernel attack vector.
> + */
> + case X86_BUG_SPECTRE_V1:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL);
> +
> + /*
> + * Both spectre_v2 and srso may allow user->kernel or
> + * guest->host attacks through branch predictor manipulation.
> + */
> + case X86_BUG_SPECTRE_V2:
> + case X86_BUG_SRSO:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
> +
> + /*
> + * spectre_v2_user refers to user->user or guest->guest branch
> + * predictor attacks only. Other indirect branch predictor attacks
> + * are covered by the spectre_v2 vulnerability.
> + */
> + case X86_BUG_SPECTRE_V2_USER:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
> +
> + /* L1TF is only possible as a guest->host attack */
> + case X86_BUG_L1TF:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
> +
> + /*
> + * All the vulnerabilities below allow potentially leaking data
> + * across address spaces. Therefore, mitigation is required for
> + * any of these 4 attack vectors.
> + */
> + case X86_BUG_MDS:
> + case X86_BUG_TAA:
> + case X86_BUG_MMIO_STALE_DATA:
> + case X86_BUG_RFDS:
> + case X86_BUG_SRBDS:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
> + /*
> + * GDS can potentially leak data across address spaces and
> + * threads. Mitigation is required under all attack vectors.
> + */
> + case X86_BUG_GDS:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD);
> + default:
> + return false;
It is missing the case X86_BUG_RETBLEED. should_mitigate_vuln() will always
return false for retbleed.
I am wondering if this function should return true in the default case, so
that if someone forgets to add a case for a new bug in the future, it will
still be mitigated.
^ permalink raw reply [flat|nested] 138+ messages in thread
* RE: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-01-09 3:43 ` Pawan Gupta
@ 2025-01-09 15:08 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-01-09 15:08 UTC (permalink / raw)
To: Pawan Gupta
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
> Sent: Wednesday, January 8, 2025 9:43 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Josh Poimboeuf <jpoimboe@kernel.org>; Ingo
> Molnar <mingo@redhat.com>; Dave Hansen <dave.hansen@linux.intel.com>;
> x86@kernel.org; H . Peter Anvin <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based
> on attack vector controls.
>
>
>
> On Wed, Jan 08, 2025 at 02:25:01PM -0600, David Kaplan wrote:
> > The function should_mitigate_vuln() defines which vulnerabilities
> > should be mitigated based on the selected attack vector controls. The
> > selections here are based on the individual characteristics of each
> > vulnerability.
> >
> > Signed-off-by: David Kaplan <david.kaplan@amd.com>
> > ---
> > arch/x86/kernel/cpu/bugs.c | 69
> > ++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 69 insertions(+)
> >
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index 88eba8e4c7fb..175dbbf9b06e 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -347,6 +347,75 @@ static void x86_amd_ssb_disable(void)
> > wrmsrl(MSR_AMD64_LS_CFG, msrval); }
> >
> > +/*
> > + * Returns true if vulnerability should be mitigated based on the
> > + * selected attack vector controls
> > + *
> > + * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
> > + */
> > +static bool __init should_mitigate_vuln(unsigned int bug) {
> > + switch (bug) {
> > + /*
> > + * The only spectre_v1 mitigations in the kernel are related to
> > + * SWAPGS protection on kernel entry. Therefore, protection is
> > + * only required for the user->kernel attack vector.
> > + */
> > + case X86_BUG_SPECTRE_V1:
> > + return
> > +cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL);
> > +
> > + /*
> > + * Both spectre_v2 and srso may allow user->kernel or
> > + * guest->host attacks through branch predictor manipulation.
> > + */
> > + case X86_BUG_SPECTRE_V2:
> > + case X86_BUG_SRSO:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)
> ||
> > +
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
> > +
> > + /*
> > + * spectre_v2_user refers to user->user or guest->guest branch
> > + * predictor attacks only. Other indirect branch predictor attacks
> > + * are covered by the spectre_v2 vulnerability.
> > + */
> > + case X86_BUG_SPECTRE_V2_USER:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> > +
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
> > +
> > + /* L1TF is only possible as a guest->host attack */
> > + case X86_BUG_L1TF:
> > + return
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
> > +
> > + /*
> > + * All the vulnerabilities below allow potentially leaking data
> > + * across address spaces. Therefore, mitigation is required for
> > + * any of these 4 attack vectors.
> > + */
> > + case X86_BUG_MDS:
> > + case X86_BUG_TAA:
> > + case X86_BUG_MMIO_STALE_DATA:
> > + case X86_BUG_RFDS:
> > + case X86_BUG_SRBDS:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)
> ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
> > + /*
> > + * GDS can potentially leak data across address spaces and
> > + * threads. Mitigation is required under all attack vectors.
> > + */
> > + case X86_BUG_GDS:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)
> ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD);
> > + default:
> > + return false;
>
> It is missing the case X86_BUG_RETBLEED. should_mitigate_vuln() will always
> return false for retbleed.
Good catch! Not sure how I missed that, but will fix. It should be in the same group as spectre_v2/srso.
>
> I am wondering if this function should return true in the default case. So that in future
> if someone misses to add a case for a new bug, it will still be mitigated.
Perhaps a warning would also be appropriate, which would have made this issue easier to spot.
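The suggested fail-safe default can be sketched in user space. The bug identifiers here stand in for the X86_BUG_* values and the fprintf() stands in for a kernel pr_warn()/WARN_ONCE(); none of this is the actual bugs.c code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical bug identifiers, standing in for the X86_BUG_* values. */
enum { BUG_SPECTRE_V1, BUG_RETBLEED, NR_BUGS };

/*
 * Fail-safe variant of should_mitigate_vuln(): a bug with no attack
 * vector mapping is warned about and mitigated anyway, so a forgotten
 * case costs performance rather than security.
 */
static bool should_mitigate_vuln(int bug)
{
	switch (bug) {
	case BUG_SPECTRE_V1:
		return true;	/* stands in for the per-vector checks */
	default:
		fprintf(stderr,
			"bug %d has no attack vector mapping, mitigating by default\n",
			bug);
		return true;	/* err on the side of security */
	}
}
```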
Thanks
--David Kaplan
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-01-08 20:25 ` [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
2025-01-09 3:43 ` Pawan Gupta
@ 2025-02-11 18:41 ` Josh Poimboeuf
2025-02-11 18:54 ` Josh Poimboeuf
2025-02-11 18:55 ` Kaplan, David
1 sibling, 2 replies; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 18:41 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:25:01PM -0600, David Kaplan wrote:
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index 88eba8e4c7fb..175dbbf9b06e 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -347,6 +347,75 @@ static void x86_amd_ssb_disable(void)
> wrmsrl(MSR_AMD64_LS_CFG, msrval);
> }
>
> +/*
> + * Returns true if vulnerability should be mitigated based on the
> + * selected attack vector controls
This needs a period.
> + *
> + * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
> + */
> +static bool __init should_mitigate_vuln(unsigned int bug)
> +{
> + switch (bug) {
> + /*
> + * The only spectre_v1 mitigations in the kernel are related to
> + * SWAPGS protection on kernel entry. Therefore, protection is
> + * only required for the user->kernel attack vector.
> + */
This comment isn't quite correct; there are things like
array_index_nospec() and barrier_nospec() being used, but those aren't
controlled by bugs.c. They should at least be mentioned here.
> + case X86_BUG_SPECTRE_V1:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL);
> +
> + /*
> + * Both spectre_v2 and srso may allow user->kernel or
> + * guest->host attacks through branch predictor manipulation.
> + */
I don't think this comment adds anything, the code already makes this
obvious.
> + case X86_BUG_SPECTRE_V2:
> + case X86_BUG_SRSO:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
This needs aligned:
return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
Also, aren't cross-thread attacks possible here, thus the need for
STIBP? More questions about the cross-thread "vector" below, at the
bottom.
> + /*
> + * spectre_v2_user refers to user->user or guest->guest branch
> + * predictor attacks only. Other indirect branch predictor attacks
> + * are covered by the spectre_v2 vulnerability.
> + */
The code is already self-evident IMO, I don't think the comment adds
anything.
> + case X86_BUG_SPECTRE_V2_USER:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
Another alignment issue.
> +
> + /* L1TF is only possible as a guest->host attack */
That's not quite correct, PTE inversion is also done to protect against
the user->kernel vector.
Also, IIRC the full l1tf mitigation requires disabling SMT, does that
not qualify as CPU_MITIGATE_CROSS_THREAD?
> + case X86_BUG_L1TF:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
> +
> + /*
> + * All the vulnerabilities below allow potentially leaking data
> + * across address spaces. Therefore, mitigation is required for
> + * any of these 4 attack vectors.
> + */
> + case X86_BUG_MDS:
> + case X86_BUG_TAA:
> + case X86_BUG_MMIO_STALE_DATA:
> + case X86_BUG_RFDS:
> + case X86_BUG_SRBDS:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
Some of these also require disabling SMT for their complete mitigations?
> + /*
> + * GDS can potentially leak data across address spaces and
> + * threads. Mitigation is required under all attack vectors.
> + */
> + case X86_BUG_GDS:
> + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST) ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD);
I'm confused by CPU_MITIGATE_CROSS_THREAD here, as the GDS mitigation
doesn't seem to disable SMT?
Am I just completely misunderstanding the meaning of
CPU_MITIGATE_CROSS_THREAD?
I assumed it's not a vector per se, but rather it means to force nosmt
if one of the other enabled mitigations requires doing so for its "full"
mitigation. But the implementation doesn't seem to match that.
On the other hand if it really is considered to be its own vector, that
doesn't make sense either, as "cross-thread attack" is really a subset
of each of the other vectors. For example, a user->kernel attack can
often be done either via syscall/irq or via cross-thread.
So I'm really confused. Am I missing something?
--
Josh
* Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-02-11 18:41 ` Josh Poimboeuf
@ 2025-02-11 18:54 ` Josh Poimboeuf
2025-02-11 19:04 ` Kaplan, David
2025-02-11 18:55 ` Kaplan, David
1 sibling, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 18:54 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Tue, Feb 11, 2025 at 10:41:33AM -0800, Josh Poimboeuf wrote:
> I'm confused by CPU_MITIGATE_CROSS_THREAD here, as the GDS mitigation
> doesn't seem to disable SMT?
>
> Am I just completely misunderstanding the meaning of
> CPU_MITIGATE_CROSS_THREAD?
>
> I assumed it's not a vector per se, but rather it means to force nosmt
> if one of the other enabled mitigations requires doing so for its "full"
> mitigation. But the implementation doesn't seem to match that.
>
> On the other hand if it really is considered to be its own vector, that
> doesn't make sense either, as "cross-thread attack" is really a subset
> of each of the other vectors. For example, a user->kernel attack can
> often be done either via syscall/irq or via cross-thread.
>
> So I'm really confused. Am I missing something?
So I looked at the next patch and now I see what I was missing: the
individual mitigations are checking
cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD) before deciding
whether to disable SMT. So the implementation mostly makes sense now.
should_mitigate_vuln() should have a comment at the top explaining that
it doesn't check CPU_MITIGATE_CROSS_THREAD (since it's not actually a
standalone vector but rather dependent on the others) and that each
individual mitigation should check CPU_MITIGATE_CROSS_THREAD when
deciding whether to disable SMT.
Also, checking CPU_MITIGATE_CROSS_THREAD for GDS doesn't make sense
because as I mentioned above, "cross-thread" is really a subset of the
other vectors. If the user isn't concerned about any of the other
attack vectors, mitigate_cross_thread=on should just be ignored.
I'm also thinking that "mitigate_cross_thread" isn't quite the right
name for it, as it really only relates to disabling SMT rather than
other cross-thread mitigations like STIBP.
So "mitigate_disable_smt" or "mitigate_nosmt"?
--
Josh
* RE: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-02-11 18:54 ` Josh Poimboeuf
@ 2025-02-11 19:04 ` Kaplan, David
2025-02-11 20:34 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 19:04 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 12:54 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based
> on attack vector controls.
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Tue, Feb 11, 2025 at 10:41:33AM -0800, Josh Poimboeuf wrote:
> > I'm confused by CPU_MITIGATE_CROSS_THREAD here, as the GDS
> mitigation
> > doesn't seem to disable SMT?
> >
> > Am I just completely misunderstanding the meaning of
> > CPU_MITIGATE_CROSS_THREAD?
> >
> > I assumed it's not a vector per se, but rather it means to force nosmt
> > if one of the other enabled mitigations requires doing so for its "full"
> > mitigation. But the implementation doesn't seem to match that.
> >
> > On the other hand if it really is considered to be its own vector,
> > that doesn't make sense either, as "cross-thread attack" is really a
> > subset of each of the other vectors. For example, a user->kernel
> > attack can often be done either via syscall/irq or via cross-thread.
> >
> > So I'm really confused. Am I missing something?
>
> So I looked at the next patch and now I see what I was missing: the individual
> mitigations are checking
> cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD) before deciding
> whether to disable SMT. So the implementation mostly makes sense now.
>
> should_mitigate_vuln() should have a comment at the top explaining that it doesn't
> check CPU_MITIGATE_CROSS_THREAD (since it's not actually a standalone
> vector but rather dependent on the others) and that each individual mitigation should
> check CPU_MITIGATE_CROSS_THREAD when deciding whether to disable SMT.
>
> Also, checking CPU_MITIGATE_CROSS_THREAD for GDS doesn't make sense
> because as I mentioned above, "cross-thread" is really a subset of the other
> vectors. If the user isn't concerned about any of the other attack vectors,
> mitigate_cross_thread=on should just be ignored.
>
> I'm also thinking that "mitigate_cross_thread" isn't quite the right name for it, as it
> really only relates to disabling SMT rather than other cross-thread mitigations like
> STIBP.
>
> So "mitigate_disable_smt" or "mitigate_nosmt"?
I'm glad the next patch helped; I can certainly add a comment to clarify. As you noted, should_mitigate_vuln() is not the only function that determines mitigations.
To explain my thinking a bit more, mitigate_cross_thread is intended to enable cross-thread mitigations for any vulnerabilities the hardware may have. That does not necessarily require disabling SMT. The required cross-thread mitigation is defined by each vulnerability.
For many vulnerabilities (like MDS), mitigation requires disabling SMT. mds_apply_mitigation() queries the status of the cross-thread attack vector and will disable SMT if needed.
For GDS, mitigating cross-thread attacks does not require disabling SMT, just enabling the mitigation in the MSR.
To be fair, it doesn't make much sense to disable all the attack vectors except mitigate_cross_thread, but for correctness it seemed like enabling the mitigation in this case was the right thing.
I don't really want to tie mitigate_cross_thread to SMT disable because of cases like this where there is a cross-thread attack mitigation that is different from disabling SMT. You could also imagine bugs that might be even more limited, where perhaps they're only relevant for say user->kernel but also have a cross-thread component.
STIBP is another case where there is a cross-thread mitigation that is not disabling SMT (but this one is harder to deal with given the historical precedent as noted in the other email). But the point is that there have been cases where cross-thread mitigation != disable SMT.
--David Kaplan
* Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-02-11 19:04 ` Kaplan, David
@ 2025-02-11 20:34 ` Josh Poimboeuf
2025-02-11 20:53 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 20:34 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Feb 11, 2025 at 07:04:44PM +0000, Kaplan, David wrote:
> To explain my thinking a bit more, mitigate_cross_thread is intended
> to enable cross-thread mitigations for any vulnerabilities the
> hardware may have. That does not necessarily require disabling SMT.
> The required cross-thread mitigation is defined by each vulnerability.
>
> For many vulnerabilities (like MDS), mitigation requires disabling
> SMT. mds_apply_mitigation() queries the status of the cross-thread
> attack vector and will disable SMT if needed.
>
> For GDS, mitigating cross-thread attacks does not require disabling
> SMT, just enabling the mitigation in the MSR.
>
> To be fair, it doesn't make much sense to disable all the attack
> vectors except mitigate_cross_thread, but for correctness it seemed
> like enabling the mitigation in this case was the right thing.
>
> I don't really want to tie mitigate_cross_thread to SMT disable
> because of cases like this where there is a cross-thread attack
> mitigation that is different from disabling SMT. You could also
> imagine bugs that might be even more limited, where perhaps they're
> only relevant for say user->kernel but also have a cross-thread
> component.
But that "cross-thread" thing doesn't even make sense as a vector.
Think about it this way. For cross-thread attacks:
- CPU thread A is the attacker. It's running in either user or guest.
- CPU thread B is the victim. It's running in either kernel, user, or
host.
So ALL cross-thread attacks have to include one of the following:
- user->kernel
- user->user
- guest->host
- guest->guest
So by definition, a cross-thread attack must also involve at least one
of those four main vectors.
So cross-thread can't be a standalone vector. Rather, it's a dependent
vector or "sub-vector".
If a user wants to be protected from user->user, of course that includes
wanting to be protected from *cross-thread* user->user.
And if they *don't* care about user->user, why would they care about
*cross-thread* user->user?
What users *really* care about (and why there exists such a distinction
in the first place) is the functional/performance impact of disabling
SMT.
So a flag to allow the vectors to disable SMT makes more sense, e.g.,
mitigate_disable_smt=on
And maybe also an additional flag which says "I've enabled core
scheduling or some other isolation scheme, don't worry about any of the
SMT-specific mitigations like STIBP":
mitigate_smt_safe=on
But the standalone "cross-thread" vector doesn't fit at all.
--
Josh
* RE: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-02-11 20:34 ` Josh Poimboeuf
@ 2025-02-11 20:53 ` Kaplan, David
2025-02-11 22:38 ` Josh Poimboeuf
0 siblings, 1 reply; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 20:53 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 2:35 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based
> on attack vector controls.
>
>
> On Tue, Feb 11, 2025 at 07:04:44PM +0000, Kaplan, David wrote:
> > To explain my thinking a bit more, mitigate_cross_thread is intended
> > to enable cross-thread mitigations for any vulnerabilities the
> > hardware may have. That does not necessarily require disabling SMT.
> > The required cross-thread mitigation is defined by each vulnerability.
> >
> > For many vulnerabilities (like MDS), mitigation requires disabling
> > SMT. mds_apply_mitigation() queries the status of the cross-thread
> > attack vector and will disable SMT if needed.
> >
> > For GDS, mitigating cross-thread attacks does not require disabling
> > SMT, just enabling the mitigation in the MSR.
> >
> > To be fair, it doesn't make much sense to disable all the attack
> > vectors except mitigate_cross_thread, but for correctness it seemed
> > like enabling the mitigation in this case was the right thing.
> >
> > I don't really want to tie mitigate_cross_thread to SMT disable
> > because of cases like this where there is a cross-thread attack
> > mitigation that is different from disabling SMT. You could also
> > imagine bugs that might be even more limited, where perhaps they're
> > only relevant for say user->kernel but also have a cross-thread
> > component.
>
> But that "cross-thread" thing doesn't even make sense as a vector.
>
> Think about it this way. For cross-thread attacks:
>
> - CPU thread A is the attacker. It's running in either user or guest.
>
> - CPU thread B is the victim. It's running in either kernel, user, or
> host.
>
> So ALL cross-thread attacks have to include one of the following:
>
> - user->kernel
> - user->user
> - guest->host
> - guest->guest
>
> So by definition, a cross-thread attack must also involve at least one of those four
> main vectors.
>
> So cross-thread can't be a standalone vector. Rather, it's a dependent vector or
> "sub-vector".
>
> If a user wants to be protected from user->user, of course that includes wanting to
> be protected from *cross-thread* user->user.
>
> And if they *don't* care about user->user, why would they care about
> *cross-thread* user->user?
>
> What users *really* care about (and why there exists such a distinction in the first
> place) is the functional/performance impact of disabling SMT.
>
> So a flag to allow the vectors to disable SMT makes more sense, e.g.,
>
> mitigate_disable_smt=on
>
> And maybe also an additional flag which says "I've enabled core scheduling or
> some other isolation scheme, don't worry about any of the SMT-specific mitigations
> like STIBP":
>
> mitigate_smt_safe=on
>
> But the standalone "cross-thread" vector doesn't fit at all.
>
It's a valid argument; I definitely agree that cross-thread is a subset of the other vectors.
If I understand your proposal correctly, 'mitigate_disable_smt' means that the kernel may disable SMT if a vulnerability being mitigated requires it (yes?). I wonder if that should be 'mitigate_smt' with a 3-way selection of:
'on' (disable SMT if needed based on vulnerabilities)
'auto' (do not disable SMT but apply other existing SMT-based mitigations on relevant vulnerabilities)
'off' (do not apply any SMT related mitigations like STIBP)
And this would not be used when selecting whether to mitigate a bug, only in which mitigations are applied.
Thoughts?
--David Kaplan
* Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-02-11 20:53 ` Kaplan, David
@ 2025-02-11 22:38 ` Josh Poimboeuf
0 siblings, 0 replies; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 22:38 UTC (permalink / raw)
To: Kaplan, David
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
On Tue, Feb 11, 2025 at 08:53:53PM +0000, Kaplan, David wrote:
> If I understand your proposal correctly, 'mitigate_disable_smt' means
> that the kernel may disable SMT if a vulnerability being mitigated
> requires it (yes?). I wonder if that should be 'mitigate_smt' with a
> 3-way selection of:
>
> 'on' (disable SMT if needed based on vulnerabilities)
> 'auto' (do not disable SMT but apply other existing SMT-based mitigations on relevant vulnerabilities)
> 'off' (do not apply any SMT related mitigations like STIBP)
>
> And this would not be used when selecting whether to mitigate a bug, only in which mitigations are applied.
>
> Thoughts?
Sounds good!
--
Josh
* RE: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls.
2025-02-11 18:41 ` Josh Poimboeuf
2025-02-11 18:54 ` Josh Poimboeuf
@ 2025-02-11 18:55 ` Kaplan, David
1 sibling, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-11 18:55 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 12:42 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based
> on attack vector controls.
>
>
> On Wed, Jan 08, 2025 at 02:25:01PM -0600, David Kaplan wrote:
> > diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> > index 88eba8e4c7fb..175dbbf9b06e 100644
> > --- a/arch/x86/kernel/cpu/bugs.c
> > +++ b/arch/x86/kernel/cpu/bugs.c
> > @@ -347,6 +347,75 @@ static void x86_amd_ssb_disable(void)
> > wrmsrl(MSR_AMD64_LS_CFG, msrval);
> > }
> >
> > +/*
> > + * Returns true if vulnerability should be mitigated based on the
> > + * selected attack vector controls
>
> This needs a period.
Ack
>
> > + *
> > + * See Documentation/admin-guide/hw-vuln/attack_vector_controls.rst
> > + */
> > +static bool __init should_mitigate_vuln(unsigned int bug)
> > +{
> > + switch (bug) {
> > + /*
> > + * The only spectre_v1 mitigations in the kernel are related to
> > + * SWAPGS protection on kernel entry. Therefore, protection is
> > + * only required for the user->kernel attack vector.
> > + */
>
> This comment isn't quite correct, there are things like
> array_index_nospec() and barrier_nospec() being used, but those aren't being
> controlled by bugs.c. They should at least be mentioned here.
Ack, it's really about the controllable mitigations; the *_nospec() functions are applied unconditionally. Will fix.
>
> > + case X86_BUG_SPECTRE_V1:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL);
> > +
> > + /*
> > + * Both spectre_v2 and srso may allow user->kernel or
> > + * guest->host attacks through branch predictor manipulation.
> > + */
>
> I don't think this comment adds anything, the code already makes this obvious.
Ok.
>
> > + case X86_BUG_SPECTRE_V2:
> > + case X86_BUG_SRSO:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
>
> This needs to be aligned:
>
> return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
>
> Also, aren't cross-thread attacks possible here, thus the need for STIBP? More
> questions about the cross-thread "vector" below, at the bottom.
>
> > + /*
> > + * spectre_v2_user refers to user->user or guest->guest branch
> > + * predictor attacks only. Other indirect branch predictor attacks
> > + * are covered by the spectre_v2 vulnerability.
> > + */
>
> The code is already self-evident IMO, I don't think the comment adds anything.
I'll beg to differ on this one; when I started looking at the code, I didn't find it obvious that spectre_v2 only referred to the user->kernel and guest->host vectors while spectre_v2_user handles the others. Additionally, I don't think it's immediately obvious from the name that spectre_v2_user handles guest->guest.
>
> > + case X86_BUG_SPECTRE_V2_USER:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
>
> Another alignment issue.
>
> > +
> > + /* L1TF is only possible as a guest->host attack */
>
> That's not quite correct, PTE inversion is also done to protect against the user-
> >kernel vector.
Yes good point, I'll fix this.
>
> Also, IIRC the full l1tf mitigation requires disabling SMT, does that not qualify as
> CPU_MITIGATE_CROSS_THREAD?
It does, but this is handled in l1tf_select_mitigation.
>
> > + case X86_BUG_L1TF:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST);
> > +
> > + /*
> > + * All the vulnerabilities below allow potentially leaking data
> > + * across address spaces. Therefore, mitigation is required for
> > + * any of these 4 attack vectors.
> > + */
> > + case X86_BUG_MDS:
> > + case X86_BUG_TAA:
> > + case X86_BUG_MMIO_STALE_DATA:
> > + case X86_BUG_RFDS:
> > + case X86_BUG_SRBDS:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST);
>
> Some of these also require disabling SMT for their complete mitigations?
Yes, these are handled in their respective functions.
>
> > + /*
> > + * GDS can potentially leak data across address spaces and
> > + * threads. Mitigation is required under all attack vectors.
> > + */
> > + case X86_BUG_GDS:
> > + return cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_USER_USER) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_GUEST) ||
> > + cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD);
>
> I'm confused by CPU_MITIGATE_CROSS_THREAD here, as the GDS mitigation
> doesn't seem to disable SMT?
>
> Am I just completely misunderstanding the meaning of
> CPU_MITIGATE_CROSS_THREAD?
>
> I assumed it's not a vector per se, but rather it means to force nosmt if one of the
> other enabled mitigations requires doing so for its "full"
> mitigation. But the implementation doesn't seem to match that.
>
> On the other hand if it really is considered to be its own vector, that doesn't make
> sense either, as "cross-thread attack" is really a subset of each of the other
> vectors. For example, a user->kernel attack can often be done either via syscall/irq
> or via cross-thread.
>
> So I'm really confused. Am I missing something?
>
I'll reply to your newer mail.
--David Kaplan
* [PATCH v3 22/35] x86/bugs: Add attack vector controls for mds
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (20 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 21/35] x86/bugs: Determine relevant vulnerabilities based on attack vector controls David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 23/35] x86/bugs: Add attack vector controls for taa David Kaplan
` (13 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if mds mitigation is required.
If cross-thread attack mitigations are required, disable SMT.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 175dbbf9b06e..298acb80d126 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -476,8 +476,12 @@ static void __init mds_select_mitigation(void)
if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off())
mds_mitigation = MDS_MITIGATION_OFF;
- if (mds_mitigation == MDS_MITIGATION_AUTO)
- mds_mitigation = MDS_MITIGATION_FULL;
+ if (mds_mitigation == MDS_MITIGATION_AUTO) {
+ if (should_mitigate_vuln(X86_BUG_MDS))
+ mds_mitigation = MDS_MITIGATION_FULL;
+ else
+ mds_mitigation = MDS_MITIGATION_OFF;
+ }
if (mds_mitigation == MDS_MITIGATION_FULL) {
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
@@ -506,7 +510,8 @@ static void __init mds_apply_mitigation(void)
if (mds_mitigation == MDS_MITIGATION_FULL) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
- (mds_nosmt || cpu_mitigations_auto_nosmt()))
+ (mds_nosmt || cpu_mitigations_auto_nosmt() ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
cpu_smt_disable(false);
}
}
--
2.34.1
* [PATCH v3 23/35] x86/bugs: Add attack vector controls for taa
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (21 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 22/35] x86/bugs: Add attack vector controls for mds David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-02-11 19:01 ` Josh Poimboeuf
2025-01-08 20:25 ` [PATCH v3 24/35] x86/bugs: Add attack vector controls for mmio David Kaplan
` (12 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if taa mitigation is required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 298acb80d126..af5aaa0397c7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -572,8 +572,12 @@ static void __init taa_select_mitigation(void)
return;
/* Microcode will be checked in taa_update_mitigation(). */
- if (taa_mitigation == TAA_MITIGATION_AUTO)
- taa_mitigation = TAA_MITIGATION_VERW;
+ if (taa_mitigation == TAA_MITIGATION_AUTO) {
+ if (should_mitigate_vuln(X86_BUG_TAA))
+ taa_mitigation = TAA_MITIGATION_VERW;
+ else
+ taa_mitigation = TAA_MITIGATION_OFF;
+ }
}
@@ -620,7 +624,8 @@ static void __init taa_apply_mitigation(void)
*/
setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
- if (taa_nosmt || cpu_mitigations_auto_nosmt())
+ if (taa_nosmt || cpu_mitigations_auto_nosmt() ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
cpu_smt_disable(false);
}
--
2.34.1
* Re: [PATCH v3 23/35] x86/bugs: Add attack vector controls for taa
2025-01-08 20:25 ` [PATCH v3 23/35] x86/bugs: Add attack vector controls for taa David Kaplan
@ 2025-02-11 19:01 ` Josh Poimboeuf
0 siblings, 0 replies; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 19:01 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:25:03PM -0600, David Kaplan wrote:
> @@ -620,7 +624,8 @@ static void __init taa_apply_mitigation(void)
> */
> setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
>
> - if (taa_nosmt || cpu_mitigations_auto_nosmt())
> + if (taa_nosmt || cpu_mitigations_auto_nosmt() ||
> + cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
> cpu_smt_disable(false);
There's a huge overlap between cpu_mitigations_auto_nosmt() and
CPU_MITIGATE_CROSS_THREAD.
IIUC, the main difference is that cpu_mitigations_auto_nosmt() selects
all the vectors whereas mitigate_cross_thread=on can be combined with
individual vectors.
Maybe we need a should_disable_smt() helper which checks both?
--
Josh
* [PATCH v3 24/35] x86/bugs: Add attack vector controls for mmio
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (22 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 23/35] x86/bugs: Add attack vector controls for taa David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 25/35] x86/bugs: Add attack vector controls for rfds David Kaplan
` (11 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vectors controls to determine if mmio mitigation is required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index af5aaa0397c7..4249a1f1524c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -676,9 +676,12 @@ static void __init mmio_select_mitigation(void)
return;
/* Microcode will be checked in mmio_update_mitigation(). */
- if (mmio_mitigation == MMIO_MITIGATION_AUTO)
- mmio_mitigation = MMIO_MITIGATION_VERW;
-
+ if (mmio_mitigation == MMIO_MITIGATION_AUTO) {
+ if (should_mitigate_vuln(X86_BUG_MMIO_STALE_DATA))
+ mmio_mitigation = MMIO_MITIGATION_VERW;
+ else
+ mmio_mitigation = MMIO_MITIGATION_OFF;
+ }
}
static void __init mmio_update_mitigation(void)
@@ -739,7 +742,8 @@ static void __init mmio_apply_mitigation(void)
if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
static_branch_enable(&mds_idle_clear);
- if (mmio_nosmt || cpu_mitigations_auto_nosmt())
+ if (mmio_nosmt || cpu_mitigations_auto_nosmt() ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
cpu_smt_disable(false);
}
--
2.34.1
* [PATCH v3 25/35] x86/bugs: Add attack vector controls for rfds
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (23 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 24/35] x86/bugs: Add attack vector controls for mmio David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 26/35] x86/bugs: Add attack vector controls for srbds David Kaplan
` (10 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if rfds mitigation is required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4249a1f1524c..d9b12c706fc0 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -783,8 +783,12 @@ static void __init rfds_select_mitigation(void)
if (rfds_mitigation == RFDS_MITIGATION_OFF)
return;
- if (rfds_mitigation == RFDS_MITIGATION_AUTO)
- rfds_mitigation = RFDS_MITIGATION_VERW;
+ if (rfds_mitigation == RFDS_MITIGATION_AUTO) {
+ if (should_mitigate_vuln(X86_BUG_RFDS))
+ rfds_mitigation = RFDS_MITIGATION_VERW;
+ else
+ rfds_mitigation = RFDS_MITIGATION_OFF;
+ }
}
static void __init rfds_update_mitigation(void)
--
2.34.1
* [PATCH v3 26/35] x86/bugs: Add attack vector controls for srbds
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (24 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 25/35] x86/bugs: Add attack vector controls for rfds David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 27/35] x86/bugs: Add attack vector controls for gds David Kaplan
` (9 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if srbds mitigation is required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index d9b12c706fc0..c6b395608c3f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -897,8 +897,14 @@ static void __init srbds_select_mitigation(void)
if (!boot_cpu_has_bug(X86_BUG_SRBDS))
return;
- if (srbds_mitigation == SRBDS_MITIGATION_AUTO)
- srbds_mitigation = SRBDS_MITIGATION_FULL;
+ if (srbds_mitigation == SRBDS_MITIGATION_AUTO) {
+ if (should_mitigate_vuln(X86_BUG_SRBDS))
+ srbds_mitigation = SRBDS_MITIGATION_FULL;
+ else {
+ srbds_mitigation = SRBDS_MITIGATION_OFF;
+ return;
+ }
+ }
/*
* Check to see if this is one of the MDS_NO systems supporting TSX that
--
2.34.1
* [PATCH v3 27/35] x86/bugs: Add attack vector controls for gds
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (25 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 26/35] x86/bugs: Add attack vector controls for srbds David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 28/35] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
` (8 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if gds mitigation is required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c6b395608c3f..9c9299b988d1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1058,8 +1058,14 @@ static void __init gds_select_mitigation(void)
gds_mitigation = GDS_MITIGATION_OFF;
/* Will verify below that mitigation _can_ be disabled */
- if (gds_mitigation == GDS_MITIGATION_AUTO)
- gds_mitigation = GDS_MITIGATION_FULL;
+ if (gds_mitigation == GDS_MITIGATION_AUTO) {
+ if (should_mitigate_vuln(X86_BUG_GDS))
+ gds_mitigation = GDS_MITIGATION_FULL;
+ else {
+ gds_mitigation = GDS_MITIGATION_OFF;
+ return;
+ }
+ }
/* No microcode */
if (!(x86_arch_cap_msr & ARCH_CAP_GDS_CTRL)) {
--
2.34.1
* [PATCH v3 28/35] x86/bugs: Add attack vector controls for spectre_v1
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (26 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 27/35] x86/bugs: Add attack vector controls for gds David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 29/35] x86/bugs: Add attack vector controls for retbleed David Kaplan
` (7 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if spectre_v1 mitigation is
required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 9c9299b988d1..41c8a9dad411 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1172,6 +1172,9 @@ static void __init spectre_v1_select_mitigation(void)
{
if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off())
spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+
+ if (!should_mitigate_vuln(X86_BUG_SPECTRE_V1))
+ spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
}
static void __init spectre_v1_apply_mitigation(void)
--
2.34.1
* [PATCH v3 29/35] x86/bugs: Add attack vector controls for retbleed
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (27 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 28/35] x86/bugs: Add attack vector controls for spectre_v1 David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 30/35] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
` (6 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if retbleed mitigation is
required.
Disable SMT if cross-thread protection is desired and STIBP is not
available.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 41c8a9dad411..430f89a5f66a 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1331,13 +1331,17 @@ static void __init retbleed_select_mitigation(void)
}
if (retbleed_mitigation == RETBLEED_MITIGATION_AUTO) {
- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
- if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
- retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
- else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
- boot_cpu_has(X86_FEATURE_IBPB))
- retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+ if (should_mitigate_vuln(X86_BUG_RETBLEED)) {
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
+ if (IS_ENABLED(CONFIG_MITIGATION_UNRET_ENTRY))
+ retbleed_mitigation = RETBLEED_MITIGATION_UNRET;
+ else if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY) &&
+ boot_cpu_has(X86_FEATURE_IBPB))
+ retbleed_mitigation = RETBLEED_MITIGATION_IBPB;
+ }
+ } else {
+ retbleed_mitigation = RETBLEED_MITIGATION_NONE;
}
}
}
@@ -1438,7 +1442,8 @@ static void __init retbleed_apply_mitigation(void)
}
if (mitigate_smt && !boot_cpu_has(X86_FEATURE_STIBP) &&
- (retbleed_nosmt || cpu_mitigations_auto_nosmt()))
+ (retbleed_nosmt || cpu_mitigations_auto_nosmt() ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD)))
cpu_smt_disable(false);
}
--
2.34.1
* [PATCH v3 30/35] x86/bugs: Add attack vector controls for spectre_v2_user
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (28 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 29/35] x86/bugs: Add attack vector controls for retbleed David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-02-11 19:03 ` Josh Poimboeuf
2025-01-08 20:25 ` [PATCH v3 31/35] x86/bugs: Add attack vector controls for bhi David Kaplan
` (5 subsequent siblings)
35 siblings, 1 reply; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if spectre_v2_user mitigation is
required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 430f89a5f66a..c1b60ffa3218 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1614,6 +1614,13 @@ spectre_v2_user_select_mitigation(void)
spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
break;
case SPECTRE_V2_USER_CMD_AUTO:
+ if (should_mitigate_vuln(X86_BUG_SPECTRE_V2_USER)) {
+ spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
+ spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
+ } else {
+ return;
+ }
+ break;
case SPECTRE_V2_USER_CMD_PRCTL:
spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
--
2.34.1
^ permalink raw reply related [flat|nested] 138+ messages in thread* Re: [PATCH v3 30/35] x86/bugs: Add attack vector controls for spectre_v2_user
2025-01-08 20:25 ` [PATCH v3 30/35] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
@ 2025-02-11 19:03 ` Josh Poimboeuf
2025-02-12 17:22 ` Kaplan, David
0 siblings, 1 reply; 138+ messages in thread
From: Josh Poimboeuf @ 2025-02-11 19:03 UTC (permalink / raw)
To: David Kaplan
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Wed, Jan 08, 2025 at 02:25:10PM -0600, David Kaplan wrote:
> @@ -1614,6 +1614,13 @@ spectre_v2_user_select_mitigation(void)
> spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
> break;
> case SPECTRE_V2_USER_CMD_AUTO:
> + if (should_mitigate_vuln(X86_BUG_SPECTRE_V2_USER)) {
> + spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
> + spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
> + } else {
> + return;
> + }
> + break;
Can just fallthrough in the should_mitigate_vuln() case?
> case SPECTRE_V2_USER_CMD_PRCTL:
> spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
> spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
--
Josh
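[Editor's sketch] Josh's fallthrough suggestion can be illustrated with a small userspace model of the switch in spectre_v2_user_select_mitigation(): when the AUTO case decides to mitigate, it simply falls through to the PRCTL case instead of duplicating its assignments. The enum and helper names below are illustrative stand-ins, not the kernel's.

```c
#include <stdbool.h>

/* Illustrative stand-ins for SPECTRE_V2_USER_* modes and commands. */
enum v2_user_mode { V2_USER_NONE, V2_USER_PRCTL };
enum v2_user_cmd  { CMD_AUTO, CMD_PRCTL };

/* AUTO either bails out (mitigation stays off) or falls through and
 * shares the PRCTL case's assignments, as suggested in the review. */
static enum v2_user_mode select_v2_user(enum v2_user_cmd cmd, bool should_mitigate)
{
	enum v2_user_mode mode = V2_USER_NONE;

	switch (cmd) {
	case CMD_AUTO:
		if (!should_mitigate)
			break;		/* leave mitigation disabled */
		/* fallthrough */
	case CMD_PRCTL:
		mode = V2_USER_PRCTL;
		break;
	}
	return mode;
}
```

This is the shape David adopted in the follow-up revision; the kernel additionally sets both the IBPB and STIBP modes in the shared case.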
* RE: [PATCH v3 30/35] x86/bugs: Add attack vector controls for spectre_v2_user
2025-02-11 19:03 ` Josh Poimboeuf
@ 2025-02-12 17:22 ` Kaplan, David
0 siblings, 0 replies; 138+ messages in thread
From: Kaplan, David @ 2025-02-12 17:22 UTC (permalink / raw)
To: Josh Poimboeuf
Cc: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Pawan Gupta,
Ingo Molnar, Dave Hansen, x86@kernel.org, H . Peter Anvin,
linux-kernel@vger.kernel.org
[AMD Official Use Only - AMD Internal Distribution Only]
> -----Original Message-----
> From: Josh Poimboeuf <jpoimboe@kernel.org>
> Sent: Tuesday, February 11, 2025 1:03 PM
> To: Kaplan, David <David.Kaplan@amd.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Borislav Petkov <bp@alien8.de>; Peter
> Zijlstra <peterz@infradead.org>; Pawan Gupta
> <pawan.kumar.gupta@linux.intel.com>; Ingo Molnar <mingo@redhat.com>; Dave
> Hansen <dave.hansen@linux.intel.com>; x86@kernel.org; H . Peter Anvin
> <hpa@zytor.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v3 30/35] x86/bugs: Add attack vector controls for
> spectre_v2_user
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Wed, Jan 08, 2025 at 02:25:10PM -0600, David Kaplan wrote:
> > @@ -1614,6 +1614,13 @@ spectre_v2_user_select_mitigation(void)
> > spectre_v2_user_stibp = SPECTRE_V2_USER_STRICT;
> > break;
> > case SPECTRE_V2_USER_CMD_AUTO:
> > + if (should_mitigate_vuln(X86_BUG_SPECTRE_V2_USER)) {
> > + spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
> > + spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
> > + } else {
> > + return;
> > + }
> > + break;
>
> Can just fallthrough in the should_mitigate_vuln() case?
Yeah, I can do that.
--David Kaplan
>
> > case SPECTRE_V2_USER_CMD_PRCTL:
> > spectre_v2_user_ibpb = SPECTRE_V2_USER_PRCTL;
> > spectre_v2_user_stibp = SPECTRE_V2_USER_PRCTL;
>
> --
> Josh
* [PATCH v3 31/35] x86/bugs: Add attack vector controls for bhi
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (29 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 30/35] x86/bugs: Add attack vector controls for spectre_v2_user David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 32/35] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
` (4 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
There are two BHI mitigations, one for SYSCALL and one for VMEXIT.
Split these up so they can be selected individually based on attack
vector.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 38 ++++++++++++++++++++++++++------------
1 file changed, 26 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c1b60ffa3218..57c762d86fca 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -1945,8 +1945,9 @@ static bool __init spec_ctrl_bhi_dis(void)
enum bhi_mitigations {
BHI_MITIGATION_OFF,
BHI_MITIGATION_AUTO,
- BHI_MITIGATION_ON,
- BHI_MITIGATION_VMEXIT_ONLY,
+ BHI_MITIGATION_FULL,
+ BHI_MITIGATION_VMEXIT,
+ BHI_MITIGATION_SYSCALL
};
static enum bhi_mitigations bhi_mitigation __ro_after_init =
@@ -1960,9 +1961,9 @@ static int __init spectre_bhi_parse_cmdline(char *str)
if (!strcmp(str, "off"))
bhi_mitigation = BHI_MITIGATION_OFF;
else if (!strcmp(str, "on"))
- bhi_mitigation = BHI_MITIGATION_ON;
+ bhi_mitigation = BHI_MITIGATION_FULL;
else if (!strcmp(str, "vmexit"))
- bhi_mitigation = BHI_MITIGATION_VMEXIT_ONLY;
+ bhi_mitigation = BHI_MITIGATION_VMEXIT;
else
pr_err("Ignoring unknown spectre_bhi option (%s)", str);
@@ -1975,8 +1976,17 @@ static void __init bhi_select_mitigation(void)
if (!boot_cpu_has(X86_BUG_BHI) || cpu_mitigations_off())
bhi_mitigation = BHI_MITIGATION_OFF;
- if (bhi_mitigation == BHI_MITIGATION_AUTO)
- bhi_mitigation = BHI_MITIGATION_ON;
+ if (bhi_mitigation == BHI_MITIGATION_AUTO) {
+ if (cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)) {
+ if (cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST))
+ bhi_mitigation = BHI_MITIGATION_FULL;
+ else
+ bhi_mitigation = BHI_MITIGATION_SYSCALL;
+ } else if (cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST))
+ bhi_mitigation = BHI_MITIGATION_VMEXIT;
+ else
+ bhi_mitigation = BHI_MITIGATION_OFF;
+ }
}
static void __init bhi_apply_mitigation(void)
@@ -1999,15 +2009,19 @@ static void __init bhi_apply_mitigation(void)
if (!IS_ENABLED(CONFIG_X86_64))
return;
- if (bhi_mitigation == BHI_MITIGATION_VMEXIT_ONLY) {
- pr_info("Spectre BHI mitigation: SW BHB clearing on VM exit only\n");
+ /* Mitigate KVM if guest->host protection is desired */
+ if (bhi_mitigation == BHI_MITIGATION_FULL ||
+ bhi_mitigation == BHI_MITIGATION_VMEXIT) {
setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
- return;
+ pr_info("Spectre BHI mitigation: SW BHB clearing on VM exit\n");
}
- pr_info("Spectre BHI mitigation: SW BHB clearing on syscall and VM exit\n");
- setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
- setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
+ /* Mitigate syscalls if user->kernel protection is desired */
+ if (bhi_mitigation == BHI_MITIGATION_FULL ||
+ bhi_mitigation == BHI_MITIGATION_SYSCALL) {
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
+ pr_info("Spectre BHI mitigation: SW BHB clearing on syscall\n");
+ }
}
static void __init spectre_v2_select_mitigation(void)
--
2.34.1
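[Editor's sketch] The AUTO resolution in bhi_select_mitigation() above picks one of four modes from two attack-vector queries. A minimal userspace model of that truth table, with booleans standing in for cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL) and cpu_mitigate_attack_vector(CPU_MITIGATE_GUEST_HOST):

```c
#include <stdbool.h>

/* Mirrors the patch's enum bhi_mitigations (AUTO already resolved). */
enum bhi_mode { BHI_OFF, BHI_FULL, BHI_VMEXIT, BHI_SYSCALL };

/* user_kernel: protect syscall entry; guest_host: protect VM exit.
 * Both -> FULL, one -> the matching partial mode, neither -> OFF. */
static enum bhi_mode bhi_resolve_auto(bool user_kernel, bool guest_host)
{
	if (user_kernel)
		return guest_host ? BHI_FULL : BHI_SYSCALL;
	if (guest_host)
		return BHI_VMEXIT;
	return BHI_OFF;
}
```

The apply side then enables X86_FEATURE_CLEAR_BHB_LOOP for FULL/SYSCALL and X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT for FULL/VMEXIT, matching the two pr_info() messages in the diff.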
* [PATCH v3 32/35] x86/bugs: Add attack vector controls for spectre_v2
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (30 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 31/35] x86/bugs: Add attack vector controls for bhi David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 33/35] x86/bugs: Add attack vector controls for l1tf David Kaplan
` (3 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if spectre_v2 mitigation is
required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 57c762d86fca..662573ad3e51 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2041,13 +2041,15 @@ static void __init spectre_v2_select_mitigation(void)
case SPECTRE_V2_CMD_NONE:
return;
- case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO:
+ if (!should_mitigate_vuln(X86_BUG_SPECTRE_V2))
+ break;
+ fallthrough;
+ case SPECTRE_V2_CMD_FORCE:
if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
mode = SPECTRE_V2_EIBRS;
break;
}
-
mode = spectre_v2_select_retpoline();
break;
--
2.34.1
* [PATCH v3 33/35] x86/bugs: Add attack vector controls for l1tf
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (31 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 32/35] x86/bugs: Add attack vector controls for spectre_v2 David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 34/35] x86/bugs: Add attack vector controls for srso David Kaplan
` (2 subsequent siblings)
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if l1tf mitigation is required.
Disable SMT if cross-thread attack vector option is selected.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 662573ad3e51..2e3b4d768d6b 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2780,10 +2780,15 @@ static void __init l1tf_select_mitigation(void)
}
if (l1tf_mitigation == L1TF_MITIGATION_AUTO) {
- if (cpu_mitigations_auto_nosmt())
- l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
- else
- l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+ if (!should_mitigate_vuln(X86_BUG_L1TF))
+ l1tf_mitigation = L1TF_MITIGATION_OFF;
+ else {
+ if (cpu_mitigations_auto_nosmt() ||
+ cpu_mitigate_attack_vector(CPU_MITIGATE_CROSS_THREAD))
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
+ else
+ l1tf_mitigation = L1TF_MITIGATION_FLUSH;
+ }
}
}
--
2.34.1
* [PATCH v3 34/35] x86/bugs: Add attack vector controls for srso
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (32 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 33/35] x86/bugs: Add attack vector controls for l1tf David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
2025-01-08 20:25 ` [PATCH v3 35/35] x86/pti: Add attack vector controls for pti David Kaplan
[not found] ` <20250110083627.xankiqhczr7ksldv@desk>
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Use attack vector controls to determine if srso mitigation is required.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/kernel/cpu/bugs.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 2e3b4d768d6b..91e00d4de8df 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2922,8 +2922,14 @@ static void __init srso_select_mitigation(void)
if (srso_mitigation == SRSO_MITIGATION_NONE)
return;
- if (srso_mitigation == SRSO_MITIGATION_AUTO)
- srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+ if (srso_mitigation == SRSO_MITIGATION_AUTO) {
+ if (should_mitigate_vuln(X86_BUG_SRSO))
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+ else {
+ srso_mitigation = SRSO_MITIGATION_NONE;
+ return;
+ }
+ }
if (has_microcode) {
/*
--
2.34.1
* [PATCH v3 35/35] x86/pti: Add attack vector controls for pti
2025-01-08 20:24 [PATCH v3 00/35] x86/bugs: Attack vector controls David Kaplan
` (33 preceding siblings ...)
2025-01-08 20:25 ` [PATCH v3 34/35] x86/bugs: Add attack vector controls for srso David Kaplan
@ 2025-01-08 20:25 ` David Kaplan
[not found] ` <20250110083627.xankiqhczr7ksldv@desk>
35 siblings, 0 replies; 138+ messages in thread
From: David Kaplan @ 2025-01-08 20:25 UTC (permalink / raw)
To: Thomas Gleixner, Borislav Petkov, Peter Zijlstra, Josh Poimboeuf,
Pawan Gupta, Ingo Molnar, Dave Hansen, x86, H . Peter Anvin
Cc: linux-kernel
Disable PTI mitigation if user->kernel attack vector mitigations are
disabled.
Signed-off-by: David Kaplan <david.kaplan@amd.com>
---
arch/x86/mm/pti.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
index 5f0d579932c6..132840528d55 100644
--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -38,6 +38,7 @@
#include <asm/desc.h>
#include <asm/sections.h>
#include <asm/set_memory.h>
+#include <asm/bugs.h>
#undef pr_fmt
#define pr_fmt(fmt) "Kernel/User page tables isolation: " fmt
@@ -94,7 +95,8 @@ void __init pti_check_boottime_disable(void)
if (pti_mode == PTI_FORCE_ON)
pti_print_if_secure("force enabled on command line.");
- if (pti_mode == PTI_AUTO && !boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
+ if (pti_mode == PTI_AUTO && (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN) ||
+ !cpu_mitigate_attack_vector(CPU_MITIGATE_USER_KERNEL)))
return;
setup_force_cpu_cap(X86_FEATURE_PTI);
--
2.34.1
[parent not found: <20250110083627.xankiqhczr7ksldv@desk>]
* Re: [PATCH v3 00/35] x86/bugs: Attack vector controls
[not found] ` <20250110083627.xankiqhczr7ksldv@desk>
@ 2025-01-10 15:39 ` Borislav Petkov
[not found] ` <20250110171410.ttbt7cohzdjwi4hk@desk>
0 siblings, 1 reply; 138+ messages in thread
From: Borislav Petkov @ 2025-01-10 15:39 UTC (permalink / raw)
To: Pawan Gupta, David Kaplan
Cc: Thomas Gleixner, Peter Zijlstra, Josh Poimboeuf, Ingo Molnar,
Dave Hansen, x86, H . Peter Anvin, linux-kernel
On Fri, Jan 10, 2025 at 12:36:27AM -0800, Pawan Gupta wrote:
> Below patch does some of the above for spectre_v1 mitigation. Please share
> your feedback if this is a good direction to take.
This has come up in the past. My problem with looping over an array of
function pointers and calling them is debuggability: it is not as easy as it
is currently with plain functions.
So we'd need a well-integrated way to enable debugging of what gets called when.
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index ad63b5678250..d719450f89c2 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -53,8 +53,24 @@
> * mitigation option.
> */
>
> -static void __init spectre_v1_select_mitigation(void);
> -static void __init spectre_v1_apply_mitigation(void);
> +struct cpu_mitigation {
> + unsigned int x86_bug; /* X86_BUG_* */
> + int mitigation; /* AUTO, FULL, NONE etc. */
> + int mitigates; /* Attack vectors to mitigate e.g. user->kernel */
This should be an enum.
Even better: you can group those arrays by attack vectors so you simply run
the respective array when you want to enable an attack vector. And then it is
obvious and self-documenting.
> + char **strings; /* sysfs status strings */
> + void (*parse_cmdline) (struct cpu_mitigation *m);
> + void (*select_mitigation) (struct cpu_mitigation *m);
> + void (*update_mitigation) (struct cpu_mitigation *m);
> + void (*apply_mitigation) (struct cpu_mitigation *m);
> + void (*ap_init_mitigation) (struct cpu_mitigation *m); /* Mitigation during secondary CPU init e.g. MSR writes */
> + void (*smt_update_mitigation) (struct cpu_mitigation *m); /* Mitigation update on SMT toggle */
> + void (*sysfs_show_mitigation) (char *buf);
> + void (*s3_suspend) (struct cpu_mitigation *m); /* Mitigation quirks on S3 suspend */
> + void (*s3_resume) (struct cpu_mitigation *m); /* Mitigation quirks on S3 resume */
Too many "mitigation" words in there. Perhaps drop "mitigation" from the
function ptr names.
And no side comments pls - all ontop.
Otherwise, once the dust settles here, this could be a nice cleanup.
Thx.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
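[Editor's sketch] The table-driven shape discussed in this subthread, grouping mitigation descriptors by attack vector so enabling a vector just walks its array, can be modeled in userspace as below. All names and the two-callback structure are illustrative assumptions; Pawan's actual proposal carries many more phase callbacks.

```c
#include <stdio.h>
#include <stdbool.h>

/* One descriptor per mitigation; callbacks cover the boot phases. */
struct cpu_mitigation {
	const char *name;
	void (*select)(struct cpu_mitigation *m);
	void (*apply)(struct cpu_mitigation *m);
	bool enabled;
};

static void generic_select(struct cpu_mitigation *m) { m->enabled = true; }

static void generic_apply(struct cpu_mitigation *m)
{
	if (m->enabled)
		printf("%s: mitigation applied\n", m->name);
}

/* One array per attack vector, per the grouping suggestion: running the
 * user->kernel vector means walking exactly this array, which also makes
 * the call order obvious when debugging. */
static struct cpu_mitigation user_kernel_mitigations[] = {
	{ "spectre_v1", generic_select, generic_apply, false },
	{ "pti",        generic_select, generic_apply, false },
};

static int run_vector(struct cpu_mitigation *v, int n)
{
	int applied = 0;

	for (int i = 0; i < n; i++) {
		v[i].select(&v[i]);
		v[i].apply(&v[i]);
		applied += v[i].enabled;
	}
	return applied;
}
```

The debuggability concern raised above could be addressed by logging each descriptor's name as its callbacks run, as run_vector() trivially could.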